Dataset columns: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
29,212,291
https://en.wikipedia.org/wiki/PKD%20domain
The PKD (Polycystic Kidney Disease) domain was first identified in the polycystic kidney disease protein, polycystin-1 (PKD1 gene), and contains an Ig-like fold consisting of a beta-sandwich of seven strands in two sheets with a Greek key topology, although some members have additional strands. Polycystin-1 is a large cell-surface glycoprotein involved in adhesive protein–protein and protein–carbohydrate interactions; however, it is not clear whether the PKD domain mediates any of these interactions. PKD domains are also found in other proteins, usually in the extracellular parts of proteins involved in interactions with other proteins. For example, domains with a PKD-type fold are found in archaeal S-layer proteins that protect the cell from extreme environments, and in the human receptor SorCS2. Human proteins containing this domain: GPNMB; PKD1; PKD1L1; PMEL; SORCS1; SORCS2; SORCS3 References Protein domains
PKD domain
[ "Biology" ]
231
[ "Protein domains", "Protein classification" ]
29,213,543
https://en.wikipedia.org/wiki/ESPRESSO
ESPRESSO (Echelle Spectrograph for Rocky Exoplanet- and Stable Spectroscopic Observations) is a third-generation, fiber-fed, cross-dispersed, echelle spectrograph mounted on the European Southern Observatory's Very Large Telescope (VLT). The unit saw its first light with one VLT Unit Telescope in December 2017 and first light with all four VLT units in February 2018. ESPRESSO is the successor of a line of echelle spectrometers that includes CORAVEL, Elodie, Coralie, and HARPS. It measures changes in the light spectrum with great sensitivity, and is being used to search for Earth-size rocky exoplanets via the radial velocity method. For example, Earth induces a radial-velocity variation of 9 cm/s on the Sun; this gravitational "wobble" causes minute variations in the color of sunlight, invisible to the human eye but detectable by the instrument. The telescope light is fed to the instrument, located in the VLT Combined-Coudé Laboratory 70 meters away from the telescopes, where the light from up to four unit telescopes of the VLT can be combined. Sensitivity ESPRESSO builds on the foundations laid by the High Accuracy Radial Velocity Planet Searcher (HARPS) instrument at the 3.6-metre telescope at ESO's La Silla Observatory. ESPRESSO benefits not only from the much larger combined light-collecting capacity of the four 8.2-metre VLT Unit Telescopes, but also from improvements in stability and calibration accuracy made possible by laser frequency comb technology. The requirement is to reach a precision of 10 cm/s, but the design goal is a precision of a few cm/s. This would mean a large step forward over current radial-velocity spectrographs like ESO's HARPS. The HARPS instrument can attain a precision of 97 cm/s (3.5 km/h), with an effective precision of the order of 30 cm/s, making it one of only two spectrographs worldwide with such accuracy. ESPRESSO would greatly exceed this capability, making detection of Earth-size planets from ground-based instruments possible. Commissioning of ESPRESSO at the VLT started in late 2017. The instrument is capable of operating in 1-UT mode (using one of the telescopes) and in 4-UT mode. In 4-UT mode, in which all four 8-m telescopes are combined incoherently to form the equivalent of a 16-m telescope, the spectrograph can detect extremely faint objects. For example, for G2V-type stars: Rocky planets around stars as faint as V ≈ 9 (in 1-UT mode) Neptune-mass planets around stars as faint as V ≈ 12 (in 4-UT mode) Earth-size rocky planets around stars as faint as V ≈ 9 (CODEX on the E-ELT) The best-suited candidate stars for ESPRESSO are non-active, non-rotating, quiet G dwarfs to red dwarfs. It operates at peak efficiency for spectral types up to M4. Instrument For calibration, ESPRESSO uses a laser frequency comb (LFC), with two ThAr lamps as backup. It features three instrumental modes: singleHR, singleUHR and multiMR. In the singleHR mode, ESPRESSO can be fed by any of the four UTs. Status All design work was completed and finalised by April 2013, with the manufacturing phase of the project commencing thereafter. ESPRESSO was tested on June 3, 2016. ESPRESSO first light occurred on September 25, 2016, during which various objects were observed, among them the star 60 Sgr A. After being shipped to Chile and installed at the VLT, ESPRESSO saw first light there on 27 November 2017, in 1-UT mode, observing the star Tau Ceti; the first observation in the 4-UT mode followed on February 3, 2018.
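As a rough cross-check of the 9 cm/s figure quoted above, the sketch below evaluates the standard radial-velocity semi-amplitude formula (not taken from this article) for an Earth-mass planet on a one-year circular orbit around a Sun-like star; the physical constants are approximate and the example is illustrative only.

```python
import math

# Standard radial-velocity semi-amplitude formula for a planet on a circular,
# edge-on orbit: K = (2*pi*G/P)**(1/3) * m_p*sin(i) / (M_star + m_p)**(2/3).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
M_EARTH = 5.972e24   # Earth mass, kg
YEAR = 3.156e7       # one year, s

def rv_semi_amplitude(m_planet, m_star, period, inclination_deg=90.0, ecc=0.0):
    """Stellar reflex radial-velocity semi-amplitude K, in m/s."""
    sin_i = math.sin(math.radians(inclination_deg))
    return ((2.0 * math.pi * G / period) ** (1.0 / 3.0)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2.0 / 3.0)
            / math.sqrt(1.0 - ecc ** 2))

# Earth orbiting the Sun: the ~9 cm/s "wobble" quoted above.
K = rv_semi_amplitude(M_EARTH, M_SUN, YEAR)
print(f"K = {100.0 * K:.1f} cm/s")   # prints roughly 8.9 cm/s
```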
ESPRESSO was opened to the astronomical community in the 1-UT mode (a single telescope used) and has been producing scientific data since October 24, 2018. On quiet stars it has already demonstrated a radial-velocity precision of 25 cm/s over a full night. However, there have been some problems; for example, the light-collecting efficiency was around 30% lower than expected and required. Some fine-tuning, including replacement of the parts causing the efficiency problem and subsequent re-testing, was therefore carried out on the instrument before the full 4-UT mode was opened to the scientific community in April 2019. A problem was also discovered in the ESPRESSO charge-coupled device controllers (digital imaging hardware), where a differential nonlinearity issue reduced the obtainable resolution more severely than previously feared. The ESO detector team that determined the source of the problem is currently working on a new version of the associated hardware in order to remedy this hopefully temporary setback. On August 29, 2019, the ESPRESSO ETC was updated to reflect the gain in transmission after the technical mission of July. This gain was, on average, ≈50% in the UHR and HR modes and ≈40% in the MR mode. As of April 6, 2020, the red radial velocity detector has, at least for a very short time, achieved ≈10 cm/s precision, while the blue detector has so far only managed ≈60 cm/s. Due to limited spectral coverage and a lack of reliability, the laser frequency comb (LFC) is currently not integrated into the telescope, and for now complete wavelength calibration has to rely on the two backup ThAr lamps; the resulting radial velocity measurements are limited by photon noise and stellar jitter and so are less precise than expected. The ESPRESSO operator and detector teams are working to characterize and correct the problem, with a dedicated mission expected to take place during 2020. On May 24, 2020, a team led by A. Suárez Mascareño confirmed the existence of the exoplanet Proxima b, finding it to be about 1.17 times the mass of Earth, smaller than the older estimate of 1.3 times. They also suggested it is located in the habitable zone of its star, which it orbits in 11.2 days. ESPRESSO achieved an accuracy of 26 cm/s, about three times better than the accuracy obtained with HARPS. They also found a second signal in the data that could be of planetary origin, with a semi-amplitude of only 40 cm/s and a 5.15-day period. On August 28, 2020, it was announced that minimal science operations were planned to resume at the Paranal Observatory in the coming weeks, following a 5-month suspension due to the COVID-19 pandemic. As of June 11, 2021, there is still an ongoing issue with the blue cryostat detector caused by temperature instabilities, and there has been a communication problem between the Atmospheric Dispersion Corrector and the rest of the instrument; these issues are currently reducing the detection resolution achievable with the instrument. A major instrument intervention is scheduled between May 1 and May 16, 2022, and the instrument will be out of operations from May 1 until around May 23. After the intervention, an improvement in the overall instrument performance and in the radial velocity stability, particularly in the blue detector, is expected. As a result of the instrument intervention, the blue cryostat stability has dramatically improved.
However, the intervention changed the cross-dispersion and dispersion direction positions (in both the x and y directions) on the red and blue cryostat detectors, so combining data from different pixels to produce a focused image has become problematic in the MR4x2 mode and the new HR4x2 mode. This problem should be fixed in the new pipeline version, i.e. in an upcoming software update. Scientific objectives The main scientific objectives for ESPRESSO are: The measurement of high-precision radial velocities of solar-type stars to search for rocky planets in the habitable zone of their star. The measurement of the variation of the physical constants. The analysis of the chemical composition of stars in nearby galaxies. Consortium ESPRESSO is being developed by a consortium consisting of the European Southern Observatory (ESO) and seven scientific institutes: Centre for Astrophysics of the University of Porto (Portugal) Faculdade de Ciências da Universidade de Lisboa, CAAUL & LOLS (Portugal) Trieste Astronomical Observatory (Italy) Brera Astronomical Observatory (Italy) Instituto de Astrofísica de Canarias (Spain) Physics Institute of the University of Bern (Switzerland) University of Geneva (Switzerland) Institute of Astrophysics and Space Sciences (Portugal) The principal investigator is Francesco Pepe. ESPRESSO specifications Radial velocity comparison tables MK-type stars with planets in the habitable zone See also CORALIE spectrograph Doppler spectroscopy ELODIE spectrograph EXPRES spectrograph HIRES spectrograph List of extrasolar planets SOPHIE échelle spectrograph References External links ESPRESSO at eso.org ESPRESSO at unige.ch Astronomical instruments Telescope instruments Exoplanet search projects Spectrographs
ESPRESSO
[ "Physics", "Chemistry", "Astronomy" ]
1,908
[ "Exoplanet search projects", "Telescope instruments", "Spectrum (physical sciences)", "Spectrographs", "Astronomical instruments", "Astronomy projects", "Spectroscopy" ]
1,461,205
https://en.wikipedia.org/wiki/Argon%E2%80%93argon%20dating
Argon–argon (or 40Ar/39Ar) dating is a radiometric dating method invented to supersede potassium–argon (K/Ar) dating in accuracy. The older method required splitting samples into two for separate potassium and argon measurements, while the newer method requires only one rock fragment or mineral grain and uses a single measurement of argon isotopes. 40Ar/39Ar dating relies on neutron irradiation from a nuclear reactor to convert a stable form of potassium (39K) into the radioactive 39Ar. As long as a standard of known age is co-irradiated with unknown samples, it is possible to use a single measurement of argon isotopes to calculate the 40K/40Ar* ratio, and thus to calculate the age of the unknown sample. 40Ar* refers to the radiogenic 40Ar, i.e. the 40Ar produced from radioactive decay of 40K. 40Ar* does not include atmospheric argon adsorbed to the surface or inherited through diffusion, and its calculated value is derived from measuring the 36Ar (which is assumed to be of atmospheric origin) and assuming that 40Ar is found in a constant ratio to 36Ar in atmospheric gases. Method The sample is generally crushed and single crystals of a mineral or fragments of rock are hand-selected for analysis. These are then irradiated to produce 39Ar from 39K via the (n,p) reaction 39K(n,p)39Ar. The sample is then degassed in a high-vacuum mass spectrometer via a laser or resistance furnace. Heating causes the crystal structure of the mineral (or minerals) to degrade, and, as the sample melts, trapped gases are released. The gas may include atmospheric gases, such as carbon dioxide, water, nitrogen, and radiogenic gases like argon and helium, generated from regular radioactive decay over geologic time. The abundance of 40Ar* increases with the age of the sample, though the rate of increase decays exponentially with the half-life of 40K, which is 1.248 billion years. Age equation The age of a sample is given by the age equation t = (1/λ) ln(1 + J R), where λ is the radioactive decay constant of 40K (approximately 5.5 × 10−10 year−1, corresponding to a half-life of approximately 1.25 billion years), J is the J-factor (a parameter associated with the irradiation process), and R is the 40Ar*/39Ar ratio. The J factor relates to the fluence of the neutron bombardment during the irradiation process; a denser flow of neutron particles will convert more atoms of 39K to 39Ar than a less dense one. Relative dating only The 40Ar/39Ar method only measures relative dates. In order for an age to be calculated by the 40Ar/39Ar technique, the J parameter must be determined by irradiating the unknown sample along with a sample of known age as a standard. Because this (primary) standard ultimately cannot be determined by 40Ar/39Ar, it must first be determined by another dating method. The method most commonly used to date the primary standard is the conventional K/Ar technique. An alternative method of calibrating the standard used is astronomical tuning (also known as orbital tuning), which arrives at a slightly different age. Applications The primary use for 40Ar/39Ar geochronology is dating metamorphic and igneous minerals. 40Ar/39Ar is unlikely to provide the age of intrusions of granite, as the age typically reflects the time when a mineral cooled through its closure temperature. However, in a metamorphic rock that has not exceeded its closure temperature, the age likely dates the crystallization of the mineral. Dating of movement on fault systems is also possible with the 40Ar/39Ar method.
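As a minimal numeric sketch of the age equation t = (1/λ) ln(1 + J R) given above, the following assumes hypothetical measurements: the standard age, the measured ratios and the decay-constant value are illustrative only, not taken from this article.

```python
import math

LAMBDA_K40 = 5.543e-10   # total decay constant of 40K, per year (commonly used value)

def ar_ar_age(J, R, lam=LAMBDA_K40):
    """40Ar/39Ar age in years: t = (1/lambda) * ln(1 + J*R)."""
    return math.log(1.0 + J * R) / lam

def j_factor(standard_age, standard_R, lam=LAMBDA_K40):
    """Recover the irradiation parameter J from a co-irradiated standard of known age."""
    return (math.exp(lam * standard_age) - 1.0) / standard_R

# Hypothetical numbers, for illustration only: a standard of known age 28.2 Myr
# measured R = 10.0 in this irradiation, and an unknown sample measured R = 250.
J = j_factor(28.2e6, 10.0)
print(f"J = {J:.5f}, unknown sample age = {ar_ar_age(J, 250.0) / 1e6:.0f} Myr")
```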
Different minerals have different closure temperatures; biotite is ~300°C, muscovite is about 400°C and hornblende has a closure temperature of ~550°C. Thus, a granite containing all three minerals will record three different "ages" of emplacement as it cools down through these closure temperatures. Thus, although a crystallization age is not recorded, the information is still useful in constructing the thermal history of the rock. Dating minerals may provide age information on a rock, but assumptions must be made. Minerals usually only record the last time they cooled down below the closure temperature, and this may not represent all of the events which the rock has undergone, and may not match the age of intrusion. Thus, discretion and interpretation of age dating is essential. 40Ar/39Ar geochronology assumes that a rock retains all of its 40Ar after cooling past the closing temperature and that this was properly sampled during analysis. This technique allows the errors involved in K-Ar dating to be checked. Argon–argon dating has the advantage of not requiring determinations of potassium. Modern methods of analysis allow individual regions of crystals to be investigated. This method is important as it allows crystals forming and cooling during different events to be identified. Recalibration One problem with argon-argon dating has been a slight discrepancy with other methods of dating. Work by Kuiper et al. reports that a correction of 0.65% is needed. Thus the Cretaceous–Paleogene extinction (when the dinosaurs died out)—previously dated at 65.0 or 65.5 million years ago—is more accurately dated to 66.0-66.1 Ma. See also Grenville Turner, inventor of the technique Berkeley Geochronology Center References External links WiscAr Geochronology Laboratory, University of Wisconsin-Madison UC Berkeley press release: "Precise dating of the destruction of Pompeii proves argon-argon method can reliably date rocks as young as 2,000 years" New Mexico Geochronology Research Laboratory Argon Isotope Facility of the Scottish Universities Environmental Research Council Open University Ar/Ar and Noble Gas Laboratory Argon Laboratory / Australian National University Radiometric dating Argon
Argon–argon dating
[ "Chemistry" ]
1,236
[ "Radiometric dating", "Radioactivity" ]
1,461,217
https://en.wikipedia.org/wiki/Coombs%20test
The direct and indirect Coombs tests, also known as antiglobulin test (AGT), are blood tests used in immunohematology. The direct Coombs test detects antibodies that are stuck to the surface of the red blood cells. Since these antibodies sometimes destroy red blood cells they can cause anemia; this test can help clarify the condition. The indirect Coombs test detects antibodies that are floating freely in the blood. These antibodies could act against certain red blood cells; the test can be carried out to diagnose reactions to a blood transfusion. The direct Coombs test is used to test for autoimmune hemolytic anemia, a condition where the immune system breaks down red blood cells, leading to anemia. The direct Coombs test is used to detect antibodies or complement proteins attached to the surface of red blood cells. To perform the test, a blood sample is taken and the red blood cells are washed (removing the patient's plasma and unbound antibodies from the red blood cells) and then incubated with anti-human globulin ("Coombs reagent"). If the red cells then agglutinate, the test is positive, a visual indication that antibodies or complement proteins are bound to the surface of red blood cells and may be causing destruction of those cells. The indirect Coombs test is used in prenatal testing of pregnant women and in testing prior to a blood transfusion. The test detects antibodies against foreign red blood cells. In this case, serum is extracted from a blood sample taken from the patient. The serum is incubated with foreign red blood cells of known antigenicity. Finally, anti-human globulin is added. If agglutination occurs, the indirect Coombs test is positive. Mechanism The two Coombs tests are based on anti-human antibodies binding to human antibodies, commonly IgG or IgM. These anti-human antibodies are produced by plasma cells of non-human animals after immunizing them with human plasma. Additionally, these anti-human antibodies will also bind to human antibodies that may be fixed onto antigens on the surface of red blood cells (RBCs). In the appropriate test tube conditions, this can lead to agglutination of RBCs and allowing for visualisation of the resulting clumps of RBCs. If clumping is seen, the Coombs test is positive; if not, the Coombs test is negative. Common clinical uses of the Coombs test include the preparation of blood for transfusion in cross-matching, atypical antibodies in the blood plasma of pregnant women as part of antenatal care, and detection of antibodies for the diagnosis of immune-mediated hemolytic anemias. Coombs tests are performed using RBCs or serum (direct or indirect, respectively) from venous whole blood samples which are taken from patients by venipuncture. The venous blood is taken to a laboratory (or blood bank), where trained scientific technical staff do the Coombs tests. The clinical significance of the result is assessed by the physician who requested the Coombs test, perhaps with assistance from a laboratory-based hematologist. Direct Coombs test The direct Coombs test, also referred to as the direct antiglobulin test (DAT), is used to detect if antibodies or complement system factors have bound to RBCs surface antigens. The DAT is not required for pre-transfusion testing but may be carried out by some laboratories. Before transfusion, an indirect Coombs test is often done. Uses The direct Coombs test is used clinically when immune-mediated hemolytic anemia (antibody-mediated destruction of RBCs) is suspected. 
A positive Coombs test indicates that an immune mechanism is attacking the patient's RBCs. This mechanism could be autoimmunity, alloimmunity or a drug-induced immune-mediated mechanism. Examples of alloimmune hemolysis Hemolytic disease of the newborn (also known as HDN or erythroblastosis fetalis) Rh D hemolytic disease of the newborn (also known as Rh disease) ABO hemolytic disease of the newborn (the direct Coombs test may only be weakly positive) Anti-Kell hemolytic disease of the newborn Rh c hemolytic disease of the newborn Rh E hemolytic disease of the newborn Other blood group incompatibility (RhC, Rhe, Kidd, Duffy, Lewis, MN, P and others) Alloimmune hemolytic transfusion reactions Examples of autoimmune hemolysis/immunohemolytic hemolysis Warm antibody autoimmune hemolytic anemia Idiopathic Systemic lupus erythematosus Evans' syndrome (antiplatelet antibodies and hemolytic antibodies) Cold antibody immunohemolytic anemia Idiopathic cold hemagglutinin syndrome Waldenström's macroglobulinemia Infectious mononucleosis Paroxysmal cold hemoglobinuria (rare) Drug-induced immune-mediated hemolysis Methyldopa (IgG mediated type II hypersensitivity) Penicillin (high dose) Quinidine (IgM mediated activation of classical complement pathway and Membrane attack complex, MAC) (A memory device to remember that the DAT tests the RBCs and is used to test infants for haemolytic disease of the newborn is: Rh Disease; R = RBCs, D = DAT.) Laboratory The patient's RBCs are washed (removing the patient's own serum) and then centrifuged with antihuman globulin (also known as Coombs reagent). If immunoglobulin or complement factors have been fixed on to the RBC surface in-vivo, the antihuman globulin will agglutinate the RBCs and the direct Coombs test will be positive. (A visual representation of a positive direct Coombs test is shown in the upper half of the schematic). Indirect Coombs test The indirect Coombs test, also referred to as the indirect antiglobulin test (IAT), is used to detect in-vitro antibody-antigen reactions. It is used to detect very low concentrations of antibodies present in a patient's plasma/serum prior to a blood transfusion. In antenatal care, the IAT is used to screen pregnant women for antibodies that may cause hemolytic disease of the newborn. The IAT can also be used for compatibility testing, antibody identification, RBC phenotyping, and titration studies. Uses Blood transfusion preparation The indirect Coombs test is used to screen for antibodies in the preparation of blood for blood transfusion. The donor's and recipient's blood must be ABO and Rh D compatible. Donor blood for transfusion is also screened for infections in separate processes. Antibody screening A blood sample from the recipient and a blood sample from every unit of donor blood are screened for antibodies with the indirect Coombs test. Each sample is incubated against a wide range of RBCs that together exhibit a full range of surface antigens (i.e. blood types). Cross matching The indirect Coombs test is used to test a sample of the recipient's serum for antibodies against a sample of the blood donor's RBCs. This is sometimes called cross-matching blood. Antenatal antibody screening The indirect Coombs test is used to screen pregnant women for IgG antibodies that are likely to pass through the placenta into the fetal blood and cause haemolytic disease of the newborn. Laboratory method The IAT is a two-stage test.
(A cross match is shown visually in the lower half of the schematic as an example of an indirect Coombs test). First stage Nonpatient, washed red blood cells (RBCs) with known antigens are incubated with patient serum containing unknown antibody content. If the serum contains antibodies to antigens on the RBC surface, the antibodies will bind to the surface of the RBCs. Second stage The RBCs are washed three or four times with isotonic saline solution and then incubated with antihuman globulin. If antibodies have bound to RBC surface antigens in the first stage, RBCs will agglutinate when incubated with the antihuman globulin (also known as Coombs reagent) in this stage, and the indirect Coombs test will be positive. Titrations By diluting a serum containing antibodies, the quantity of antibody in the serum can be gauged. This is done by performing serial dilutions of the serum and finding the maximum dilution of test serum that is able to produce agglutination of relevant RBCs. Coombs reagent Coombs reagent (also known as Coombs antiglobulin or antihuman globulin) is used in both the direct Coombs test and the indirect Coombs test. Coombs reagent is antihuman globulin. It is made by injecting human globulin into animals, which produce polyclonal antibodies specific for human immunoglobulins and human complement system factors. More specific Coombs reagents or monoclonal antibodies can be used. Enhancement media Both IgM and IgG antibodies bind strongly with their complementary antigens. IgG antibodies are most reactive at 37°C. IgM antibodies are easily detected in saline at room temperature, as IgM antibodies are able to bridge between RBCs owing to their large size, efficiently creating what is seen as agglutination. IgG antibodies are smaller and require assistance to bridge well enough to form a visual agglutination reaction. Reagents used to enhance IgG detection are referred to as potentiators. RBCs have a net negative charge called zeta potential which causes them to have a natural repulsion for one another. Potentiators reduce the zeta potential of RBC membranes. Common potentiators include low ionic strength solution (LISS), albumin, polyethylene glycol (PEG), and proteolytic enzymes. History The Coombs test was first described in 1945 by Cambridge immunologists Robin Coombs (after whom it is named), Arthur Mourant and Rob Race. Historically, it was done in test tubes. Today, it is commonly done using automated solid phase or gel technology. References External links Coombs' test- Medlineplus.org. Drugs that cause haemolytic anemia - Merck Manual. Transfusion medicine Immunologic tests Blood tests
Coombs test
[ "Chemistry", "Biology" ]
2,252
[ "Blood tests", "Chemical pathology", "Immunologic tests" ]
1,461,265
https://en.wikipedia.org/wiki/Split-quaternion
In abstract algebra, the split-quaternions or coquaternions form an algebraic structure introduced by James Cockle in 1849 under the latter name. They form an associative algebra of dimension four over the real numbers. After the introduction in the 20th century of coordinate-free definitions of rings and algebras, it was proved that the algebra of split-quaternions is isomorphic to the ring of the 2×2 real matrices. So the study of split-quaternions can be reduced to the study of real matrices, and this may explain why there are few mentions of split-quaternions in the mathematical literature of the 20th and 21st centuries. Definition The split-quaternions are the linear combinations (with real coefficients) of four basis elements 1, i, j, k that satisfy the following product rules: i² = −1, j² = 1, k² = 1, ijk = 1. By associativity, these relations imply ij = k, jk = −i, ki = j, and also ji = −k, kj = i, ik = −j. So, the split-quaternions form a real vector space of dimension four with 1, i, j, k as a basis. They also form a noncommutative ring, by extending the above product rules by distributivity to all split-quaternions. Consider the square matrices (written row by row) 1 ↦ [[1, 0], [0, 1]], i ↦ [[0, 1], [−1, 0]], j ↦ [[0, 1], [1, 0]], k ↦ [[1, 0], [0, −1]]. They satisfy the same multiplication table as the corresponding split-quaternions. As these matrices form a basis of the two-by-two matrices, the unique linear function that maps 1, i, j, k to these matrices (respectively) induces an algebra isomorphism from the split-quaternions to the two-by-two real matrices. The above multiplication rules imply that the eight elements 1, i, j, k, −1, −i, −j, −k form a group under this multiplication, which is isomorphic to the dihedral group D4, the symmetry group of a square. In fact, if one considers a square whose vertices are the points whose coordinates are 1 or −1, the matrix of i is the clockwise rotation by a quarter of a turn, that of j is the symmetry around the first diagonal, and that of k is the symmetry around the x axis. Properties Like the quaternions introduced by Hamilton in 1843, they form a four-dimensional real associative algebra. But like the real algebra of 2×2 matrices – and unlike the real algebra of quaternions – the split-quaternions contain nontrivial zero divisors, nilpotent elements, and idempotents. (For example, (1 + j)/2 is an idempotent zero-divisor, and i − j is nilpotent.) As an algebra over the real numbers, the algebra of split-quaternions is isomorphic to the algebra of 2×2 real matrices by the above defined isomorphism. This isomorphism allows identifying each split-quaternion with a 2×2 matrix. So every property of split-quaternions corresponds to a similar property of matrices, which is often named differently. The conjugate of a split-quaternion q = w + x i + y j + z k is q* = w − x i − y j − z k. In terms of matrices, the conjugate is the cofactor matrix obtained by exchanging the diagonal entries and changing the sign of the other two entries. The product of a split-quaternion with its conjugate is the isotropic quadratic form N(q) = q q* = w² + x² − y² − z², which is called the norm of the split-quaternion or the determinant of the associated matrix. The real part of a split-quaternion is w. It equals half the trace of the associated matrix. The norm of a product of two split-quaternions is the product of their norms. Equivalently, the determinant of a product of matrices is the product of their determinants. This property means that split-quaternions form a composition algebra. As there are nonzero split-quaternions having a zero norm, split-quaternions form a "split composition algebra" – hence their name. A split-quaternion with a nonzero norm has a multiplicative inverse, namely q*/N(q).
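To make the relations above concrete, here is a minimal numerical sketch, not part of the article, that represents the basis elements by the 2×2 real matrices given above and checks the product rules, the norm-determinant correspondence and the existence of an idempotent zero divisor; it assumes NumPy is available.

```python
import numpy as np

# Basis elements as the 2x2 real matrices given above.
ONE = np.array([[1., 0.], [0., 1.]])
I   = np.array([[0., 1.], [-1., 0.]])   # i, with i^2 = -1
J   = np.array([[0., 1.], [1., 0.]])    # j, with j^2 = +1
K   = np.array([[1., 0.], [0., -1.]])   # k, with k^2 = +1

def sq(w, x, y, z):
    """Matrix of the split-quaternion w + x i + y j + z k."""
    return w * ONE + x * I + y * J + z * K

def norm(w, x, y, z):
    """Isotropic quadratic form w^2 + x^2 - y^2 - z^2 (should equal the determinant)."""
    return w * w + x * x - y * y - z * z

# Defining relations, including ijk = 1.
assert np.allclose(I @ I, -ONE) and np.allclose(J @ J, ONE) and np.allclose(K @ K, ONE)
assert np.allclose(I @ J, K) and np.allclose(I @ J @ K, ONE)

# Norm = determinant, and the norm is multiplicative.
assert np.isclose(norm(1, 2, 0.5, -1), np.linalg.det(sq(1, 2, 0.5, -1)))
p, q = sq(1, 2, 0.5, -1), sq(0.3, -1, 2, 2)
assert np.isclose(np.linalg.det(p @ q), np.linalg.det(p) * np.linalg.det(q))

# A nontrivial idempotent zero divisor: (1 + j)/2.
e = sq(0.5, 0, 0.5, 0)
assert np.allclose(e @ e, e) and np.isclose(np.linalg.det(e), 0.0)
print("split-quaternion relations verified")
```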
In terms of matrices, the inverse formula is equivalent to Cramer's rule, which asserts that a matrix is invertible if and only if its determinant is nonzero, and that, in this case, the inverse of the matrix is the quotient of the cofactor matrix by the determinant. The isomorphism between split-quaternions and 2×2 real matrices shows that the multiplicative group of split-quaternions with a nonzero norm is isomorphic with GL(2, R), and the group of split-quaternions of norm 1 is isomorphic with SL(2, R). Geometrically, the split-quaternions can be compared to Hamilton's quaternions as pencils of planes. In both cases the real numbers form the axis of a pencil. In Hamilton's quaternions there is a sphere of imaginary units, and any pair of antipodal imaginary units generates a complex plane with the real line. For split-quaternions there are hyperboloids of hyperbolic and imaginary units that generate split-complex or ordinary complex planes, as described below in § Stratification. Representation as complex matrices There is a representation of the split-quaternions as a unital associative subalgebra of the 2×2 matrices with complex entries. This representation can be defined by the algebra homomorphism that maps a split-quaternion w + x i + y j + z k to the matrix [[w + xi, y + zi], [y − zi, w − xi]]. Here, the i appearing inside the matrix is the complex imaginary unit, not to be confused with the split-quaternion basis element i. The image of this homomorphism is the matrix ring formed by the matrices of the form [[u, v], [v*, u*]], where the superscript * denotes a complex conjugate. This homomorphism maps respectively the split-quaternions i, j, k on the matrices [[i, 0], [0, −i]], [[0, 1], [1, 0]], [[0, i], [−i, 0]]. The proof that this representation is an algebra homomorphism is straightforward but requires some tedious computations, which can be avoided by starting from the expression of split-quaternions as real matrices and using matrix similarity. Let S be a suitable change-of-basis matrix; then, applied to the representation of split-quaternions as real matrices, the above algebra homomorphism is the matrix similarity X ↦ S X S⁻¹. It follows almost immediately that for a split-quaternion represented as a complex matrix, the conjugate is the matrix of the cofactors, and the norm is the determinant. With the representation of split-quaternions as complex matrices, the matrices of split-quaternions of norm 1 are exactly the elements of the special unitary group SU(1,1). This is used in hyperbolic geometry for describing hyperbolic motions of the Poincaré disk model. Generation from split-complex numbers Split-quaternions may be generated by a modified Cayley–Dickson construction similar to the method of L. E. Dickson and Adrian Albert for the division algebras C, H, and O. The multiplication rule is used when producing the doubled product in the real-split cases. The doubled conjugate so that If a and b are split-complex numbers and split-quaternion then Stratification In this section, the real subalgebras generated by a single split-quaternion are studied and classified. Let q = w + x i + y j + z k be a split-quaternion. Its real part is w. Let p = x i + y j + z k be its nonreal part. One has p² = −x² + y² + z², which is a real number, and therefore q² = w² + p² + 2wp. It follows that q² is a real number if and only if q is either a real number (p = 0 and q = w) or a purely nonreal split-quaternion (w = 0 and q = p). The structure of the subalgebra R[q] generated by q follows straightforwardly. One has R[q] = R + R p, and this is a commutative algebra. Its dimension is two except if q is real (in this case, the subalgebra is simply R). The nonreal elements of R[q] whose square is real have the form λ p with λ a real number. Three cases have to be considered, which are detailed in the next subsections.
Nilpotent case With the above notation, if (that is, if is nilpotent), then , that is, This implies that there exist and in such that and This is a parametrization of all split-quaternions whose nonreal part is nilpotent. This is also a parameterization of these subalgebras by the points of a circle: the split-quaternions of the form form a circle; a subalgebra generated by a nilpotent element contains exactly one point of the circle; and the circle does not contain any other point. The algebra generated by a nilpotent element is isomorphic to and to the plane of dual numbers. Imaginary units This is the case where . Letting one has It follows that belongs to the hyperboloid of two sheets of equation Therefore, there are real numbers such that and This is a parametrization of all split-quaternions whose nonreal part has a positive norm. This is also a parameterization of the corresponding subalgebras by the pairs of opposite points of a hyperboloid of two sheets: the split-quaternions of the form form a hyperboloid of two sheets; a subalgebra generated by a split-quaternion with a nonreal part of positive norm contains exactly two opposite points on this hyperboloid, one on each sheet; and the hyperboloid does not contain any other point. The algebra generated by a split-quaternion with a nonreal part of positive norm is isomorphic to and to the field of complex numbers. Hyperbolic units This is the case where . Letting one has It follows that belongs to the hyperboloid of one sheet of equation . Therefore, there are real numbers such that and This is a parametrization of all split-quaternions whose nonreal part has a negative norm. This is also a parameterization of the corresponding subalgebras by the pairs of opposite points of a hyperboloid of one sheet: the split-quaternions of the form form a hyperboloid of one sheet; a subalgebra generated by a split-quaternion with a nonreal part of negative norm contains exactly two opposite points on this hyperboloid; and the hyperboloid does not contain any other point. The algebra generated by a split-quaternion with a nonreal part of negative norm is isomorphic to and to the ring of split-complex numbers. It is also isomorphic (as an algebra) to by the mapping defined by Stratification by the norm As seen above, the purely nonreal split-quaternions of norm and form respectively a hyperboloid of one sheet, a hyperboloid of two sheets and a circular cone in the space of nonreal quaternions. These surfaces are pairwise asymptotic and do not intersect. Their complement consists of six connected regions: the two regions located on the concave side of the hyperboloid of two sheets, where the two regions between the hyperboloid of two sheets and the cone, where the region between the cone and the hyperboloid of one sheet where the region outside the hyperboloid of one sheet, where This stratification can be refined by considering split-quaternions of a fixed norm: for every real number the purely nonreal split-quaternions of norm form a hyperboloid. All these hyperboloids are asymptotic to the above cone, and none of these surfaces intersect any other. As the set of the purely nonreal split-quaternions is the disjoint union of these surfaces, this provides the desired stratification. Colour space Split-quaternions have been applied to colour balance. The model refers to the Jordan algebra of symmetric matrices representing the algebra.
The model reconciles trichromacy with Hering's opponency and uses the Cayley–Klein model of hyperbolic geometry for chromatic distances. Historical notes The coquaternions were initially introduced (under that name) in 1849 by James Cockle in the London–Edinburgh–Dublin Philosophical Magazine. The introductory papers by Cockle were recalled in the 1904 Bibliography of the Quaternion Society. Alexander Macfarlane called the structure of split-quaternion vectors an exspherical system when he was speaking at the International Congress of Mathematicians in Paris in 1900. Macfarlane considered the "hyperboloidal counterpart to spherical analysis" in a 1910 article "Unification and Development of the Principles of the Algebra of Space" in the Bulletin of the Quaternion Society. Hans Beck compared split-quaternion transformations to the circle-permuting property of Möbius transformations in 1910. The split-quaternion structure has also been mentioned briefly in the Annals of Mathematics. Synonyms Para-quaternions (Ivanov and Zamkovoy 2005, Mohaupt 2006) Manifolds with para-quaternionic structures are studied in differential geometry and string theory. In the para-quaternionic literature, is replaced with . Exspherical system (Macfarlane 1900) Split-quaternions (Rosenfeld 1988) Antiquaternions (Rosenfeld 1988) Pseudoquaternions (Yaglom 1968 Rosenfeld 1988) See also Pauli matrices Split-biquaternions Split-octonions Dual quaternions References Further reading Brody, Dorje C., and Eva-Maria Graefe. "On complexified mechanics and coquaternions". Journal of Physics A: Mathematical and Theoretical 44.7 (2011): 072001. Ivanov, Stefan; Zamkovoy, Simeon (2005), "Parahermitian and paraquaternionic manifolds", Differential Geometry and its Applications 23, pp. 205–234, , . Mohaupt, Thomas (2006), "New developments in special geometry", . Özdemir, M. (2009) "The roots of a split quaternion", Applied Mathematics Letters 22:258–63. Özdemir, M. & A.A. Ergin (2006) "Rotations with timelike quaternions in Minkowski 3-space", Journal of Geometry and Physics 56: 322–36. Pogoruy, Anatoliy & Ramon M Rodrigues-Dagnino (2008) Some algebraic and analytical properties of coquaternion algebra, Advances in Applied Clifford Algebras. Composition algebras Quaternions Hyperbolic geometry Special relativity
Split-quaternion
[ "Physics" ]
2,941
[ "Special relativity", "Theory of relativity" ]
1,461,517
https://en.wikipedia.org/wiki/Fixed-point%20space
In mathematics, a Hausdorff space X is called a fixed-point space if it obeys a fixed-point theorem, according to which every continuous function f : X → X has a fixed point, a point x for which f(x) = x. For example, the closed unit interval [0, 1] is a fixed-point space, as can be proved from the intermediate value theorem. The real line is not a fixed-point space, because the continuous function that adds one to its argument does not have a fixed point. Generalizing the unit interval, by the Brouwer fixed-point theorem, every compact bounded convex set in a Euclidean space is a fixed-point space. The definition of a fixed-point space can also be extended from continuous functions of topological spaces to other classes of maps on other types of space. References Fixed points (mathematics) Topology Topological spaces
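Expanding on the intermediate value theorem remark in the entry above, a standard short argument (not specific to this article) can be written as follows:

```latex
% Sketch: the closed unit interval [0,1] is a fixed-point space.
% Let f : [0,1] \to [0,1] be continuous and set g(x) = f(x) - x. Then
\[
  g(0) = f(0) \ge 0, \qquad g(1) = f(1) - 1 \le 0,
\]
% so, since g is continuous, the intermediate value theorem gives some
% c \in [0,1] with g(c) = 0, i.e. f(c) = c: a fixed point of f.
```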
Fixed-point space
[ "Physics", "Mathematics" ]
165
[ "Mathematical analysis", "Mathematical structures", "Mathematical analysis stubs", "Fixed points (mathematics)", "Space (mathematics)", "Topological spaces", "Topology", "Space", "Geometry", "Spacetime", "Dynamical systems" ]
1,464,363
https://en.wikipedia.org/wiki/Heat%20transfer%20coefficient
In thermodynamics, the heat transfer coefficient or film coefficient, or film effectiveness, is the proportionality constant between the heat flux and the thermodynamic driving force for the flow of heat (i.e., the temperature difference, ΔT). It is used in calculating the heat transfer, typically by convection or phase transition, between a fluid and a solid. The heat transfer coefficient has SI units of watts per square meter per kelvin (W/(m2K)). The overall heat transfer rate for combined modes is usually expressed in terms of an overall conductance or heat transfer coefficient, U. In that case, the heat transfer rate is q = h A (T2 − T1), where (in SI units): q: heat transfer rate (W); h: heat transfer coefficient (W/(m²K)); A: surface area where the heat transfer takes place (m²); T2: temperature of the surrounding fluid (K); T1: temperature of the solid surface (K). The general definition of the heat transfer coefficient is h = q / ΔT, where: q: heat flux (W/m²), i.e., thermal power per unit area; ΔT: difference in temperature between the solid surface and the surrounding fluid area (K). The heat transfer coefficient is the reciprocal of thermal insulance. This is used for building materials (R-value) and for clothing insulation. There are numerous methods for calculating the heat transfer coefficient in different heat transfer modes, different fluids, flow regimes, and under different thermohydraulic conditions. Often it can be estimated by dividing the thermal conductivity of the convection fluid by a length scale. The heat transfer coefficient is often calculated from the Nusselt number (a dimensionless number). There are also online calculators available specifically for heat-transfer fluid applications. Experimental assessment of the heat transfer coefficient poses some challenges, especially when small fluxes are to be measured. Composition A simple method for determining an overall heat transfer coefficient that is useful to find the heat transfer between simple elements such as walls in buildings or across heat exchangers is shown below. This method only accounts for conduction within materials; it does not take into account heat transfer through methods such as radiation. The method is as follows: 1/(U·A) = 1/(h1·A1) + dx/(k·A) + 1/(h2·A2), where: U = the overall heat transfer coefficient (W/(m2·K)); A = the contact area for each fluid side (m2) (with A1 and A2 expressing either surface); k = the thermal conductivity of the material (W/(m·K)); h1 and h2 = the individual convection heat transfer coefficients for each fluid (W/(m2·K)); dx = the wall thickness (m). As the areas for each surface approach being equal, the equation can be written as the transfer coefficient per unit area as shown below: 1/U = 1/h1 + dx/k + 1/h2, or U = 1/(1/h1 + dx/k + 1/h2). Often the value for dx is referred to as the difference of two radii where the inner and outer radii are used to define the thickness of a pipe carrying a fluid; however, this figure may also be considered as a wall thickness in a flat plate transfer mechanism or other common flat surfaces such as a wall in a building when the area difference between each edge of the transmission surface approaches zero. In the walls of buildings the above formula can be used to derive the formula commonly used to calculate the heat through building components. Architects and engineers call the resulting values either the U-Value or the R-Value of a construction assembly like a wall. Each type of value (R or U) is related to the other as its inverse, such that R-Value = 1/U-Value, and both are more fully understood through the concept of an overall heat transfer coefficient described in a lower section of this document.
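As a minimal illustration of the series-resistance composition just described, the following sketch computes an overall coefficient U and the resulting heat transfer rate for a flat wall; all numerical values (film coefficients, thickness, conductivity, area, temperature difference) are assumed for illustration and are not from the article.

```python
def overall_U(h_in, h_out, wall_thickness, wall_conductivity):
    """Overall heat transfer coefficient U, W/(m^2*K), for a flat wall with
    convection on both faces and conduction through the wall:
        1/U = 1/h_in + dx/k + 1/h_out   (areas equal on both sides)."""
    resistance = 1.0 / h_in + wall_thickness / wall_conductivity + 1.0 / h_out
    return 1.0 / resistance

# Illustrative (made-up) values: indoor air film, 20 cm of concrete, outdoor air film.
h_inside, h_outside = 8.0, 25.0   # W/(m^2*K), rough film coefficients for still/moving air
dx, k = 0.20, 1.7                 # wall thickness (m) and concrete conductivity (W/(m*K))

U = overall_U(h_inside, h_outside, dx, k)
A, dT = 12.0, 20.0                # wall area (m^2) and inside-to-outside temperature difference (K)
q = U * A * dT                    # heat transfer rate (W); R-value of the assembly is 1/U
print(f"U = {U:.2f} W/(m^2 K), q = {q:.0f} W")
```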
Convective heat transfer correlations Although convective heat transfer can be derived analytically through dimensional analysis, exact analysis of the boundary layer, approximate integral analysis of the boundary layer and analogies between energy and momentum transfer, these analytic approaches may not offer practical solutions to all problems when there are no mathematical models applicable. Therefore, many correlations were developed by various authors to estimate the convective heat transfer coefficient in various cases including natural convection, forced convection for internal flow and forced convection for external flow. These empirical correlations are presented for their particular geometry and flow conditions. As the fluid properties are temperature dependent, they are evaluated at the film temperature , which is the average of the surface and the surrounding bulk temperature, . External flow, vertical plane Recommendations by Churchill and Chu provide the following correlation for natural convection adjacent to a vertical plane, both for laminar and turbulent flow. k is the thermal conductivity of the fluid, L is the characteristic length with respect to the direction of gravity, RaL is the Rayleigh number with respect to this length and Pr is the Prandtl number (the Rayleigh number can be written as the product of the Grashof number and the Prandtl number). For laminar flows, the following correlation is slightly more accurate. It is observed that a transition from a laminar to a turbulent boundary occurs when RaL exceeds around 109. External flow, vertical cylinders For cylinders with their axes vertical, the expressions for plane surfaces can be used provided the curvature effect is not too significant. This represents the limit where boundary layer thickness is small relative to cylinder diameter . For fluids with Pr ≤ 0.72, the correlations for vertical plane walls can be used when where is the Grashof number. And in fluids of Pr ≤ 6 when Under these circumstances, the error is limited to up to 5.5%. External flow, horizontal plates W. H. McAdams suggested the following correlations for horizontal plates. The induced buoyancy will be different depending upon whether the hot surface is facing up or down. For a hot surface facing up, or a cold surface facing down, for laminar flow: and for turbulent flow: For a hot surface facing down, or a cold surface facing up, for laminar flow: The characteristic length is the ratio of the plate surface area to perimeter. If the surface is inclined at an angle θ with the vertical then the equations for a vertical plate by Churchill and Chu may be used for θ up to 60°; if the boundary layer flow is laminar, the gravitational constant g is replaced with g cos θ when calculating the Ra term. External flow, horizontal cylinder For cylinders of sufficient length and negligible end effects, Churchill and Chu has the following correlation for . External flow, spheres For spheres, T. Yuge has the following correlation for Pr≃1 and . Vertical rectangular enclosure For heat flow between two opposing vertical plates of rectangular enclosures, Catton recommends the following two correlations for smaller aspect ratios. The correlations are valid for any value of Prandtl number. For : where H is the internal height of the enclosure and L is the horizontal distance between the two sides of different temperatures. For : For vertical enclosures with larger aspect ratios, the following two correlations can be used. 
For 10 < H/L < 40: For : For all four correlations, fluid properties are evaluated at the average temperature—as opposed to film temperature—, where and are the temperatures of the vertical surfaces and . Forced convection See main article Nusselt number and Churchill–Bernstein equation for forced convection over a horizontal cylinder. Internal flow, laminar flow Sieder and Tate give the following correlation to account for entrance effects in laminar flow in tubes where is the internal diameter, is the fluid viscosity at the bulk mean temperature, is the viscosity at the tube wall surface temperature. For fully developed laminar flow, the Nusselt number is constant and equal to 3.66. Mills combines the entrance effects and fully developed flow into one equation Internal flow, turbulent flow The Dittus-Bölter correlation (1930) is a common and particularly simple correlation useful for many applications. This correlation is applicable when forced convection is the only mode of heat transfer; i.e., there is no boiling, condensation, significant radiation, etc. The accuracy of this correlation is anticipated to be ±15%. For a fluid flowing in a straight circular pipe with a Reynolds number between 10,000 and 120,000 (in the turbulent pipe flow range), when the fluid's Prandtl number is between 0.7 and 120, for a location far from the pipe entrance (more than 10 pipe diameters; more than 50 diameters according to many authors) or other flow disturbances, and when the pipe surface is hydraulically smooth, the heat transfer coefficient between the bulk of the fluid and the pipe surface can be expressed explicitly as: where: is the hydraulic diameter is the thermal conductivity of the bulk fluid is the fluid viscosity is the mass flux is the isobaric heat capacity of the fluid is 0.4 for heating (wall hotter than the bulk fluid) and 0.33 for cooling (wall cooler than the bulk fluid). The fluid properties necessary for the application of this equation are evaluated at the bulk temperature thus avoiding iteration. Forced convection, external flow In analyzing the heat transfer associated with the flow past the exterior surface of a solid, the situation is complicated by phenomena such as boundary layer separation. Various authors have correlated charts and graphs for different geometries and flow conditions. For flow parallel to a plane surface, where is the distance from the edge and is the height of the boundary layer, a mean Nusselt number can be calculated using the Colburn analogy. Thom correlation There exist simple fluid-specific correlations for heat transfer coefficient in boiling. The Thom correlation is for the flow of boiling water (subcooled or saturated at pressures up to about 20 MPa) under conditions where the nucleate boiling contribution predominates over forced convection. This correlation is useful for rough estimation of expected temperature difference given the heat flux: where: is the wall temperature elevation above the saturation temperature, K q is the heat flux, MW/m2 P is the pressure of water, MPa This empirical correlation is specific to the units given. Heat transfer coefficient of pipe wall The resistance to the flow of heat by the material of pipe wall can be expressed as a "heat transfer coefficient of the pipe wall". However, one needs to select if the heat flux is based on the pipe inner or the outer diameter. 
If the heat flux is based on the inner diameter of the pipe, and if the pipe wall is thin compared to this diameter, the curvature of the wall has a negligible effect on heat transfer. In this case, the pipe wall can be approximated as a flat plane, which simplifies calculations. This assumption allows the heat transfer coefficient for the pipe wall to be calculated as: where is the effective thermal conductivity of the wall material is the difference between the outer and inner diameter. However, when the wall thickness is significant enough that curvature cannot be ignored, the heat transfer coefficient needs to account for the cylindrical shape. Under this condition, the heat transfer coefficient can be more accurately calculated using : where = inner diameter of the pipe [m] = outer diameter of the pipe [m] The thermal conductivity of the tube material usually depends on temperature; the mean thermal conductivity is often used. Combining convective heat transfer coefficients For two or more heat transfer processes acting in parallel, convective heat transfer coefficients simply add: For two or more heat transfer processes connected in series, convective heat transfer coefficients add inversely: For example, consider a pipe with a fluid flowing inside. The approximate rate of heat transfer between the bulk of the fluid inside the pipe and the pipe external surface is: where = heat transfer rate (W) = convective heat transfer coefficient (W/(m²·K)) = wall thickness (m) = wall thermal conductivity (W/m·K) = area (m²) = difference in temperature (K) Overall heat transfer coefficient The overall heat transfer coefficient is a measure of the overall ability of a series of conductive and convective barriers to transfer heat. It is commonly applied to the calculation of heat transfer in heat exchangers, but can be applied equally well to other problems. For the case of a heat exchanger, can be used to determine the total heat transfer between the two streams in the heat exchanger by the following relationship: where: = heat transfer rate (W) = overall heat transfer coefficient (W/(m2·K)) = heat transfer surface area (m2) = logarithmic mean temperature difference (K). The overall heat transfer coefficient takes into account the individual heat transfer coefficients of each stream and the resistance of the pipe material. It can be calculated as the reciprocal of the sum of a series of thermal resistances (but more complex relationships exist, for example when heat transfer takes place by different routes in parallel): where: R = Resistance(s) to heat flow in pipe wall (K/W) Other parameters are as above. The heat transfer coefficient is the heat transferred per unit area per kelvin. Thus area is included in the equation as it represents the area over which the transfer of heat takes place. The areas for each flow will be different as they represent the contact area for each fluid side. The thermal resistance due to the pipe wall (for thin walls) is calculated by the following relationship: where = the wall thickness (m) = the thermal conductivity of the material (W/(m·K)) This represents the heat transfer by conduction in the pipe. The thermal conductivity is a characteristic of the particular material. Values of thermal conductivities for various materials are listed in the list of thermal conductivities. As mentioned earlier in the article the convection heat transfer coefficient for each stream depends on the type of fluid, flow properties and temperature properties. 
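As an illustration of how a convective coefficient is evaluated in practice, the sketch below uses the Dittus-Bölter form quoted earlier, Nu = 0.023 Re^0.8 Pr^n; the pipe diameter, mass flux and water properties are rough assumed values, not taken from this article.

```python
def dittus_boelter_h(mass_flux, diameter, k_fluid, mu, cp, heating=True):
    """Convective coefficient h, W/(m^2*K), from the Dittus-Boelter form
    Nu = 0.023 * Re**0.8 * Pr**n, with n = 0.4 for heating and 0.33 for cooling,
    for turbulent flow in a hydraulically smooth pipe far from the entrance."""
    Re = mass_flux * diameter / mu     # Reynolds number, G*D/mu
    Pr = cp * mu / k_fluid             # Prandtl number
    n = 0.4 if heating else 0.33
    Nu = 0.023 * Re ** 0.8 * Pr ** n   # Nusselt number
    return Nu * k_fluid / diameter

# Rough, illustrative properties for water around 40 C in a 25 mm pipe.
h = dittus_boelter_h(mass_flux=1500.0,   # kg/(m^2*s)
                     diameter=0.025,     # m
                     k_fluid=0.63,       # W/(m*K)
                     mu=6.5e-4,          # Pa*s
                     cp=4180.0)          # J/(kg*K)
print(f"h = {h:.0f} W/(m^2 K)")          # lands within the usual range for water
```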
Some typical heat transfer coefficients include: Air - h = 10 to 100 W/(m2K) Water - h = 500 to 10,000 W/(m2K). Thermal resistance due to fouling deposits Often during their use, heat exchangers collect a layer of fouling on the surface which, in addition to potentially contaminating a stream, reduces the effectiveness of heat exchangers. In a fouled heat exchanger the buildup on the walls creates an additional layer of materials that heat must flow through. Due to this new layer, there is additional resistance within the heat exchanger and thus the overall heat transfer coefficient of the exchanger is reduced. The following relationship is used to solve for the heat transfer resistance with the additional fouling resistance: = where = overall heat transfer coefficient for a fouled heat exchanger, = perimeter of the heat exchanger, may be either the hot or cold side perimeter however, it must be the same perimeter on both sides of the equation, = overall heat transfer coefficient for an unfouled heat exchanger, = fouling resistance on the cold side of the heat exchanger, = fouling resistance on the hot side of the heat exchanger, = perimeter of the cold side of the heat exchanger, = perimeter of the hot side of the heat exchanger, This equation uses the overall heat transfer coefficient of an unfouled heat exchanger and the fouling resistance to calculate the overall heat transfer coefficient of a fouled heat exchanger. The equation takes into account that the perimeter of the heat exchanger is different on the hot and cold sides. The perimeter used for the does not matter as long as it is the same. The overall heat transfer coefficients will adjust to take into account that a different perimeter was used as the product will remain the same. The fouling resistances can be calculated for a specific heat exchanger if the average thickness and thermal conductivity of the fouling are known. The product of the average thickness and thermal conductivity will result in the fouling resistance on a specific side of the heat exchanger. = where: = average thickness of the fouling in a heat exchanger, = thermal conductivity of the fouling, . See also Convective heat transfer Heat sink Convection Churchill–Bernstein equation Heat Heat pump Heisler Chart Thermal conductivity Thermal-hydraulics Biot number Fourier number Nusselt number References External links Overall Heat Transfer Coefficients Overall Heat Transfer Coefficients Table and Equation Correlations for Convective Heat Transfer ThermoTurb – A calculator for heat transfer coefficients Convection Heat transfer Heat conduction
Heat transfer coefficient
[ "Physics", "Chemistry" ]
3,324
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Convection", "Thermodynamics", "Heat conduction" ]
1,464,422
https://en.wikipedia.org/wiki/Hanbury%20Brown%20and%20Twiss%20effect
In physics, the Hanbury Brown and Twiss (HBT) effect is any of a variety of correlation and anti-correlation effects in the intensities received by two detectors from a beam of particles. HBT effects can generally be attributed to the wave–particle duality of the beam, and the results of a given experiment depend on whether the beam is composed of fermions or bosons. Devices which use the effect are commonly called intensity interferometers and were originally used in astronomy, although they are also heavily used in the field of quantum optics. History In 1954, Robert Hanbury Brown and Richard Q. Twiss introduced the intensity interferometer concept to radio astronomy for measuring the tiny angular size of stars, suggesting that it might work with visible light as well. Soon after they successfully tested that suggestion: in 1956 they published an in-lab experimental mockup using blue light from a mercury-vapor lamp, and later in the same year, they applied this technique to measuring the size of Sirius. In the latter experiment, two photomultiplier tubes, separated by a few meters, were aimed at the star using crude telescopes, and a correlation was observed between the two fluctuating intensities. Just as in the radio studies, the correlation dropped away as they increased the separation (though over meters, instead of kilometers), and they used this information to determine the apparent angular size of Sirius. This result was met with much skepticism in the physics community. The radio astronomy result was justified by Maxwell's equations, but there were concerns that the effect should break down at optical wavelengths, since the light would be quantised into a relatively small number of photons that induce discrete photoelectrons in the detectors. Many physicists worried that the correlation was inconsistent with the laws of thermodynamics. Some even claimed that the effect violated the uncertainty principle. Hanbury Brown and Twiss resolved the dispute in a neat series of articles (see References below) that demonstrated, first, that wave transmission in quantum optics had exactly the same mathematical form as Maxwell's equations, albeit with an additional noise term due to quantisation at the detector, and second, that according to Maxwell's equations, intensity interferometry should work. Others, such as Edward Mills Purcell immediately supported the technique, pointing out that the clumping of bosons was simply a manifestation of an effect already known in statistical mechanics. After a number of experiments, the whole physics community agreed that the observed effect was real. The original experiment used the fact that two bosons tend to arrive at two separate detectors at the same time. Morgan and Mandel used a thermal photon source to create a dim beam of photons and observed the tendency of the photons to arrive at the same time on a single detector. Both of these effects used the wave nature of light to create a correlation in arrival time – if a single photon beam is split into two beams, then the particle nature of light requires that each photon is only observed at a single detector, and so an anti-correlation was observed in 1977 by H. Jeff Kimble. Finally, bosons have a tendency to clump together, giving rise to Bose–Einstein correlations, while fermions due to the Pauli exclusion principle, tend to spread apart, leading to Fermi–Dirac (anti)correlations. 
Bose–Einstein correlations have been observed between pions, kaons and photons, and Fermi–Dirac (anti)correlations between protons, neutrons and electrons. For a general introduction in this field, see the textbook on Bose–Einstein correlations by Richard M. Weiner. A difference in repulsion of Bose–Einstein condensate in the "trap-and-free fall" analogy of the HBT effect affects comparison. Also, in the field of particle physics, Gerson Goldhaber et al. performed an experiment in 1959 in Berkeley and found an unexpected angular correlation among identical pions, discovering the ρ0 resonance, by means of decay. From then on, the HBT technique started to be used by the heavy-ion community to determine the space–time dimensions of the particle emission source for heavy-ion collisions. For developments in this field up to 2005, see for example this review article. Wave mechanics The HBT effect can, in fact, be predicted solely by treating the incident electromagnetic radiation as a classical wave. Suppose we have a monochromatic wave with frequency on two detectors, with an amplitude that varies on timescales slower than the wave period . (Such a wave might be produced from a very distant point source with a fluctuating intensity.) Since the detectors are separated, say the second detector gets the signal delayed by a time , or equivalently, a phase ; that is, The intensity recorded by each detector is the square of the wave amplitude, averaged over a timescale that is long compared to the wave period but short compared to the fluctuations in : where the overline indicates this time averaging. For wave frequencies above a few terahertz (wave periods less than a picosecond), such a time averaging is unavoidable, since detectors such as photodiodes and photomultiplier tubes cannot produce photocurrents that vary on such short timescales. The correlation function of these time-averaged intensities can then be computed: Most modern schemes actually measure the correlation in intensity fluctuations at the two detectors, but it is not too difficult to see that if the intensities are correlated, then the fluctuations , where is the average intensity, ought to be correlated, since In the particular case that consists mainly of a steady field with a small sinusoidally varying component , the time-averaged intensities are with , and indicates terms proportional to , which are small and may be ignored. The correlation function of these two intensities is then showing a sinusoidal dependence on the delay between the two detectors. Quantum interpretation The above discussion makes it clear that the Hanbury Brown and Twiss (or photon bunching) effect can be entirely described by classical optics. The quantum description of the effect is less intuitive: if one supposes that a thermal or chaotic light source such as a star randomly emits photons, then it is not obvious how the photons "know" that they should arrive at a detector in a correlated (bunched) way. A simple argument suggested by Ugo Fano in 1961 captures the essence of the quantum explanation. Consider two points and in a source that emit photons detected by two detectors and as in the diagram. A joint detection takes place when the photon emitted by is detected by and the photon emitted by is detected by (red arrows) or when 's photon is detected by and 's by (green arrows). The quantum mechanical probability amplitudes for these two possibilities are denoted by and respectively. 
If the photons are indistinguishable, the two amplitudes interfere constructively to give a joint detection probability greater than that for two independent events. The sum over all possible pairs in the source washes out the interference unless the distance is sufficiently small. Fano's explanation nicely illustrates the necessity of considering two-particle amplitudes, which are not as intuitive as the more familiar single-particle amplitudes used to interpret most interference effects. This may help to explain why some physicists in the 1950s had difficulty accepting the Hanbury Brown and Twiss result. But the quantum approach is more than just a fancy way to reproduce the classical result: if the photons are replaced by identical fermions such as electrons, the antisymmetry of wave functions under exchange of particles renders the interference destructive, leading to zero joint detection probability for small detector separations. This effect is referred to as antibunching of fermions. The above treatment also explains photon antibunching: if the source consists of a single atom, which can only emit one photon at a time, simultaneous detection in two closely spaced detectors is clearly impossible. Antibunching, whether of bosons or of fermions, has no classical wave analog. From the point of view of the field of quantum optics, the HBT effect was important to lead physicists (among them Roy J. Glauber and Leonard Mandel) to apply quantum electrodynamics to new situations, many of which had never been experimentally studied, and in which classical and quantum predictions differ. See also Bose–Einstein correlations Degree of coherence Timeline of electromagnetism and classical optics Footnotes References – paper which (incorrectly) disputed the existence of the Hanbury Brown and Twiss effect – experimental demonstration of the effect download as PDF download as PDF – the cavity-QED equivalent for Kimble & Mandel's free-space demonstration of photon antibunching in resonance fluorescence External links http://adsabs.harvard.edu//full/seri/JApA./0015//0000015.000.html http://physicsweb.org/articles/world/15/10/6/1 https://web.archive.org/web/20070609114114/http://www.du.edu/~jcalvert/astro/starsiz.htm http://www.2physics.com/2010/11/hanbury-brown-and-twiss-interferometry.html Hanbury-Brown-Twiss Experiment (Becker & Hickl GmbH, web page) Quantum optics
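As a rough illustration of the classical intensity-correlation argument in the Wave mechanics section above, the short sketch below (an illustrative simulation, not part of the original experiments; all parameters are arbitrary) models chaotic light as a sum of random phasors and compares the normalized zero-delay intensity correlation of a fluctuating source with that of a perfectly stable one; the chaotic case shows the factor-of-two excess characteristic of photon bunching.

```python
import numpy as np

rng = np.random.default_rng(0)

def chaotic_intensity(n_samples, n_modes=50):
    """Intensity of a field built from many independent random phasors,
    a standard toy model of chaotic (thermal) light; the intensity fluctuates."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_modes))
    field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_modes)
    return np.abs(field) ** 2

def g2_zero_delay(intensity):
    """Normalized zero-delay intensity correlation <I1 I2> / (<I1> <I2>).
    For two closely spaced detectors viewing the same source, I1 = I2 = I."""
    return np.mean(intensity * intensity) / np.mean(intensity) ** 2

n = 200_000
print("chaotic source:", round(g2_zero_delay(chaotic_intensity(n)), 2))  # ~2 (bunching)
print("stable source: ", round(g2_zero_delay(np.ones(n)), 2))            # 1.0 (no excess)
```

Increasing the detector separation, or equivalently decorrelating the two intensities, would bring the chaotic value back down toward 1, which is the fall-off Hanbury Brown and Twiss exploited to infer angular sizes.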
Hanbury Brown and Twiss effect
[ "Physics" ]
1,977
[ "Quantum optics", "Quantum mechanics" ]
1,464,555
https://en.wikipedia.org/wiki/Air%20shower%20%28physics%29
Air showers are extensive cascades of subatomic particles and ionized nuclei, produced in the atmosphere when a primary cosmic ray enters the atmosphere. Particles of cosmic radiation can be protons, nuclei, electrons, photons, or (rarely) positrons. Upon entering the atmosphere, they interact with molecules and initiate a particle cascade that lasts for several generations, until the energy of the primary particle is fully converted. If the primary particle is a hadron, mostly light mesons like pions and kaons are produced in the first interactions, which then fuel a hadronic shower component that produces shower particles mostly through pion decay. Primary photons and electrons, on the other hand, produce mainly electromagnetic showers. Depending on the energy of the primary particle, the detectable size of the shower can reach several kilometers in diameter. The air shower phenomenon was unknowingly discovered by Bruno Rossi in 1933 in a laboratory experiment. In 1937 Pierre Auger, unaware of Rossi's earlier report, detected the same phenomenon and investigated it in some detail. He concluded that cosmic-ray particles are of extremely high energies and interact with nuclei high up in the atmosphere, initiating a cascade of secondary interactions that produce extensive showers of subatomic particles. The most important experiments detecting extensive air showers today are HAWC, LHAASO, the Telescope Array Project and the Pierre Auger Observatory. The latter is the largest observatory for cosmic rays ever built, operating with 4 fluorescence detector buildings and 1600 surface detector stations spanning an area of 3,000 km2 in the Argentinean desert. History In 1933, shortly after the discovery of cosmic radiation by Victor Hess, Bruno Rossi conducted an experiment in the Institute of Physics in Florence, using shielded Geiger counters to confirm the penetrating character of the cosmic radiation. He used different arrangements of Geiger counters, including a setup of three counters, where two were placed next to each other and a third was centered underneath with additional shielding. From the detection of air-shower particles passing through the Geiger counters in coincidence, he assumed that secondary particles are being produced by cosmic rays in the first shielding layer as well as in the rooftop of the laboratory, unknowing that the particles he measured were muons, which are produced in air showers and which would only be discovered three years later. He also noted that the coincidence rate drops significantly for cosmic rays that are detected at a zenith angle below . A similar experiment was conducted in 1936 by Hilgert and Bothe in Heidelberg. In a publication in 1939, Pierre Auger, together with three colleagues, suggested that secondary particles are created by cosmic rays in the atmosphere, and conducted experiments using shielded scintillators and Wilson chambers on the Jungfraujoch at an altitude of above sea level, and on Pic du Midi at an altitude of above sea level, and at sea level. They found that the rate of coincidences reduces with increasing distance of the detectors, but does not vanish, even at high altitudes. Thus confirming that cosmic rays produce air showers of secondary particles in the atmosphere. They estimated that the primary particles of this phenomenon must have energies of up to . 
Based on the idea of quantum theory, theoretical work on air showers was carried between 1935 and 1940 out by many well-known physicists of the time (including Bhabha, Oppenheimer, Landau, Rossi and others), assuming that in the vicinity of nuclear fields high-energy gamma rays will undergo pair-production of electrons and positrons, and electrons and positrons will produce gamma rays by radiation. Work on extensive air showers continued mainly after the war, as many key figures were involved in the Manhattan project. In the 1950s, the lateral and angular structure of electromagnetic particles in air showers were calculated by Japanese scientists Koichi Kamata and Jun Nishimura. In 1955, the first surface detector array to detect air showers with sufficient precision to detect the arrival direction of the primary cosmic rays was built at the Agassiz station at MIT. The Agassiz array consisted of 16 plastic scintillators arranged in a diameter circular array. The results of the experiment on the arrival directions of cosmic rays, however, where inconclusive. The Volcano Ranch experiment, which was built in 1959 and operated by John Linsley, was the first surface detector array of sufficient size to detect ultrahigh-energy cosmic rays. In 1962, the first cosmic ray with an energy of was reported. With a footprint of several kilometers, the shower size at the ground was twice as large as any event recorded before, approximately producing particles in the shower. Furthermore, it was confirmed that the lateral distribution of the particles detected at the ground matched Kenneth Greisen's approximation of the structure functions derived by Kamata and Nishimura. A novel detection technique for extensive air showers was proposed by Greisen in 1965. He suggested to directly observe Cherenkov radiation of the shower particles, and fluorescence light produced by excited nitrogen molecules in the atmosphere. In this way, one would be able to measure the longitudinal development of a shower in the atmosphere. This method was first applied successfully and reported in 1977 at Volcano Ranch, using 67 optical modules. Volcano Ranch finished its operation shortly after due to lack of funding. Many air-shower experiments followed in the decades after, including KASCADE, AGASA, and HIRES. In 1995, the latter reported the detection of an ultrahigh-energy cosmic ray with an energy beyond the theoretically expected spectral cutoff. The air shower of the cosmic ray was detected by the Fly's Eye fluorescence detector system and was estimated to contain approximately 240 billion particles at its maximum. This corresponds to a primary energy for the cosmic ray of about . To this day, no single particle with a larger energy was recorded. It is therefore publicly referred to as the Oh-My-God particle. Air shower formation The air shower is formed by interaction of the primary cosmic ray with the atmosphere, and then by subsequent interaction of the secondary particles, and so on. Depending on the type of the primary particle, the shower particles will be created mostly by hadronic or electromagnetic interactions. Simplified shower model Shortly after entering the atmosphere, the primary cosmic ray (which is assumed to be a proton or nucleus in the following) is scattered by a nucleus in the atmosphere and creates a shower core - a region of high-energy hadrons that develops along the extended trajectory of the primary cosmic ray, until it is fully absorbed by either the atmosphere or the ground. 
The interaction and decay of particles in the shower core feeds the main particle components of the shower, which are hadrons, muons, and purely electromagnetic particles. The hadronic part of the shower consists mostly of pions, and some heavier mesons, such as kaons and mesons. Neutral pions, , decay by the electroweak interaction into pairs of oppositely spinning photons, which fuel the electromagnetic component of the shower. Charged pions, , preferentially decay into muons and (anti)neutrinos via the weak interaction. The same holds true for charged and neutral kaons. In addition, kaons also produce pions. Neutrinos from pion and kaon decay are usually not accounted for as parts of the shower because of their very low cross-section, and are referred to as part of the invisible energy of the shower. Qualitatively, the particle content of a shower can be described by a simplified model, in which all particles partaking in any interaction of the shower will equally share the available energy. One can assume that in each hadronic interaction, charged pions and neutral pions are produced. The neutral pions will decay into photons, which fuel the electromagnetic part of the shower. The charged pions will then continue to interact hadronically. After interactions, the share of the primary energy deposited in the hadronic component is given by , and the electromagnetic part thus approximately carries . A pion in the th generation thus carries an energy of . The reaction continues, until the pions reach a critical energy , at which they decay into muons. Thus, a total of interactions are expected and a total of muons are produced, with . The electromagnetic part of the cascade develops in parallel by bremsstrahlung and pair production. For the sake of simplicity, photons, electrons, and positrons are often treated as equivalent particles in the shower. The electromagnetic cascade continues, until the particles reach a critical energy of , from which on they start losing most of their energy due to scattering with molecules in the atmosphere. Because , the electromagnetic particles dominate the number of particles in the shower by far. A good approximation for the number of (electromagnetic) particles produced in a shower is . Assuming each electromagnetic interaction occurs after the average radiation length , the shower will reach its maximum at a depth of approximately , where is assumed to be the depth of the first interaction of the cosmic ray in the atmosphere. This approximation is, however, not accurate for all types of primary particles. Especially showers from heavy nuclei will reach their maximum much earlier. Longitudinal profile The number of particles present in an air shower is approximately proportional to the calorimetric energy deposit of the shower. The energy deposit as a function of the surpassed atmospheric matter, as it can for example be seen by fluorescence detector telescopes, is known as the longitudinal profile of the shower. For the longitudinal profile of the shower, only the electromagnetic particles (electrons, positrons, and photons) are relevant, as they dominate the particle content and the contribution to the calorimetric energy deposit. The shower profile is characterized by a fast rise in the number of particles, before the average energy of the particles falls below around the shower maximum, and a slow decay afterwards. 
Mathematically the profile can be well described by a slanted Gaussian, the Gaisser-Hillas function or the generalized Greisen function, Here and using the electromagnetic radiation length in air, . marks the point of the first interaction, and is a dimensionless constant. The shower age parameter is introduced to compare showers with different starting depths and different primary energies to highlight their universal features, as for example at the shower maximum . For a shower with a first interaction at , the shower age is usually defined as . The image shows the ideal longitudinal profile of showers using different primary energies, as a function of the surpassed atmospheric depth or, equivalently, the number of radiation lengths . The longitudinal profiles of showers are particularly interesting in the context of measuring the total calorimetric energy deposit and the depth of the shower maximum, , since the latter is an observable that is sensitive to type of the primary particle. The shower appears brightest in a fluorescence telescope at its maximum. Lateral profile For idealized electromagnetic showers, the angular and lateral distribution functions for electromagnetic particles have been derived by Japanese physicists Nishimura and Kamata. For a shower of age , the density of electromagnetic particles as a function of the distance to the shower axis can be approximated by the NKG function using the number of particles , Molière radius and the common Gamma function. can be given for example by the longitudinal profile function. The lateral distribution of hadronic showers (i.e. initiated by a primary hadron, such as a proton), which contain a significantly increased amount of muons, can be well approximated by a superposition of NKG-like functions, in which different particle components are described using effective values for and . Detection The original particle arrives with high energy and hence a velocity near the speed of light, so the products of the collisions tend also to move generally in the same direction as the primary, while to some extent spreading sidewise. In addition, the secondary particles produce a widespread flash of light in forward direction due to the Cherenkov effect, as well as fluorescence light that is emitted isotropically from the excitation of nitrogen molecules. The particle cascade and the light produced in the atmosphere can be detected with surface detector arrays and optical telescopes. Surface detectors typically use Cherenkov detectors or scintillation counters to detect the charged secondary particles at ground level. The telescopes used to measure the fluorescence and Cherenkov light use large mirrors to focus the light on PMT clusters. Finally, air showers emit radio waves due to the deflection of electrons and positrons by the geomagnetic field. As advantage over the optical techniques, radio detection is possible around the clock and not only during dark and clear nights. Thus, several modern experiments, e.g., TAIGA, LOFAR, or the Pierre Auger Observatory use radio antennas in addition to particle detectors and optical techniques. See also Cosmic-ray observatory Particle shower References External links Extensive Air Showers. 
Buckland Park Air Shower Detector Haverah Park Detection System HiRes Detector System Pierre Auger Observatory HiSPARC (High School Project on Astrophysics Research with Cosmics) AIRES (AIRshower Extended Simulations): Large and well documented Fortran package for simulating cosmic ray showers by Sergio Sciutto at the Department of Physics of the Universidad Nacional de La Plata, Argentina CORSIKA: Another code for simulating cosmic ray air showers by Dieter Heck of the Forschungszentrum Karlsruhe, Germany COSMUS: Interactive animated 3d models of several different cosmic ray air showers, and instructions on how to make your own using AIRES simulations. From the COSMUS group at the University of Chicago. Milagro Animations: Movies and instructions for how to make them, showing how air showers interact with the Milagro detector. By Miguel Morales. CASSIM Animations: Animations of different cosmic ray air showers by Hajo Dreschler of New York University. SPASE2 Experiment: South-Pole Air Shower Experiment (SPASE). GAMMA Experiment: High mountain Air Shower Experiment. Atmosphere Earth phenomena Cosmic rays
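As a numerical illustration of the simplified (Heitler-type) shower model described above, the sketch below estimates the number of particles at shower maximum and the depth of the maximum for a purely electromagnetic cascade; the radiation length and critical energy used are round illustrative values, not figures taken from the text.

```python
import numpy as np

X0_RAD = 37.0    # electromagnetic radiation length in air, g/cm^2 (illustrative value)
E_CRIT = 85e6    # critical energy of electrons in air, eV (illustrative value, ~85 MeV)

def heitler_em_shower(e_primary_ev, x_first=0.0):
    """Toy Heitler model: the particle number doubles after each radiation
    length while the energy per particle halves, until it reaches E_CRIT."""
    n_generations = np.log2(e_primary_ev / E_CRIT)
    n_max = e_primary_ev / E_CRIT                  # particles at shower maximum
    x_max = x_first + n_generations * X0_RAD       # depth of shower maximum, g/cm^2
    return n_max, x_max

for e0 in (1e15, 1e17, 1e19):                      # primary energies in eV
    n_max, x_max = heitler_em_shower(e0)
    print(f"E0 = {e0:.0e} eV: N_max ~ {n_max:.1e}, X_max ~ {x_max:.0f} g/cm^2")
```

This reproduces the qualitative statements above: the particle number at maximum grows in proportion to the primary energy, while the depth of the maximum grows only logarithmically; hadronic showers and heavy primaries reach their maximum earlier than this toy electromagnetic estimate.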
Air shower (physics)
[ "Physics" ]
2,830
[ "Physical phenomena", "Earth phenomena", "Astrophysics", "Radiation", "Cosmic rays" ]
30,719,029
https://en.wikipedia.org/wiki/Sweet%20spot%20%28acoustics%29
The sweet spot is a term used by audiophiles and recording engineers to describe the focal point between two speakers, where an individual is fully capable of hearing the stereo audio mix the way it was intended to be heard by the mixer. The sweet spot is the location which creates an equilateral triangle together with the stereo loudspeakers, the stereo triangle. In the case of surround sound, this is the focal point between four or more speakers, i.e., the location at which all wave fronts arrive simultaneously. In international recommendations the sweet spot is referred to as reference listening point. Different static methods exist to broaden the area of the sweet spot. A discussion of methods and their benefits can be found in Merchel et al. By means of such methods more than one listener can enjoy the sound experience as intended by the audio engineer, including the desired phantom source locations, spectral and spatial balance and degree of immersion. Alternatively, the sweet spot can be adjusted dynamically to the actual position of the listener. Therefore, a correct phantom source localization is possible over the whole listening area. This approach is implemented in the open source project SweetSpotter. Massive multi-channel audio systems that apply wave field synthesis or higher order ambisonics exhibit an extended optimal listening area instead of a sweet spot. Sound engineers also refer to the sweet spot of any noise-producing body that may be captured with a microphone. Every individual instrument has its own sweet spot, the perfect location to place the microphone or microphones, in order to obtain the best sound. References Further reading Acoustics
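As a small geometric illustration of the stereo triangle described above (speaker spacing, listener positions, and the speed of sound are illustrative values), the sketch below computes the difference in arrival time between the two loudspeaker signals: it vanishes at the sweet spot and grows as the listener moves off-centre.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, approximate value at room temperature

def interchannel_delay(listener_x, listener_y, spacing=2.0):
    """Arrival-time difference (seconds) between the left and right speakers
    for a listener at (listener_x, listener_y); speakers sit at (+-spacing/2, 0)."""
    d_left = math.dist((-spacing / 2.0, 0.0), (listener_x, listener_y))
    d_right = math.dist((spacing / 2.0, 0.0), (listener_x, listener_y))
    return (d_left - d_right) / SPEED_OF_SOUND

spacing = 2.0
sweet_spot_y = spacing * math.sqrt(3) / 2.0      # apex of the equilateral stereo triangle

print("at the sweet spot:", interchannel_delay(0.0, sweet_spot_y, spacing), "s")
print("0.3 m off-centre :", interchannel_delay(0.3, sweet_spot_y, spacing) * 1e3, "ms")
```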
Sweet spot (acoustics)
[ "Physics" ]
322
[ "Classical mechanics", "Acoustics" ]
30,720,730
https://en.wikipedia.org/wiki/MAX%20phases
The MAX phases are layered, hexagonal carbides and nitrides which have the general formula: Mn+1AXn, (MAX) where n = 1 to 4, and M is an early transition metal, A is an A-group (mostly IIIA and IVA, or groups 13 and 14) element and X is either carbon and/or nitrogen. The layered structure consists of edge-sharing, distorted XM6 octahedra interleaved by single planar layers of the A-group element. History In the 1960s, H. Nowotny and co-workers discovered a large family of ternary, layered carbides and nitrides, which they called the 'H' phases, now known as the '211' MAX phases (i.e. n = 1), and several '312' MAX phases. Subsequent work extended to '312' phases such as Ti3SiC2 and showed it to have unusual mechanical properties. In 1996, Barsoum and El-Raghy synthesized for the first time fully dense and phase pure Ti3SiC2 and revealed, by characterization, that it possesses a distinct combination of some of the best properties of metals and engineering ceramics. In 1999 they also synthesized Ti4AlN3 (i.e. a '413' MAX phase) and realized that they were dealing with a much larger family of solids that all behaved similarly. In 2020, Mo4VAlC4 (i.e. a '514' MAX phase) was published, the first major expansion of the definition of the family in over twenty years. Since 1996, when the first "modern" paper was published on the subject, tremendous progress has been made in understanding the properties of these phases. Since 2006 research has focused on the fabrication, characterization and implementation of composites including MAX phase materials. Such systems, including aluminium-MAX phase composites, have the ability to further improve ductility and toughness over pure MAX phase material. Synthesis The synthesis of ternary MAX phase compounds and composites has been realized by different methods, including combustion synthesis, chemical vapor deposition, physical vapor deposition at different temperatures and flux rates, arc melting, hot isostatic pressing, self-propagating high-temperature synthesis (SHS), reactive sintering, spark plasma sintering, mechanical alloying and reaction in molten salt. An element replacement method in molten salts is developed to obtain series of Mn+1ZnXn and Mn+1CuXn MAX phases. Properties These carbides and nitrides possess an unusual combination of chemical, physical, electrical, and mechanical properties, exhibiting both metallic and ceramic characteristics under various conditions. These include high electrical and thermal conductivity, thermal shock resistance, damage tolerance, machinability, high elastic stiffness, and low thermal expansion coefficients. Some MAX phases are also highly resistant to chemical attack (e.g. Ti3SiC2) and high-temperature oxidation in air (Ti2AlC, Cr2AlC, and Ti3AlC2). They are useful in technologies involving high efficiency engines, damage tolerant thermal systems, increasing fatigue resistance, and retention of rigidity at high temperatures. These properties can be related to the electronic structure and chemical bonding in the MAX phases. It can be described as periodic alteration of high and low electron density regions. This allows for design of other nanolaminates based on the electronic structure similarities, such as Mo2BC and PdFe3N. Electrical The MAX phases are electrically and thermally conductive due to the metallic-like nature of their bonding. Most of the MAX phases are better electric and thermal conductors than Ti. This is also related to the electronic structure. 
Physical While MAX phases are stiff, they can be machined as easily as some metals. They can all be machined manually using a hacksaw, despite the fact that some of them are three times as stiff as titanium metal, with the same density as titanium. They can also be polished to a metallic luster because of their excellent electrical conductivity. They are not susceptible to thermal shock and are exceptionally damage tolerant. Some, such as Ti2AlC and Cr2AlC, are oxidation and corrosion resistant. Polycrystalline Ti3SiC2 has zero thermopower, a feature which is correlated to their anisotropic electronic structure. Mechanical The MAX phases as a class are generally stiff, lightweight, and plastic at high temperatures. Due to the layered atomic structure of these compounds, some, like Ti3SiC2 and Ti2AlC, are also creep and fatigue resistant, and maintain their strengths to high temperatures. They exhibit unique deformation characterized by basal slip (evidences of out-of-basal plane a-dislocations and dislocation cross-slips were recently reported in MAX phase deformed at high temperature and Frank partial c-dislocations induced by Cu-matrix diffusion were also reported), a combination of kink and shear band deformation, and delaminations of individual grains. During mechanical testing, it has been found that polycrystalline Ti3SiC2 cylinders can be repeatedly compressed at room temperature, up to stresses of 1 GPa, and fully recover upon the removal of the load while dissipating 25% of the energy. It was by characterizing these unique mechanical properties of the MAX phases that kinking non-linear solids were discovered. The micromechanism supposed to be responsible for these properties is the incipient kink band (IKB). However no direct evidence of these IKBs has been yet obtained, thus leaving the door open to other mechanisms that are less assumption-hungry. Indeed, a recent study demonstrates that the reversible hysteretic loops when cycling MAX polycrystals can be as well explained by the complex response of the very anisotropic lamellar microstructure. Potential applications Tough, machinable, thermal shock-resistant refractories High-temperature heating elements Coatings for electrical contacts Neutron irradiation resistant parts for nuclear applications Precursor for the synthesis of carbide-derived carbon Precursor for the synthesis of MXenes, a family of two-dimensional transition metal carbides, nitrides, and carbonitrides References Carbides Nitrides Ceramic materials
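As a minimal illustration of the general formula Mn+1AXn and the '211'/'312'/'413'/'514' naming used above (the element combinations shown are taken from the text; solid solutions such as Mo4VAlC4 are simplified to a single M element here):

```python
def max_phase_formula(m_element, a_element, x_element, n):
    """Return the Mn+1AXn stoichiometry string for n = 1 to 4."""
    x_part = x_element if n == 1 else f"{x_element}{n}"
    return f"{m_element}{n + 1}{a_element}{x_part}"

examples = [("Ti", "Al", "C", 1),   # Ti2AlC  -> a '211' phase
            ("Ti", "Si", "C", 2),   # Ti3SiC2 -> a '312' phase
            ("Ti", "Al", "N", 3),   # Ti4AlN3 -> a '413' phase
            ("Mo", "Al", "C", 4)]   # '514' stoichiometry (the real example is Mo4VAlC4)

for m, a, x, n in examples:
    print(f"n = {n}: {max_phase_formula(m, a, x, n)}  ('{n + 1}1{n}' phase)")
```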
MAX phases
[ "Engineering" ]
1,289
[ "Ceramic engineering", "Ceramic materials" ]
30,721,542
https://en.wikipedia.org/wiki/Nanofountain%20probe
A nanofountain probe (NFP) is a device for 'drawing' micropatterns of liquid chemicals at extremely fine resolution. An NFP contains a cantilevered micro-fluidic device terminated in a nanofountain. The embedded microfluidics facilitates rapid and continuous delivery of molecules from the on-chip reservoirs to the fountain tip. When the tip is brought into contact with the substrate, a liquid meniscus forms, providing a path for molecular transport to the substrate. By controlling the geometry of the meniscus through hold time and deposition speed, various inks and biomolecules can be patterned on a surface with sub-100 nm resolution. Historical background The advent of dip-pen nanolithography (DPN) in recent years represented a revolution in nanoscale patterning technology. With sub-100-nanometer resolution and an architecture conducive to massive parallelization, DPN is capable of producing large arrays of nanoscale features. However, conventional DPN and other probe-based techniques are generally limited in their rate of deposition and by the need for repeated re-inking during extended patterning. To address these challenges, the nanofountain probe was developed by Espinosa et al., in which microchannels were embedded in AFM probes to transport ink or biomolecules from reservoirs to substrates, realizing continuous writing at the nanoscale. Integration of continuous liquid ink feeding within the NFP facilitates more rapid deposition and eliminates the need for repeated dipping, all while preserving the sub-100-nanometer resolution of DPN. Microfabrication Nanofountain probes (NFPs) are fabricated at the wafer scale using microfabrication techniques, allowing for batch fabrication of numerous chips. Through the different generations of devices, design and experimentation improved the device, yielding a robust fabrication process. The improved feature dimensions and shapes are expected to improve performance in writing and imaging. Applications Direct-write nanopatterning The NFP is used in the development of a scalable, direct-write nanomanufacturing platform. The platform is capable of constructing complex, highly functional nanoscale devices from a diverse suite of materials (e.g., nanoparticles, catalysts, biomolecules, and chemical solutions). Demonstrated nanopatterning capabilities include: • Biomolecules (proteins, DNA) for biodetection assays or cell adhesion studies • Functional nanoparticles for drug delivery studies and nanosystem fabrication • Catalysts for carbon nanotube growth in nanodevice fabrication • Thiols for directed self-assembly of nanostructures. Direct in-vitro single-cell injection Taking advantage of the unique tip geometry of the NFP, nanomaterials can be directly injected into live cells with minimal invasiveness. This enables unique studies of nanoparticle-mediated delivery, as well as cellular pathways and toxicity. Whereas typical in vitro studies are limited to cell populations, these broadly applicable tools enable multifaceted interrogation at a truly single-cell level. See also Nanolithography References Lithography (microfabrication) Microtechnology Scanning probe microscopy Biological engineering Tissue engineering
Nanofountain probe
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
676
[ "Biological engineering", "Microtechnology", "Cloning", "Chemical engineering", "Materials science", "Tissue engineering", "Scanning probe microscopy", "Microscopy", "Nanotechnology", "Medical technology", "Lithography (microfabrication)" ]
30,724,348
https://en.wikipedia.org/wiki/Kardar%E2%80%93Parisi%E2%80%93Zhang%20equation
In mathematics, the Kardar–Parisi–Zhang (KPZ) equation is a non-linear stochastic partial differential equation, introduced by Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang in 1986. It describes the temporal change of a height field with spatial coordinate and time coordinate : Here, is white Gaussian noise with average and second moment , , and are parameters of the model, and is the dimension. In one spatial dimension, the KPZ equation corresponds to a stochastic version of Burgers' equation with field via the substitution . Via the renormalization group, the KPZ equation is conjectured to be the field theory of many surface growth models, such as the Eden model, ballistic deposition, and the weakly asymmetric single step solid on solid process (SOS) model. A rigorous proof has been given by Bertini and Giacomin in the case of the SOS model. KPZ universality class Many interacting particle systems, such as the totally asymmetric simple exclusion process, lie in the KPZ universality class. This class is characterized by the following critical exponents in one spatial dimension (1 + 1 dimension): the roughness exponent , growth exponent , and dynamic exponent . In order to check if a growth model is within the KPZ class, one can calculate the width of the surface: where is the mean surface height at time and is the size of the system. For models within the KPZ class, the main properties of the surface can be characterized by the Family–Vicsek scaling relation of the roughness with a scaling function satisfying In 2014, Hairer and Quastel showed that more generally, the following KPZ-like equations lie within the KPZ universality class: where is any even-degree polynomial. A family of processes that are conjectured to be universal limits in the (1+1) KPZ universality class and govern the long time fluctuations are the Airy processes and the KPZ fixed point. Solving the KPZ equation Due to the nonlinearity in the equation and the presence of space-time white noise, solutions to the KPZ equation are known to not be smooth or regular, but rather 'fractal' or 'rough.' Even without the nonlinear term, the equation reduces to the stochastic heat equation, whose solution is not differentiable in the space variable but satisfies a Hölder condition with exponent less than 1/2. Thus, the nonlinear term is ill-defined in a classical sense. In 2013, Martin Hairer made a breakthrough in solving the KPZ equation by an extension of the Cole–Hopf transformation and constructing approximations using Feynman diagrams. In 2014, he was awarded the Fields Medal for this work on the KPZ equation, along with rough paths theory and regularity structures. There were 6 different analytic self-similar solutions found for the (1+1) KPZ equation with different analytic noise terms. Physical derivation This derivation is from and. Suppose we want to describe a surface growth by some partial differential equation. Let represent the height of the surface at position and time . Their values are continuous. We expect that there would be a sort of smoothening mechanism. Then the simplest equation for the surface growth may be taken to be the diffusion equation, But this is a deterministic equation, implying the surface has no random fluctuations. The simplest way to include fluctuations is to add a noise term. Then we may employ the equation with taken to be the Gaussian white noise with mean zero and covariance . This is known as the Edwards–Wilkinson (EW) equation or stochastic heat equation with additive noise (SHE). 
Since this is a linear equation, it can be solved exactly by using Fourier analysis. But since the noise is Gaussian and the equation is linear, the fluctuations seen for this equation are still Gaussian. This means the EW equation is not enough to describe the surface growth of interest, so we need to add a nonlinear function for the growth. Therefore, surface growth change in time has three contributions. The first models lateral growth as a nonlinear function of the form . The second is a relaxation, or regularization, through the diffusion term , and the third is the white noise forcing . Therefore, The key term , the deterministic part of the growth, is assumed to be a function only of the slope, and to be a symmetric function. A great observation of Kardar, Parisi, and Zhang (KPZ) was that while a surface grows in a normal direction (to the surface), we are measuring the height on the height axis, which is perpendicular to the space axis, and hence there should appear a nonlinearity coming from this simple geometric effect. When the surface slope is small, the effect takes the form , but this leads to a seemingly intractable equation. To circumvent this difficulty, one can take a general and expand it as a Taylor series, The first term can be removed from the equation by a time shift, since if solves the KPZ equation, then solves The second should vanish because of the symmetry of , but could anyway have been removed from the equation by a constant velocity shift of coordinates, since if solves the KPZ equation, then solves Thus the quadratic term is the first nontrivial contribution, and it is the only one kept. We arrive at the KPZ equation See also Fokker–Planck equation Fractal Quantum field theory Renormalization group Rough path Stochastic partial differential equation Surface growth Tracy–Widom distribution Universality (dynamical systems) Sources Further reading Statistical mechanics Stochastic differential equations Partial differential equations Functions of space and time
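For readability, here is a compact restatement, in one standard convention, of the equation and scaling relations discussed above; the symbol choices (ν, λ, D) are conventional and the specific noise normalization is an assumption rather than something fixed by the text.

```latex
% KPZ equation for the height field h(x,t), in a standard convention:
\partial_t h = \nu\,\nabla^2 h + \frac{\lambda}{2}\,(\nabla h)^2 + \eta(x,t),
\qquad
\langle\eta\rangle = 0, \quad
\langle\eta(x,t)\,\eta(x',t')\rangle = 2D\,\delta^d(x-x')\,\delta(t-t').

% In one spatial dimension, the substitution u = -\partial_x h gives a stochastic Burgers equation:
\partial_t u + \lambda\, u\,\partial_x u = \nu\,\partial_x^2 u - \partial_x \eta .

% Interface width, Family--Vicsek scaling, and the (1+1)-dimensional KPZ exponents:
W(L,t) = \Big\langle \tfrac{1}{L}\!\int_0^L \big(h(x,t)-\bar h(t)\big)^2\,dx \Big\rangle^{1/2},
\qquad
W(L,t) \sim L^{\alpha} f\!\left(t/L^{z}\right),
\qquad
\alpha = \tfrac12,\;\; \beta = \tfrac13,\;\; z = \alpha/\beta = \tfrac32 ,
% with f(u) ~ u^{beta} for small u and f(u) -> const for large u.
```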
Kardar–Parisi–Zhang equation
[ "Physics" ]
1,197
[ "Spacetime", "Statistical mechanics", "Functions of space and time" ]
27,896,184
https://en.wikipedia.org/wiki/NFE2L3
Nuclear factor (erythroid 2)-like factor 3, also known as NFE2L3 or 'NRF3', is a transcription factor that in humans is encoded by the Nfe2l3 gene. NRF3 is a basic leucine zipper (bZIP) transcription factor belonging to the Cap ‘n’ Collar (CNC) family of proteins. In 1989, the first CNC transcription factor NFE2L2 was identified. Subsequently, several related proteins were identified, including NFE2L1 and NFE2L3, in different organisms such as humans, mice, and zebrafish. These proteins are encoded in humans by the Nfe2l1 and Nfe2l3 genes, respectively. Gene The Nfe2l3 gene was mapped to the chromosomal location 7p15-p14 by fluorescence in situ hybridization (FISH). It covers 34.93 kb, from base 26191830 to 26226754 on the direct DNA strand, with an exon count of 4. The gene is found near the HOXA gene cluster, similar to the clustering of p45 NF-E2, NFE2L1, and NFE2L2 near the HOXC, HOXB, and HOXD genes respectively. This implies that all four genes were likely derived from a single ancestral gene which was duplicated alongside the ancestral HOX cluster, diverging to give rise to four closely related transcription factors. The human Nfe2l3 gene encodes a protein of 694 amino acid residues. From bioinformatic analysis, it has been observed that the NRF3 protein shows a high degree of conservation through its evolutionary pathway from zebrafish to humans. Key conserved domains such as N-terminal homology box 1 (NHB1), N-terminal homology box 2 (NHB2), and the CNC domain allude to the conserved functional properties of this transcription factor. Sub-cellular location NRF3 is a membrane-bound glycoprotein that can be targeted specifically to the endoplasmic reticulum (ER) and the nuclear membrane. Biochemical studies have identified three migrating endogenous forms of the NRF3 protein, designated A, B, and C, which are constitutively degraded by several proteolytic mechanisms. It is known that the "A" form is glycosylated, whereas "B" is unglycosylated, and "C" is generated by cleavage of "B". In total, seven potential sites of N-linked glycosylation have been observed in the central portion of the NRF3 protein. However, further details of the three forms' location, regulation, and function in each cellular compartment remain unknown. Protein expression levels Expression levels of NRF3 proteins are highest in the placenta, more specifically in the chorionic villi (at week 12 of the gestation period). Expression appears to be specific to primary placental cytotrophoblasts, not placental fibroblasts. Along with the placenta, the expression of this protein has also been observed in human choriocarcinoma cell lines which have been derived from trophoblastic tumours of the placenta. NRF3 has also been found in the heart, brain, lungs, kidney, pancreas, colon, thymus, leukocytes, and spleen. Very low levels of expression were found in human megakaryocytes and erythrocytes, and NRF3 expression was not observed in reproductive organs of either sex. Function The specific functions of the NRF3 protein are still unknown, but some putative functional properties have been inferred from those of NFE2L1 due to their structural similarity. It is known that NRF3 can heterodimerize with small musculoaponeurotic fibrosarcoma (MAF) factors to bind antioxidant response elements in target genes. Associated diseases RNA microarray data has shown NRF3's involvement in various malignancies, with over-expression observed in Hodgkin's lymphoma, non-Hodgkin lymphoma, and mantle cell lymphoma. 
NRF3 expression is also elevated in human breast cancer cells and testicular carcinoma, implying that NRF3 may play a role in inducing carcinogenesis. References Further reading Transcription factors
NFE2L3
[ "Chemistry", "Biology" ]
929
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
27,900,214
https://en.wikipedia.org/wiki/Intramolecular%20Heck%20reaction
The intramolecular Heck reaction (IMHR) in chemistry is the coupling of an aryl or alkenyl halide with an alkene in the same molecule. The reaction may be used to produce carbocyclic or heterocyclic organic compounds with a variety of ring sizes. Chiral palladium complexes can be used to synthesize chiral intramolecular Heck reaction products in non-racemic form. Introduction The Heck reaction is the palladium-catalyzed coupling of an aryl or alkenyl halide with an alkene to form a substituted alkene. Intramolecular variants of the reaction may be used to generate cyclic products containing endo or exo double bonds. Ring sizes produced by the intramolecular Heck reaction range from four to twenty-seven atoms. Additionally, in the presence of a chiral palladium catalyst, the intramolecular Heck reaction may be used to establish tertiary or quaternary stereocenters with high enantioselectivity. A number of tandem reactions, in which the intermediate alkylpalladium complex is intercepted either intra- or intermolecularly before β-hydride elimination, have also been developed. (1) Mechanism and stereochemistry The neutral pathway As shown in Eq. 2, the neutral pathway of the Heck reaction begins with the oxidative addition of the aryl or alkenyl halide into a coordinatively unsaturated palladium(0) complex (typically bound to two phosphine ligands) to give complex I. Dissociation of a phosphine ligand followed by association of the alkene yields complex II, and migratory insertion of the alkene into the carbon-palladium bond establishes the key carbon-carbon bond. Insertion takes place in a suprafacial fashion, but the dihedral angle between the alkene and palladium-carbon bond during insertion can vary from 0° to ~90°. After insertion, β-hydride elimination affords the product and a palladium(II)-hydrido complex IV, which is reduced by base back to palladium(0). (2) The cationic pathway Most asymmetric Heck reactions employing chiral phosphines proceed by the cationic pathway, which does not require the dissociation of a phosphine ligand. Oxidative addition of an aryl perfluorosulfonate generates a cationic palladium aryl complex V. The mechanism then proceeds as in the neutral case, with the difference that an extra site of coordinative unsaturation exists on palladium throughout the process. Thus, coordination of the alkene does not require ligand dissociation. Stoichiometric amounts of base are still required to reduce the palladium(II)-hydrido complex VIII back to palladium(0). Silver salts may be used to initiate the cationic pathway in reactions of aryl halides. (3) The anionic pathway Reactions involving palladium(II) acetate and phosphine ligands proceed by a third mechanism, the anionic pathway. Base mediates the oxidation of a phosphine ligand by palladium(II) to a phosphine oxide. Oxidative addition then generates the anionic palladium complex IX. Loss of halide leads to neutral complex X, which undergoes steps analogous to the neutral pathway to regenerate anionic complex IX. A similar anionic pathway is also likely operative in reactions of bulky palladium tri(tert-butyl)phosphine complexes. (4) Establishing tertiary or quaternary stereocenters Asymmetric Heck reactions establish quaternary or tertiary stereocenters. If migratory insertion generates a quaternary center adjacent to the palladium-carbon bond (as in reactions of trisubstituted or 1,1-disubstituted alkenes), β-hydride elimination toward that center is not possible and it is retained in the product. 
Similarly, β-hydride elimination is not possible if a hydrogen syn to the palladium-carbon bond is not available. Thus, tertiary stereocenters can be established in conformationally restricted systems. (5) Scope and limitations The intramolecular Heck reaction may be used to form rings of a variety of sizes and topologies. β-Hydride elimination need not be the final step of the reaction, and tandem methods have been developed that involve the interception of palladium alkyl intermediates formed after migratory insertion by an additional reactant. This section discusses the most common ring sizes formed by the intramolecular Heck reaction and some of its tandem and asymmetric variants. 5-Exo cyclization, which establishes a five-membered ring with an exocyclic alkene, is the most facile cyclization mode in intramolecular Heck reactions. In this and many other modes of intramolecular Heck cyclization, annulations typically produce a cis ring juncture. (6) 6-Exo cyclization is also common. The high stability of Heck reaction catalysts permits the synthesis of highly strained compounds at elevated temperatures. In the example below, the arene and alkene must both be in energetically unfavorable axial positions in order to react. (7) Endo cyclization is observed most often when small or large rings are involved. For instance, 5-endo cyclization is generally preferred over 4-exo cyclization. The yield of endo product increases with increasing ring size in the synthesis of cycloheptenes, -octenes, and -nonenes. (8) Tandem reactions initiated by IMHR have been extensively explored. Palladium alkyl intermediates generated after migratory insertion may undergo a second round of insertion in the presence of a second alkene (either intra- or intermolecular). When dienes are involved in the intramolecular Heck reaction, insertion affords π-allylpalldium intermediates, which may be intercepted by nucleophiles. This idea was applied to a synthesis of (–)-morphine. (9) Asymmetric IMHR may establish tertiary or quaternary stereocenters. BINAP is the most commonly chiral ligand used in this context. An interesting application of IMHR is group-selective desymmetrization (enantiotopic group selection), in which the chiral palladium aryl intermediate undergoes insertion predominantly with one of the enantiotopic double bonds. (10) Synthetic applications The high functional group tolerance of the intramolecular Heck reaction allows it to be used at a very late stage in synthetic routes. In a synthesis of (±)-FR900482, IMHR establishes a tricyclic ring system in high yield without disturbing any of the sensitive functionality nearby. (11) Intramolecular Heck reactions have been employed for the construction of complex natural products. An example is the late-stage, macrocyclic ring closure in the total synthesis of the cytotoxic natural product (–)-Mandelalide A. In another example a fully intramolecular tandem Heck reaction is used in a synthesis of (–)-scopadulcic acid. A 6-exo cyclization sets the quaternary center and provides a neopentyl σ-palladium intermediate, which undergoes a 5-exo reaction to provide the ring system. (12) Comparison with other methods The closest competing method to IMHR is radical cyclization. Radical cyclizations are often reductive, which can cause undesired side reactions to occur if sensitive substrates are employed. The IMHR, on the other hand, can be run under reductive conditions if desired. 
Unlike the IMHR, radical cyclization does not require the coupling of two sp2-hybridized carbons. In some cases, the results of radical cyclization and IMHR are complementary. Experimental conditions and procedure Typical conditions A variety of experimental concerns exist for IMHR reactions. Although most of the common Pd(0) catalysts are commercially available (Pd(PPh3)4, Pd2(dba)3, and derivatives), they may also be prepared by simple, high-yielding procedures. Palladium(II) acetate is cheap and may be reduced in situ to palladium(0) with phosphine. Three equivalents of phosphine per equivalent of palladium acetate are commonly used; these conditions generate Pd(PR3)2 as the active catalyst. Bidentate phosphine ligands are common in asymmetric reactions to enhance stereoselectivity. A wide variety of bases may be used, and the base is often employed in excess. Potassium carbonate is the most common base employed, and inorganic bases are generally used more often than organic bases. A number of additives have also been identified for the Heck reaction—silver salts may be used to drive the reaction down the cationic pathway, and halide salts may be used to convert aryl triflates via the neutral pathway. Alcohols have been shown to enhance catalyst stability in some cases, and acetate salts are beneficial in reactions following the anionic pathway. Example procedure (13) A solution of the amide (0.365 g, 0.809 mmol), Pd(PPh3)4 (0.187 g, 0.162 mmol), and triethylamine (1.12 mL, 8.08 mmol) in MeCN (8 mL) in a sealed tube was heated slowly to 120°. After stirring for 4 hours, the reaction mixture was cooled to room temperature, and the solvent was evaporated. The residue was chromatographed (loaded with CH2Cl2) to give the title product 316 (0.270 g, 90%) as a colorless oil; Rf 0.42 (EtOAc/petroleum ether 10:1); [α]22D +14.9 (c, 1.0, CHCl3); IR 3027, 2930, 1712, 1673, 1608, 1492, 1343, 1248 cm−1; 1H NMR (400 MHz) δ 7.33–7.21 (m, 6 H), 7.07 (dd, J = 7.3, 16.4 Hz, 1 H), 7.00 (t, J = 7.5 Hz, 1 H), 6.77 (d, J = 7.7 Hz, 1 H), 6.30 (dd, J = 8.7, 11.4 Hz, 1 H), 5.32 (d, J = 15.7 Hz, 1 H), 5.04 (s, 1 H), 4.95 (s, 1 H), 4.93 (d, J = 11.1 Hz, 1 H), 4.17 (s, 1 H), 3.98 (d, J = 15.7 Hz, 1 H), 3.62 (d, J = 8.7 Hz, 1 H), 3.17 (s, 3 H), 2.56 (dd, J = 3.5, 15.5 Hz, 1 H), 2.06 (dd, J = 2.8, 15.5 Hz, 1 H); 13C NMR (100 MHz) δ 177.4, 172.9, 147.8, 142.2, 136.5, 132.2, 131.6, 128.8, 128.4, 128.2, 127.7, 127.1, 123.7, 122.9, 107.9, 105.9, 61.0, 54.7, 49.9, 44.4, 38.2, 26.4; HRMS Calcd. for C24H22N2O2: 370.1681. Found: 370.1692. References Organic reactions
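As a quick arithmetic check of the 90% yield reported in the example procedure above, using only quantities quoted there (0.809 mmol of the amide, 0.270 g of isolated product, molecular formula C24H22N2O2); the 1:1 stoichiometry and the average molecular weight are the only assumptions added here.

```python
# Quantities quoted in the example procedure
amide_mmol = 0.809            # limiting substrate, mmol
product_mass_g = 0.270        # isolated product, g
product_mw = 370.45           # average molecular weight of C24H22N2O2, g/mol (from atomic weights)

theoretical_g = amide_mmol / 1000.0 * product_mw   # assuming 1:1 cyclization stoichiometry
yield_percent = 100.0 * product_mass_g / theoretical_g

print(f"theoretical mass: {theoretical_g:.3f} g")   # ~0.300 g
print(f"isolated yield:   {yield_percent:.0f} %")   # ~90 %, consistent with the reported value
```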
Intramolecular Heck reaction
[ "Chemistry" ]
2,544
[ "Organic reactions" ]
27,901,866
https://en.wikipedia.org/wiki/Zariski%20ring
In commutative algebra, a Zariski ring is a commutative Noetherian topological ring A whose topology is defined by an ideal I contained in the Jacobson radical, the intersection of all maximal ideals. They were introduced under the name "semi-local ring", which now means something different, and were later named "Zariski rings". Examples of Zariski rings are Noetherian local rings with the topology induced by the maximal ideal, and I-adic completions of Noetherian rings. Let A be a Noetherian topological ring with the topology defined by an ideal I. Then the following are equivalent. A is a Zariski ring. The I-adic completion of A is faithfully flat over A (in general, it is only flat over A). Every maximal ideal of A is closed. References Commutative algebra
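A short worked example of the definition and of the equivalent characterizations above (the choice of the ring and of the symbol for the defining ideal is purely illustrative):

```latex
% Take the Noetherian local ring obtained by localizing a polynomial ring at the origin,
% with the topology defined by its maximal ideal:
A = k[x_1,\dots,x_n]_{(x_1,\dots,x_n)}, \qquad I = \mathfrak{m} = (x_1,\dots,x_n)A .
% Since \mathfrak{m} is the unique maximal ideal, it equals the Jacobson radical, so A is a Zariski ring.
% Consistently with the equivalences above, the I-adic completion
\widehat{A} = \varprojlim_m A/I^m \;\cong\; k[[x_1,\dots,x_n]]
% is faithfully flat over A, and the unique maximal ideal \mathfrak{m} is closed.
```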
Zariski ring
[ "Mathematics" ]
169
[ "Fields of abstract algebra", "Commutative algebra" ]
27,902,093
https://en.wikipedia.org/wiki/Zariski%E2%80%93Riemann%20space
In algebraic geometry, a Zariski–Riemann space or Zariski space of a subring k of a field K is a locally ringed space whose points are valuation rings containing k and contained in K. They generalize the Riemann surface of a complex curve. Zariski–Riemann spaces were introduced by who (rather confusingly) called them Riemann manifolds or Riemann surfaces. They were named Zariski–Riemann spaces after Oscar Zariski and Bernhard Riemann by who used them to show that algebraic varieties can be embedded in complete ones. Local uniformization (proved in characteristic 0 by Zariski) can be interpreted as saying that the Zariski–Riemann space of a variety is nonsingular in some sense, so is a sort of rather weak resolution of singularities. This does not solve the problem of resolution of singularities because in dimensions greater than 1 the Zariski–Riemann space is not locally affine and in particular is not a scheme. Definition The Zariski–Riemann space of a field K over a base field k is a locally ringed space whose points are the valuation rings containing k and contained in K. Sometimes the valuation ring K itself is excluded, and sometimes the points are restricted to the zero-dimensional valuation rings (those whose residue field has transcendence degree zero over k). If S is the Zariski–Riemann space of a subring k of a field K, it has a topology defined by taking a basis of open sets to be the valuation rings containing a given finite subset of K. The space S is quasi-compact. It is made into a locally ringed space by assigning to any open subset the intersection of the valuation rings of the points of the subset. The local ring at any point is the corresponding valuation ring. The Zariski–Riemann space of a function field can also be constructed as the inverse limit of all complete (or projective) models of the function field. Examples The Riemann–Zariski space of a curve The Riemann–Zariski space of a curve over an algebraically closed field k with function field K is the same as the nonsingular projective model of it. It has one generic non-closed point corresponding to the trivial valuation with valuation ring K, and its other points are the rank 1 valuation rings in K containing k. Unlike the higher-dimensional cases, the Zariski–Riemann space of a curve is a scheme. The Riemann–Zariski space of a surface The valuation rings of a surface S over k with function field K can be classified by the dimension (the transcendence degree of the residue field) and the rank (the number of nonzero convex subgroups of the valuation group). gave the following classification: Dimension 2. The only possibility is the trivial valuation with rank 0, valuation group 0 and valuation ring K. Dimension 1, rank 1. These correspond to divisors on some blowup of S, or in other words to divisors and infinitely near points of S. They are all discrete. The center in S can be either a point or a curve. The valuation group is Z. Dimension 0, rank 2. These correspond to germs of algebraic curves through a point on a normal model of S. The valuation group is isomorphic to Z+Z with the lexicographic order. Dimension 0, rank 1, discrete. These correspond to germs of non-algebraic curves (given for example by y= a non-algebraic formal power series in x) through a point of a normal model. The valuation group is Z. Dimension 0, rank 1, non-discrete, value group has incommensurable elements. These correspond to germs of transcendental curves such as y=xπ through a point of a normal model. 
The value group is isomorphic to an ordered group generated by 2 incommensurable real numbers. Dimension 0, rank 1, non-discrete, value group elements are commensurable. The value group can be isomorphic to any dense subgroup of the rational numbers. These correspond to germs of curves of the form y=Σanxbn where the numbers bn are rational with unbounded denominators. References Algebraic geometry Bernhard Riemann
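A concrete instance of the curve case described above, for the simplest function field (notation chosen here for illustration):

```latex
% Zariski-Riemann space of K = k(t) over an algebraically closed field k.
% The valuation rings of K containing k are exactly
\mathcal{O}_a = k[t]_{(t-a)} \;\;(a \in k), \qquad
\mathcal{O}_\infty = k[t^{-1}]_{(t^{-1})}, \qquad
\mathcal{O}_\eta = K \;(\text{the trivial valuation}).
% The first two families are the rank-1 discrete valuation rings attached to the closed points
% a and \infty of the projective line \mathbb{P}^1_k, the nonsingular projective model of K,
% while \mathcal{O}_\eta is the generic point; so the Zariski-Riemann space of k(t)
% coincides with \mathbb{P}^1_k as a locally ringed space.
```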
Zariski–Riemann space
[ "Mathematics" ]
888
[ "Fields of abstract algebra", "Algebraic geometry" ]
27,902,531
https://en.wikipedia.org/wiki/Duncan%27s%20taxonomy
Duncan's taxonomy is a classification of computer architectures, proposed by Ralph Duncan in 1990. Duncan suggested modifications to Flynn's taxonomy to include pipelined vector processors. Taxonomy The taxonomy was developed during 1988-1990 and was first published in 1990. Its original categories are indicated below. Synchronous architectures This category includes all the parallel architectures that coordinate concurrent execution in lockstep fashion and do so via mechanisms such as global clocks, central control units or vector unit controllers. Further subdivision of this category is made primarily on the basis of the synchronization mechanism. Pipelined vector processors Pipelined vector processors are characterized by pipelined functional units that accept a sequential stream of array or vector elements, such that different stages in a filled pipeline are processing different elements of the vector at a given time. Parallelism is provided both by the pipelining in individual functional units described above, as well as by operating multiple units of this kind in parallel and by chaining the output of one unit into another unit as input. Vector architectures that stream vector elements into functional units from special vector registers are termed register-to-register architectures, while those that feed functional units from special memory buffers are designated as memory-to-memory architectures. Early examples of register-to-register architectures, from the mid-1970s and early 1980s, include the Cray-1 and Fujitsu VP-200, while the Control Data Corporation STAR-100, CDC 205 and the Texas Instruments Advanced Scientific Computer are early examples of memory-to-memory vector architectures. The late 1980s and early 1990s saw the introduction of vector architectures, such as the Cray Y-MP/4 and Nippon Electric Corporation SX-3, that supported 4-10 vector processors with a shared memory (see NEC SX architecture). SIMD This scheme uses the SIMD (single instruction stream, multiple data stream) category from Flynn's taxonomy as a root class for processor array and associative memory subclasses. SIMD architectures are characterized by having a control unit broadcast a common instruction to all processing elements, which execute that instruction in lockstep on diverse operands from local data. Common features include the ability for individual processors to disable an instruction and the ability to propagate instruction results to immediate neighbors over an interconnection network. Processor array Associative memory Systolic array Systolic arrays, proposed during the 1980s, are multiprocessors in which data and partial results are rhythmically pumped from processor to processor through a regular, local interconnection network. Systolic architectures use a global clock and explicit timing delays to synchronize data flow from processor to processor. Each processor in a systolic system executes an invariant sequence of instructions before data and results are pulsed to neighboring processors. MIMD architectures Based on Flynn's multiple-instruction-multiple-data streams terminology, this category spans a wide spectrum of architectures in which processors execute multiple instruction sequences on (potentially) dissimilar data streams without strict synchronization. Although both instruction and data streams can be different for each processor, they need not be. 
Thus, MIMD architectures can run identical programs that are in various stages at any given time, run unique instruction and data streams on each processor, or execute a combination of these scenarios. This category is subdivided further primarily on the basis of memory organization. Distributed memory Shared memory MIMD-paradigm architectures The MIMD-based paradigms category subsumes systems in which a specific programming or execution paradigm is at least as fundamental to the architectural design as structural considerations are. Thus, the design of dataflow architectures and reduction machines is as much the product of supporting their distinctive execution paradigm as it is a product of connecting processors and memories in MIMD fashion. The category's subdivisions are defined by these paradigms. MIMD/SIMD hybrid Dataflow machine Reduction machine Wavefront array References C Xavier and S S Iyengar, Introduction to Parallel Programming Computer architecture
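As an illustrative aside (not part of Duncan's paper or the article above), the category hierarchy just described can be written down as a small nested data structure; the category names are taken directly from the article, while the Python encoding itself is only one possible representation.

    # Duncan's taxonomy as a nested dictionary; leaves are lists of subclasses.
    DUNCAN_TAXONOMY = {
        "Synchronous architectures": {
            "Pipelined vector processors": ["register-to-register", "memory-to-memory"],
            "SIMD": ["Processor array", "Associative memory"],
            "Systolic array": [],
        },
        "MIMD architectures": {
            "Distributed memory": [],
            "Shared memory": [],
        },
        "MIMD-paradigm architectures": {
            "MIMD/SIMD hybrid": [],
            "Dataflow machine": [],
            "Reduction machine": [],
            "Wavefront array": [],
        },
    }

    def print_taxonomy(tree, indent=0):
        """Walk the nested dict and print an indented outline of the categories."""
        for name, children in tree.items():
            print(" " * indent + name)
            if isinstance(children, dict):
                print_taxonomy(children, indent + 2)
            else:
                for leaf in children:
                    print(" " * (indent + 2) + leaf)

    print_taxonomy(DUNCAN_TAXONOMY)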
Duncan's taxonomy
[ "Technology", "Engineering" ]
821
[ "Computers", "Computer engineering", "Computer architecture" ]
27,902,589
https://en.wikipedia.org/wiki/Living%20cationic%20polymerization
Living cationic polymerization is a living polymerization technique involving cationic propagating species. It enables the synthesis of very well defined polymers (low molar mass distribution) and of polymers with unusual architecture such as star polymers and block copolymers; living cationic polymerization is therefore of both commercial and academic interest. Basics In carbocationic polymerization the active site is a carbocation with a counterion in close proximity. The basic reaction steps are: Chain initiation: A+B− + H2C=CHR → A-CH2-RHC+----B− Chain propagation: A-CH2-RHC+----B− + H2C=CHR → A-(CH2-RHC)n-CH2-RHC+----B− Chain termination: A-(CH2-RHC)n-CH2-RHC+----B− → A-(CH2-RHC)n-CH2-RHC-B Chain transfer: A-(CH2-RHC)n-CH2-RHC+----B− → A-(CH2-RHC)n-CH=CHR + H+B− Living cationic polymerization is characterised by defined and controlled initiation and propagation while minimizing the side reactions of termination and chain transfer. Transfer and termination do occur, but in ideal living systems the active ionic propagating species are in chemical equilibrium with the dormant covalent species, with an exchange rate much faster than the propagation rate. Solution methods require rigorous purification of monomer and solvent, although conditions are not as strict as in anionic polymerization. Common monomers are vinyl ethers, alpha-methyl vinyl ethers, isobutene, styrene, methylstyrene and N-vinylcarbazole. The monomer is nucleophilic and substituents should be able to stabilize a positive carbocationic charge. For example, para-methoxystyrene is more reactive than styrene itself. Initiation takes place by an initiator/coinitiator binary system, for example an alcohol and a Lewis acid. The active electrophile is then a proton and the counterion is the remaining alkoxide, which is stabilized by the Lewis acid. With organic acetates such as cumyl acetate the initiating species is the carbocation R+ and the counterion is the acetate anion. In the iodine/hydrogen iodide system the electrophile is again a proton and the carbocation is stabilized by the triiodide ion. Polymerizations with diethylaluminium chloride rely on trace amounts of water; a proton is then accompanied by the counterion Et2AlClOH−. With tert-butyl chloride, Et2AlCl abstracts a chloride ion to form the tert-butyl carbocation as the electrophile. Efficient initiators that resemble the monomer are called cationogens. Termination and chain transfer are minimized when the initiator counterion is both non-nucleophilic and non-basic. More polar solvents promote ion dissociation and hence increase molar mass. Common additives are electron donors, salts and proton traps. Electron donors (nucleophiles, Lewis bases), for example dimethyl sulfide and dimethyl sulfoxide, are believed to stabilize the carbocation. The addition of a salt, for example a tetraalkylammonium salt, prevents dissociation of the ion pair that is the propagating reactive site; dissociation into free ions leads to non-living polymerization. Proton traps scavenge protons originating from protic impurities. History The method was developed starting in the 1970s and 1980s with contributions from Higashimura on the polymerization of p-methoxystyrene using iodine or acetyl perchlorate, on the polymerization of isobutyl vinyl ether by iodine and, with Mitsuo Sawamoto, by iodine/HI, and on the formation of p-methoxystyrene - isobutyl vinyl ether block copolymers. 
Kennedy and Faust studied methylstyrene / boron trichloride polymerization (then called quasi-living) in 1982 and that of isobutylene (system with cumyl acetate, 2,4,4-trimethylpentane-2-acetate and BCl3) in 1984. Around the same time, Kennedy and Mishra discovered a very efficient living polymerization of isobutylene (system with a tertiary alkyl (or aryl) methyl ether and BCl3), which paved the way for the rapid development of macromolecularly engineered polymers. Isobutylene polymerization Living isobutylene polymerization typically takes place in a mixed solvent system comprising a non-polar solvent, such as hexane, and a polar solvent, such as chloroform or dichloromethane, at temperatures below 0 °C. With more polar solvents polyisobutylene solubility becomes a problem. Initiators can be alcohols, halides and ethers. Coinitiators are boron trichloride, tin tetrachloride and organoaluminum halides. With ethers and alcohols the true initiator is the chlorinated product. Polymers with a molar mass of 160,000 g/mol and a polydispersity index of 1.02 can be obtained. Vinyl ether polymerization Vinyl ethers (CH2=CHOR, R = methyl, ethyl, isobutyl, benzyl) are very reactive vinyl monomers. Studied systems are based on I2/HI and on the zinc halides zinc chloride, zinc bromide and zinc iodide. Living cationic ring-opening polymerization In living cationic ring-opening polymerization the monomer is a heterocycle such as an epoxide, THF, an oxazoline or an aziridine such as t-butylaziridine. The propagating species is not a carbocation but an oxonium ion. Living polymerization is more difficult to achieve because of the ease of termination by nucleophilic attack of a heteroatom in the growing polymer chain. Intramolecular termination is called backbiting and results in the formation of cyclic oligomers. Initiators are strong electrophiles such as triflic acid. Triflic anhydride is an initiator for bifunctional polymers. References Polymerization reactions
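As an illustrative aside (not from the original article), the hallmark of an ideal living polymerization is that the number-average degree of polymerization grows linearly with conversion, DPn = conversion × [M]0/[I]0, and that the chain-length distribution approaches a Poisson distribution, so that Mw/Mn ≈ 1 + 1/DPn. A minimal Python sketch of this textbook bookkeeping, with all numerical inputs chosen arbitrarily for illustration:

    # Idealized living polymerization bookkeeping (textbook relations, illustrative values only).
    MONOMER_MOLAR_MASS = 56.11          # g/mol, isobutylene
    monomer_initiator_ratio = 3000.0    # [M]0 / [I]0, hypothetical
    conversion = 0.95                   # fraction of monomer consumed, hypothetical

    dp_n = monomer_initiator_ratio * conversion   # number-average degree of polymerization
    m_n = dp_n * MONOMER_MOLAR_MASS               # number-average molar mass, g/mol
    pdi = 1.0 + 1.0 / dp_n                        # Poisson approximation for Mw/Mn

    print(f"DPn = {dp_n:.0f}, Mn = {m_n:.0f} g/mol, PDI = {pdi:.4f}")
    # With these assumed inputs Mn is of order 1.6e5 g/mol and the PDI is close to 1,
    # comparable in magnitude to the 160,000 g/mol and 1.02 figures quoted above
    # (real systems are somewhat broader than the ideal Poisson limit).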
Living cationic polymerization
[ "Chemistry", "Materials_science" ]
1,389
[ "Polymerization reactions", "Polymer chemistry" ]
27,903,610
https://en.wikipedia.org/wiki/Zariski%20space
In algebraic geometry, a Zariski space, named for Oscar Zariski, has several different meanings: A topological space that is Noetherian (every open set is quasicompact) A topological space that is Noetherian and also sober (every nonempty closed irreducible subset is the closure of a unique point). The spectrum of any commutative Noetherian ring is a Zariski space in this sense A Zariski–Riemann space of valuations of a field Algebraic geometry
Zariski space
[ "Mathematics" ]
108
[ "Fields of abstract algebra", "Algebraic geometry" ]
27,904,054
https://en.wikipedia.org/wiki/Electric%20vehicle%20warning%20sounds
Electric vehicle warning sounds are sounds designed to alert pedestrians to the presence of electric drive vehicles such as hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), and battery electric vehicles (BEVs) travelling at low speeds. Warning sound devices were deemed necessary by some government regulators because vehicles operating in all-electric mode produce less noise than traditional combustion engine vehicles and can make it more difficult for pedestrians and cyclists (especially those with visual impairments) to be aware of their presence. Warning sounds may be driver triggered (as in a horn but less urgent) or automatic at low speeds; in type, they vary from clearly artificial (beeps, chimes) to those that mimic engine sounds and those of tires moving over gravel. Japan issued guidelines for such warning devices in January 2010 and the U.S. approved legislation in December 2010. The U.S. National Highway Traffic Safety Administration issued its final ruling in February 2018; it requires the device to emit warning sounds when travelling at speeds of less than 30 km/h (18.6 mph), with full compliance by September 2020, but 50% of "quiet" vehicles must have the warning sounds by September 2019. In April 2014, the European Parliament approved legislation that requires the mandatory use of an acoustic vehicle alerting system (AVAS). Manufacturers must install an AVAS in four-wheeled electric and hybrid electric vehicle types approved from July 1, 2019, and in all new quiet electric and hybrid vehicles registered from July 2021. The vehicle must make a continuous noise level of at least 56 dBA (within 2 meters) if the car is going 20 km/h (12 mph) or slower, and a maximum of 75 dBA. Several automakers have developed electric warning sound devices, and since December 2011 advanced technology cars available in the market with manually activated electric warning sounds include the Nissan Leaf, Chevrolet Volt, Honda FCX Clarity, Nissan Fuga Hybrid/Infiniti M35, Hyundai Sonata Hybrid, and the Toyota Prius (Japan only). Models equipped with automatically activated systems include the 2014 BMW i3 (option not available in the US), the 2012 model year Toyota Camry Hybrid, the 2012 Lexus CT200h, all EV versions of the Honda Fit, and all Prius family cars recently introduced in the United States, including the standard 2012 model year Prius, the Toyota Prius v, Prius c and the Toyota Prius Plug-in Hybrid. The 2013 Smart electric drive optionally comes with automatically activated sounds in the U.S. and Japan and manually activated sounds in Europe. Background As a result of increased sales of full electric vehicles and hybrid electric vehicles in several countries, some members of the blind community have raised concerns about the noise reduction when those vehicles operate in all-electric mode, as blind and visually impaired people consider the noise of combustion engines a helpful aid while crossing streets and think quiet hybrids could pose an unexpected hazard. Although a 2009 study found no statistically significant difference in pedestrian crashes involving quiet hybrid vehicles compared to noisier vehicles when both types were travelling in a straight line, it found a doubling of hybrid vehicle pedestrian crashes during slow-speed manoeuvres such as reversing or parking. This problem is not exclusive to electric vehicles: in 2007, research at the Technical University Munich showed that ordinary vehicles in background noise are often detected too late for safe accident avoidance. 
The researchers measured the distance at which vehicles approaching pedestrians became audible with minimal background noise. These distances were then compared to the stopping distances of the respective cars and an algorithm was proposed to estimate them based on auditory masking. Research conducted at the University of California, Riverside in 2008 found that hybrid cars are so quiet when operating in electric mode (EV mode) that they may pose a risk to pedestrians and cyclists, especially the blind, children and the elderly, as they may have only one or two seconds, depending on the context, to audibly detect the location of approaching hybrid cars when the vehicles operate at very slow speeds. This research project was funded by the National Federation of the Blind. The experiment consisted of making audio recordings of a Toyota Prius and combustion engine Honda Accord approaching from two directions at to ensure that the hybrid car operated only with its electric motor. Then test subjects in a laboratory listened to the recordings and indicated when they could hear from which direction the cars approached. Subjects could locate the hum of the internal combustion engine car at away, but could not identify the hybrid running in electric mode until it came within , leaving just less than two seconds to react before the vehicle reached their position. In a second trial, the background sounds of two quietly idling combustion engine cars were added to the recordings to simulate the noise of a parking lot. Under this condition, the hybrid needed to be 74 percent closer than the conventional car before the subjects could hear from which direction the cars approached. Subjects could correctly judge the approach of the combustion car when it was about away. This result means that under closer to normal environmental noise, a pedestrian would not be able to correctly determine the hybrid's approach until it was one second away. A separate 2008 study from Western Michigan University found that hybrids and conventional vehicles are equally safe when travelling more than about , because tire and wind noise generate most of the audible cues at those speeds. Hybrid cars were also tested safe when moving off at traffic lights and it was found that under this condition they do not pose a risk to pedestrians. All Prius models used in the study engaged their internal combustion engines when accelerating from a standstill and produced enough noise to be detected. A 2009 study conducted by the U.S. National Highway Traffic Safety Administration found that crashes involving pedestrians and bicyclists have higher incidence rates for hybrid electric vehicles than internal combustion engine (ICE) vehicles in low-speed vehicle manoeuvres such as reversing or leaving a parking zone. These accidents commonly occurred in zones with low speed limits, during daytime and in clear weather. The study found that a HEV was two times more likely to be involved in a pedestrian crash than was a conventional ICE vehicle when a vehicle is slowing or stopping, backing up, or entering or leaving a parking space. Vehicle manoeuvres were grouped in one category considering those manoeuvres that might have occurred at very low speeds where the difference between the sound levels produced by the hybrid versus ICE vehicle is the greatest. Also the study found that the incidence rate of pedestrian crashes in scenarios when vehicles make a turn was significantly higher for HEVs when compared to ICE vehicles. 
Similarly, The NHTSA study also concluded that the incidence rate of bicyclist crashes involving HEVs for the same kind of manoeuvres was significantly higher when compared to conventional vehicles. In September 2010, Volvo Cars and Vattenfall, a Swedish energy company, issued a report regarding the results of the first phase of the Volvo V70 Plug-in Hybrid demonstration program. Among other findings, before the trial drivers participating in the field testing were concerned about being a danger to pedestrians and cyclists due to the quietness of the electric-drive vehicle. After the test several of them changed their opinion and said that this issue was less of a problem than they expected. Nevertheless, some test drivers said they experienced incidents of not being noticed while others said they had taken extra care in their driving with regard to this issue. Regulations Since 2009 the Japanese government, the U.S. Congress and the European Commission are exploring legislation to establish a minimum level of sound for plug-in electric and hybrid electric vehicles when operating in electric mode, so that blind people and other pedestrians and cyclists can hear them coming and detect from which direction they are approaching. Tests have shown that vehicles operating in electric mode can be particularly hard to hear below . European Union In 2011 the European Commission drafted a guideline for acoustic vehicle alerting systems (AVAS). The goal is to present recommendations to manufacturers for a system to be installed in vehicles to provide an audible signal to pedestrians and vulnerable road users. This interim guideline is intended to provide guidance until the completion of on-going research activities and the development of globally harmonised device performance specifications. The guidelines are intended for hybrid electric and pure electric highway-capable vehicles. The guideline recommends that the AVAS should automatically generate a continuous sound in the minimum range of vehicle speed from start-up to approximately and during reversing, if applicable for that vehicle category, and lists the types of sounds that are not acceptable. It also states that the AVAS may have a pause switch to stop its operation temporarily. On 6 February 2013, the European Parliament approved a draft law to tighten noise limits for cars to protect public health, and also to add alerting sounds to ensure the audibility of hybrid and electric vehicles to improve the safety of vulnerable road users in urban areas, such as blind, visually and auditorily challenged pedestrians, cyclists and children. The draft legislation states a number of tests, standards and measures that must first be developed for acoustic vehicle alerting systems (AVAS) to be compulsory in the future. Now an agreement has to be negotiated with European Union countries. The approved amendment establishes that "the sound to be generated by the AVAS should be a continuous sound that provides information to the pedestrians and vulnerable road users of a vehicle in operation. The sound should be easily indicative of vehicle behaviour and should sound similar to the sound of a vehicle of the same category equipped with an internal combustion engine." In April 2014 the European Parliament approved the legislation (Regulation (EU) No 540/2014) that requires the acoustic vehicle alerting systems, which is mandatory for all new electric and hybrid electric vehicles. 
The new rule established a transitional period of 5 years after publication of the final approval of the April 2014 proposal to comply with the regulation. Japan Beginning in July 2009 the Japanese government began assessing possible countermeasures through the Committee for the Consideration of Countermeasures Regarding Quiet Hybrid and Other Vehicles, and in January 2010 the Ministry of Land, Infrastructure, Transport and Tourism issued guidelines for hybrid and other near-silent vehicles. China Beginning in December 2018, the Chinese government explored guidelines regarding an acoustic vehicle alerting system of electric vehicles running at low speed, and implemented them in September 2019. United Kingdom The Department for Transport (DfT) commissioned research to gather statistics on accidents involving electric vehicles with pedestrians who are blind or vision impaired to determine whether the perceived accident risk is real and whether electric and hybrid cars are more difficult to detect audibly than conventional internal combustion engine vehicles. The DfT goal was to use the findings to establish what sort of sound should be fitted to electric vehicles. The research was conducted by the Transport Research Laboratory, and the findings were published in 2011. The study found little correlation between the rate of accidents with pedestrians and noise level for the majority of vehicles. In addition, the analysis found no evidence of a pattern in accident rate when only considering those accidents occurring on or slower roads, or where the pedestrian was disabled. A previous study did not find an increased pedestrian accident rate for electric and hybrid vehicles with respect to their conventional counterparts, which raised the question as to whether added sound is necessarily required. The study also noted that some modern conventional cars are as quiet as their electric counterparts, even at low speeds. UK organisation The Guide Dogs for the Blind Association lobbied members of the European Parliament to vote in favour of legislation to make the installation of artificial sound generators mandatory on quiet electric and hybrid vehicles. United States The Pedestrian Safety Enhancement Act of 2010 was approved by the U.S. Senate by unanimous consent on December 9, 2010 and passed by the House of Representatives by 379 to 30 on December 16, 2010. The act does not stipulate a specific speed for the simulated noise but requires the U.S. Department of Transportation to study and establish a motor vehicle safety standard that would set requirements for an alert sound that allows blind and other pedestrians to reasonably detect a nearby electric or hybrid vehicle, and the ruling must be finalised within eighteen months. The bill was signed into law by President Barack Obama on January 4, 2011. A proposed rule was published for comment by the National Highway Traffic Safety Administration (NHTSA) in January, 2013. It would require hybrids and electric vehicles travelling at less than to emit warning sounds that pedestrians must be able to hear over background noises. The agency selected 30 km/h as the limit because, according to NHTSA measurements, this is the speed at which the sound levels of the hybrid and electric vehicles are approximately equivalent to the sound levels produced by similar internal combustion vehicles. 
According to the NHTSA proposal, car manufacturers would be able to pick the sounds the vehicles make from a range of choices, and similar vehicles would have to make the same sounds. The rules were scheduled to go into effect in September 2014. The NHTSA estimates that the new warning noises would prevent 2,800 pedestrian and cyclist injuries during the life of each model year electric and hybrid vehicle. In February 2013, the Association of Global Automakers and the Alliance of Automobile Manufacturers, which submitted a joint comment to the NHTSA, announced their support to the rule, but asked the NHTSA to find a noise level that effectively alerts pedestrians without being excessively loud to others inside and outside of the vehicle. They also commented that the rule is too complicated, unnecessarily prescriptive, and it will cost more than necessary. Some automakers also said there is no need for electric-drive vehicles to play sounds while not in motion, "since it is not clear that it helps pedestrians to hear cars that are stopped in traffic or parked." In addition, the vehicle manufacturers requested the NHTSA to make the new sound system required by 2018 instead of 2014. In January 2015, the NHTSA rescheduled the date for a final ruling to the end of 2015. Since the regulation comes into force three years after being rendered as a final rule, compliance was delayed to 2018. In November 2015, the NHTSA rescheduled one more time because additional coordination was necessary. A final ruling was delayed at least until mid-March 2016. After several additional delays, the National Highway Traffic Safety Administration issued its final ruling in February 2018. It requires hybrids and electric vehicles travelling at less than to emit warning sounds that pedestrians must be able to hear over background noises. The regulation requires full compliance in September 2020, but 50% of "quiet" vehicles must have the warning sounds by September 2019. Specific systems Enhanced Vehicle Acoustics Enhanced Vehicle Acoustics (EVA), a company based in Silicon Valley, California and founded by two Stanford students with the help of seed money from the National Federation of the Blind, developed an after market technology called "Vehicular Operations Sound Emitting Systems" (VOSES). The device makes hybrid electric vehicles sound more like conventional internal combustion engine cars when the vehicle goes into the silent electric mode (EV mode), but at a fraction of the sound level of most vehicles. At speeds higher than between to the sound system shuts off. The system also shuts off when the hybrid combustion engine is active. VOSES uses miniature, all-weather audio speakers that are placed on the hybrid's wheel wells and emit specific sounds based on the direction the car is moving in order to minimize noise pollution and to maximize acoustic information for pedestrians. If the car is moving forward, the sounds are only projected in the forward direction; and if the car is turning left or right, the sound changes on the left or right appropriately. The company argues that "chirps, beeps and alarms are more distracting than useful", and that the best sounds for alerting pedestrians are car-like, such as "the soft purr of an engine or the slow roll of tires across pavement." One of the EVA's external sound systems was designed specifically for the Toyota Prius. ECTunes ECTunes is developing a system that utilises directional sound equipment to emit noise when and where it is needed. 
According to the company, its technology sends audible signals only in the direction of travel, thus allowing the vehicle to be heard by those who may be in the car's path, without disturbing others with unwelcome noise. Insero Horsens, a Danish venture company, has provided a significant investment to help ECTunes fully develop its technology. The ECTunes system, and most others so far disclosed, use a control box with software, digital amplifiers and weather-friendly external speakers. ECTunes' system connects to the car and reads speed and acceleration, shutting down when the car reaches the cross-over speed set by existing regulation, as well as by regulation under development such as Quiet Road Transport Vehicles (QRTV), at which point the tires and wind are making noise of their own. The company was selling products to OEMs, mainly small series production, and to the aftermarket, and also had a new mass-production unit at the prototype stage. The company ceased operations in 2016. Fisker Automotive Fisker Automotive developed a sound-generator that was incorporated in its Fisker Karma luxury plug-in hybrid electric vehicle, released in 2011. According to the car manufacturer, the sound is designed to both alert pedestrians and enhance the driver experience, and the warning noise is emitted automatically. The Fisker Karma emits a sound through a pair of external speakers embedded in the bumper. According to a company spokesman the sound is a mix between a "Formula One car and a starship". The development process took between nine months and a year, and three sound companies sent in synthesised WAV file samples that were evaluated by Fisker employees and executives. The prospective sounds were studied in an audio chamber to allow engineers to evaluate the sounds without other noise interfering. After testing the candidate sounds in different locations relative to the vehicle, Fisker fine-tuned the final sound with its own equipment. The warning sound is activated when the car is travelling at less than . Ford The 2012 Ford Focus Electric was planned to include warning sounds for pedestrians. Ford Motor Company developed four alternative sounds, and in June 2011 involved electric car fans by asking them to pick their favorite from the four potential warning sounds through the Focus Electric Facebook page. However, Ford ultimately decided to hold off on including warning sounds unless federal legislation required it, and no such system was implemented on the production vehicle. General Motors General Motors' first commercially available plug-in hybrid electric vehicle, the Chevrolet Volt, introduced in December 2010, includes warning sounds for pedestrians. GM's system is called the Pedestrian-Friendly Alert System; it is manually activated by the driver, but future generations will probably include an active system. The automaker conducted a test with a group of visually challenged people at the Milford Proving Grounds in order to evaluate the audible warning systems on the Volt when a pedestrian is in the car's proximity. The system uses the car's horn to emit a series of warning chirps, like a low tone of a horn, enough to provide an alert but not to startle. According to GM engineers, the biggest challenge is "developing an active system that can distinguish a pedestrian from another vehicle"; otherwise, the sound will go off frequently, producing noise pollution instead. Hyundai Hyundai developed a warning noise called the Virtual Engine Sound System (VESS). 
The system, which was introduced in September 2010 on its test fleet of BlueOn electric hatchbacks, provides synthetic audio feedback mimicking the sound of an idling internal combustion engine. The 2011 Hyundai Sonata Hybrid is the first mass production car manufactured by Hyundai to include the warning sound system. In 2010 the car manufacturer decided to have a button on the Sonata Hybrid's instrument panel to turn the VESS on and off, but after the enactment of the Pedestrian Safety Enhancement Act of 2010, signed into law by President Obama in early 2011, and learning that the U.S. National Highway Traffic Safety Administration would not allow such switches, so that the noise device could not be turned off, Hyundai decided not to install the button, and the first Sonata Hybrids destined for the U.S. market had to be altered to remove the switch. Kia Kia Niro HEV models sold in the US and UK in 2020/21 have been highly criticised by owners for their loud and antisocial reversing alert sounds, which can be heard from many hundreds of feet away and yet are emitted from the front of the car. Lotus Engineering Lotus Engineering, a consultancy group of British sports car manufacturer Lotus Cars, partnered in 2009 with Harman Becker, a producer of audio systems, to develop and commercialise synthetic automotive audio systems. Lotus has worked on a number of hybrid and electric vehicles, and its engineers thought they would be safer if these vehicles made a noise while moving around the factory. Originally developed to cancel out intrusive noises inside a car, the noise cancelling system was adapted so that it could also simulate engine sounds that change with speed and use of the throttle, providing audible "feedback" to drivers of vehicles with a silent engine. At the same time, and through the addition of external speakers, the sound system allows pedestrians to hear the noise too, but optionally there can be a different sound within the car from the one that is emitted to the outside. Lotus used a Toyota Prius to demonstrate the device but did not indicate whether it intended to bring this technology to market. Lotus' synthetic sound system was incorporated in the Lotus Evora 414E Hybrid, a concept plug-in hybrid unveiled at the 2010 Geneva Motor Show. The system, called HALOsonic Internal and External Electronic Sound Synthesis, is a suite of noise solutions that uses patented technologies from Lotus and Harman International. The system generates engine sounds inside the vehicle through the car's audio system, and it also generates an external sound through speakers mounted at the front and rear to provide a warning and increase pedestrian safety. The system comes with four driver-selectable engine sounds, two of which have been designed to have the characteristics of conventional multi-cylinder V6 and V12 engines. Nissan Vehicle Sound for Pedestrians (VSP) is a Nissan-developed warning sound system for electric vehicles. The Nissan Leaf was the first car manufactured by Nissan to include VSP, and the electric car includes one sound for forward motion and another for reverse. The VSP was also used in the Nissan Fuga hybrid launched in 2011. The system makes a noise that is easy for pedestrians to hear, making them aware of the approaching vehicle, while the warning sounds do not distract the occupants inside the car. 
Nissan explained that during the development of the sound they studied behavioural research of the visually impaired and worked with cognitive and acoustic psychologists, including the National Federation of the Blind (NFB), the Detroit Institute of Ophthalmology, experts from the Vanderbilt University Medical Center and a Hollywood sound design studio. Nissan's Vehicle Sound for Pedestrians is a sine-wave sound system that sweeps from 2.5 kHz at the high end to a low of 600 Hz, a range that is easily audible across age groups. Depending on the speed and whether the Leaf is accelerating or decelerating, the sound system will make sweeping, high-low sounds. For example, when the Leaf is started the sound will be louder, and when the car is in reverse, the system will generate an intermittent sound. The sound system ceases operation when the Nissan Leaf reaches and engages again as the car slows to under . For the 2011 Leaf, the driver could turn off sounds temporarily through a switch inside the vehicle, but the system automatically reset to "On" at the next ignition cycle. The system is controlled through a computer and synthesizer in the dash panel, and the sound is delivered through a speaker in the front driver's side wheel well. Nissan said that there were six or seven finalist sounds, and that sound testing included driving cars emitting various sounds past testers standing on street corners, who indicated when they first heard the approaching car. Nissan removed the ability to disable the pedestrian alert between model year 2011 and 2012 in anticipation of the U.S. ruling to be issued by the National Highway Traffic Safety Administration. After Nissan's new sounds were publicised, the U.S. National Federation of the Blind issued a statement saying that "while it was pleased that the alert existed, it was unhappy that the driver was able to turn it off." The NFB approves the Nissan Leaf's forward motion sound, but it said the forward noise should also be used for reversing because the "intermittent sound is not as effective as a continuous sound" and that the car should emit warning sounds when it is idling, not only when it's moving slowly. Nevertheless, their main complaint is that they don't think the driver should be able to switch the sound off. The Leaf's electric warning sound had to be removed for cars delivered in the U.K., as the country's law mandates that any hazard warning sound must be capable of being disabled between 11:00 pm and 6:00 am, and the Leaf's audible warning system does not allow for such temporary deactivation. For the 2014 UK model of the car, the VSP system is enabled by default, though a button on the dash permits drivers to disable the system until the next time the car is switched on. Tesla Tesla, Inc. introduced a Pedestrian Warning System feature in September 2019 that emits warning sounds when the vehicle is traveling below 19mph/32km/h. In 2021 Tesla announced plans to retrofit the system onto select older Model 3 and Model Y vehicles from 2019. The feature is currently available on all Tesla models: Tesla Model S, Tesla Model 3, Tesla Model X, and Tesla Model Y. Toyota Toyota Motor Company teamed up with Fujitsu Ten to develop an automatic warning system for hybrids and electric vehicles to alert pedestrians when the car is propelled by its electric motor. The companies also studied the development of a system that would change the alarm's tune and volume with the assistance of an obstacle-detection radar. 
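As a purely illustrative aside (not a description of Nissan's or Toyota's actual implementations), systems like the VSP described above map vehicle speed to the pitch of a synthesized tone. A minimal sketch of such a mapping, with the 600 Hz and 2.5 kHz endpoints taken from the VSP description and everything else (the linear mapping, the speed range, the sample rate, and the direction of the pitch change) assumed for illustration:

    import math

    # Frequency endpoints from the VSP description above; all other values are assumptions.
    F_LOW_HZ, F_HIGH_HZ = 600.0, 2500.0
    SPEED_MIN_KMH, SPEED_MAX_KMH = 0.0, 30.0   # assumed range over which the alert operates
    SAMPLE_RATE = 16000

    def alert_frequency(speed_kmh):
        """Map vehicle speed to a tone frequency (assumed: higher speed -> higher pitch)."""
        s = max(SPEED_MIN_KMH, min(speed_kmh, SPEED_MAX_KMH))
        frac = (s - SPEED_MIN_KMH) / (SPEED_MAX_KMH - SPEED_MIN_KMH)
        return F_LOW_HZ + frac * (F_HIGH_HZ - F_LOW_HZ)

    def tone_samples(speed_kmh, duration_s=0.1):
        """Generate raw sine samples for a short burst at the speed-dependent pitch."""
        f = alert_frequency(speed_kmh)
        n = int(SAMPLE_RATE * duration_s)
        return [math.sin(2 * math.pi * f * t / SAMPLE_RATE) for t in range(n)]

    print(alert_frequency(10))   # tone frequency at 10 km/h under these assumptions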
In August 2010 Toyota began sales of an onboard device designed to automatically emit a synthesised sound of an electric motor when the Prius is operating as an electric vehicle at speeds up to approximately . The device will be available in Japan through authorised Toyota dealers and Toyota genuine parts & accessories distributors for retrofitting on the third-generation Prius at a price of (~) including the consumption tax. The alert sound rises and falls in pitch according to the vehicle's speed, thus helping indicate the vehicle's proximity and movement to nearby pedestrians. Toyota is planning to use other versions of the device for use in hybrid electric vehicles, plug-in hybrids, electric vehicles as well as fuel-cell hybrid vehicles planned for mass production. The device meets the 2010 government regulations issued for hybrid and other near-silent vehicles. Toyota's Vehicle Proximity Notification System (VPNS) was introduced in the United States in all 2012 model year Prius family vehicles, including the Prius v, Prius Plug-in Hybrid and the standard Prius. The system is being introduced to comply with the Pedestrian Safety Enhancement Act of 2010. Volkswagen Volkswagen offers a so-called e-Sound module on its electric and hybrid vehicles such as the e-Up, e-Golf and the GTE hybrid range. It provides a pedestrian warning sound up to 30 km/h. Other manufacturers Think Global, a manufactures of electric cars already in the market, is assessing this safety issue. Ford Motor Company is developing a system for emitting external sounds to future hybrids and electrics, including its Focus BEV, scheduled for 2011, and a next-generation hybrid and plug-in hybrid vehicle planned for 2012. Nancy Gioia, Ford's Director for Global Electrification commented that "car companies should consider standardising tones from future hybrids and electrics to avoid a cacophony of confusion on the streets." Criticism and controversy Several anti-noise and electric car advocates have opposed the introduction of artificial sounds as warning for pedestrians, as they argue that the proposed system will only increase noise pollution. They also opposed U.S. pending legislation that would require generated warning sounds with no off switch for the driver. Robert S. Wall Emerson of Western Michigan University has argued that several high-end gasoline-powered luxury cars are already quieter than hybrids, and according to his most recent studies, hybrid SUVs were noisier than many internal-combustion vehicles. He concludes that pedestrian safety is not a hybrid issue but rather "a quiet car issue." Market availability , most of the hybrids and plug-in electric and hybrids sold make warning noises using a speaker system. Tesla Motors, Volkswagen and BMW do not currently include warning sounds in their electric-drive vehicles, as all of them decided to add artificial sounds only when required by regulation. See also References External links Text of S. 841 - 111th: Pedestrian Safety Enhancement Act of 2010 - Signed Jan 2011 Test of U.S. Federal Motor Vehicle Safety Standard No. 141: Minimum Sound Requirements for Hybrid and Electric Vehicles, NHTSA (November 2016) Incidence of Pedestrian and Bicyclist Crashes by Hybrid Electric Passenger Vehicles, NHTSA (2009) Sample sounds for the proposed U.S. 
ruling, NHTSA (2013) Regulation (EU) No 540/2014 of the European Parliament and of the Council of 16 April 2014 on the sound level of motor vehicles and of replacement silencing systems Check info of any vehicle by VIN number Videos with demos of warning sounds Chevrolet Volt Pedestrian Friendly Alert System Lotus and Harman HALOsonic - Prius demonstrator Nissan Vehicle Sound for Pedestrians (VSP) Toyota Prius Approaching Vehicle Audible System Toyota Prius Vehicle Proximity Notification System (VPNS) Z-Audio marketplace for sound creation Electric vehicle technologies Warning systems Human–machine interaction Sound production
Electric vehicle warning sounds
[ "Physics", "Technology", "Engineering", "Biology" ]
6,029
[ "Machines", "Behavior", "Safety engineering", "Measuring instruments", "Physical systems", "Human–machine interaction", "Warning systems", "Design", "Human behavior" ]
38,726,329
https://en.wikipedia.org/wiki/Serape%20effect
The serape effect is a rotational trunk movement that increases the power output of the human body. It is trained in sports that involve rotation of the torso, such as boxing and discus throwing. The muscles involved in the serape effect are stretched and then snap back with increased force. It is named after a piece of clothing called the serape. History The term serape originates from a piece of clothing worn by people of Latin-American countries, specifically Mexico, also known by the same name. A serape is a brightly colored blanket which hangs around the shoulders and crosses diagonally across the anterior portion of the trunk. The general direction in which a serape is worn is similar to the direction of the pull of four muscles in the same area. The serape effect is this group of four muscles working together to produce an opposition of the rib cage and pelvis in the wind-up of a motion and, finally, to generate a large summation of internal forces from the snap-back. The serape effect is prevalent in ballistic motions like throwing, kicking, and swinging. Muscles involved The rhomboids, serratus anterior, external obliques, and internal obliques are involved in the serape effect. Sport significance The serape effect is important in throwing motions and other high-velocity motions that involve rotation of the torso (Northrip, Logan, McKinney, 1974). This includes ballistic motions such as throwing a discus or javelin. The transverse rotation of the pelvic girdle prior to a ballistic throwing motion is important for creating a higher velocity in the direction of the motion. Without this pelvic girdle rotation prior to the ballistic movement, the pelvis will recoil and the upper body will not reach as great a velocity during the ballistic motion, because of a lack of stretching of the muscles and a lack of energy built up to contribute to the movement. The rotational movement of this larger body segment, the trunk, enables a summation of internal forces that can be transferred from this large area to a smaller area such as the arm and the hand for throwing an object. The serape effect can also be applied to kicking by transferring these forces from the trunk and pelvis to the lower legs. For a throwing motion, when the throwing limb is diagonally abducted and laterally rotated, the rib cage and pelvis should be at their farthest distance apart, which allows for a maximal amount of stretch in the muscles involved in the serape effect. This maximum point of stretching lengthens the muscles so that, when the throw takes place, the muscles create a maximum amount of force as they shorten back to a resting length. "Muscles must be placed on their longest length in order to exert their greatest force." References Earp, Jacob E., & Kraemer, William J. (2010). Medicine ball training implications for rotational power sports. Strength and Conditioning Journal, 32(4), 20-25. Biomechanics Motor control
Serape effect
[ "Physics", "Biology" ]
651
[ "Biomechanics", "Behavior", "Mechanics", "Motor control" ]
38,728,858
https://en.wikipedia.org/wiki/Distance%20between%20two%20parallel%20lines
The distance between two parallel lines in the plane is the minimum distance between any two points, one lying on each line. Formula and proof Because the lines are parallel, the perpendicular distance between them is a constant, so it does not matter which point is chosen to measure the distance. Given the equations of two non-vertical parallel lines y = mx + b1 and y = mx + b2, the distance between the two lines is the distance between the two intersection points of these lines with the perpendicular line y = -x/m. This distance can be found by first solving the linear systems {y = mx + b1, y = -x/m} and {y = mx + b2, y = -x/m} to get the coordinates of the intersection points. The solutions to the linear systems are the points (-mb1/(m^2 + 1), b1/(m^2 + 1)) and (-mb2/(m^2 + 1), b2/(m^2 + 1)). The distance between the points is d = |b2 - b1|√(m^2 + 1)/(m^2 + 1), which reduces to d = |b2 - b1|/√(m^2 + 1). When the lines are given by ax + by + c1 = 0 and ax + by + c2 = 0, the distance between them can be expressed as d = |c2 - c1|/√(a^2 + b^2). See also Distance from a point to a line References Abstand In: Schülerduden – Mathematik II. Bibliographisches Institut & F. A. Brockhaus, 2004, pp. 17-19 (German) Hardt Krämer, Rolf Höwelmann, Ingo Klemisch: Analytische Geometrie und Lineare Algebra. Diesterweg, 1988, p. 298 (German) External links Florian Modler: Vektorprodukte, Abstandsaufgaben, Lagebeziehungen, Winkelberechnung – Wann welche Formel?, pp. 44-59 (German) A. J. Hobson: "JUST THE MATHS" - UNIT NUMBER 8.5 - VECTORS 5 (Vector equations of straight lines), pp. 8-9 Euclidean geometry Distance
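A quick numerical check of the formulas above (an illustrative Python snippet, not part of the original article):

    import math

    def distance_between_parallel_lines(a, b, c1, c2):
        """Distance between ax + by + c1 = 0 and ax + by + c2 = 0 (same a, b)."""
        return abs(c2 - c1) / math.hypot(a, b)

    # Example: y = 2x + 1 and y = 2x + 6, rewritten as 2x - y + 1 = 0 and 2x - y + 6 = 0.
    d = distance_between_parallel_lines(2, -1, 1, 6)
    print(d)                                   # 2.236..., i.e. 5 / sqrt(5)
    print(abs(6 - 1) / math.sqrt(2**2 + 1))    # slope-intercept formula gives the same value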
Distance between two parallel lines
[ "Physics", "Mathematics" ]
320
[ "Physical quantities", "Distance", "Quantity", "Size", "Space", "Spacetime", "Wikipedia categories named after physical quantities" ]
38,731,290
https://en.wikipedia.org/wiki/Alternated%20octagonal%20tiling
In geometry, the tritetragonal tiling or alternated octagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbols of {(4,3,3)} or h{8,3}. Geometry Although a sequence of edges seem to represent straight lines (projected into curves), careful attention will show they are not straight, as can be seen by looking at it from different projective centers. Dual tiling In art Circle Limit III is a woodcut made in 1959 by Dutch artist M. C. Escher, in which "strings of fish shoot up like rockets from infinitely far away" and then "fall back again whence they came". White curves within the figure, through the middle of each line of fish, divide the plane into squares and triangles in the pattern of the tritetragonal tiling. However, in the tritetragonal tiling, the corresponding curves are chains of hyperbolic line segments, with a slight angle at each vertex, while in Escher's woodcut they appear to be smooth hypercycles. Related polyhedra and tiling See also Circle Limit III Square tiling Uniform tilings in hyperbolic plane List of regular polytopes References John Horton Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Douglas Dunham Department of Computer Science University of Minnesota, Duluth Examples Based on Circle Limits III and IV, 2006:More “Circle Limit III” Patterns, 2007:A “Circle Limit III” Calculation, 2008:A “Circle Limit III” Backbone Arc Formula Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Uniform tilings Octagonal tilings
Alternated octagonal tiling
[ "Physics" ]
397
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Uniform tilings", "Symmetry" ]
38,731,303
https://en.wikipedia.org/wiki/Boom%20method
Boom method (aka Boom nucleic acid extraction method) is a solid phase extraction method for isolating nucleic acid from a biological sample. This method is characterized by "absorbing the nucleic acids (NA) to the silica beads". Overview The Boom method (Boom nucleic acid extraction method) is a solid phase extraction method for isolating nucleic acids (NA) from biological samples. Silica beads are a key element to this method, which are capable of binding the nucleic acids in the presence of a chaotropic substance according to the chaotropic effect. This method is one of the most widespread methods for isolating nucleic acids from biological samples and is known as a simple, rapid, and reliable method for the small-scale purification of nucleic acid from biological sample. This method is said to have been developed and invented by Willem R. Boom et al. around 1990. While the chaotropic effect was previously known and reported by other scientists, Boom et al. contributed an optimization of the method to complex starting materials, such as body fluids and other biological starting materials, and provided a short procedure according to the Boom et al. US5234809. After the Boom et al. patent was filed, similar applications were also filed by other parties. In a narrow sense, the word "silica" meant SiO2 crystals; however, other forms of silica particles are available. In particular, amorphous silicon oxide and glass powder, alkylsilica, aluminum silicate (zeolite), or, activated silica with -NH2, are all suitable as nucleic acid binding solid phase material according to this method. Today, the concepts of the Boom method, characterized by utilizing magnetic silica particles, are widely used. With this method, magnetic silica beads are captured by a magnetic bead collector, such as the Tajima pipette, Pick pen(R), Quad Pole collector, and so on. Brief procedure The fundamental process for isolating nucleic acid from starting material of Boom method consists of the following 4 steps (See Fig. 1). (a) Lysing and/or Homogenizing the starting material. Lysate of starting material is obtained by addition of a detergent in the presence of protein degrading enzymes. (b) Mixing chaotropic substance and silica beads into the starting material. Lysate of starting material of (a) is mixed with silica beads and sufficiently large amounts of chaotropic substance. According to the chaotropic effect, released nucleic acids will be bound to the silica beads almost instantaneously. In this way, silica-nucleic acid complexes are formed. The reasons why nucleic acids and silica form bonds will be described in the following section (Basic principles). (c) Washing silica beads Silica beads of (b) are washed several times to remove contaminants. Process of washing of the silica-nucleic acid complexes (silica beads) typically consists of following steps, Collecting silica beads from the liquid by for example Tajima pipette (see Fig. 1,2) or Pellet-down (by rapid sedimentation and disposal of the supernatant ) Mixing silica beads into the chaotropic salt-containing washing buffer using, e. g., a vortex mixer. Collecting redispersed silica beads from above mentioned washing buffer again. Further washing successively with an alcohol-water solution and then with acetone. Beads will preferably be dried. (d) Separating the bonded nucleic acids Pure nucleic acids are eluted into buffer by decreasing the concentration of chaotropic substance. 
Nucleic acids present in the washed (and preferably dried) silica-nucleic acid complexes is eluted into chosen elution buffer such as TE buffer, aqua bidest, and so on. The selection of the elution buffer is co-determined by the contemplated use of the isolated nucleic acid. In this way, pure nucleic acids are isolated from the starting material. By altering the experimental conditions, especially the composition of reagents (chaotropic substance, wash buffer, etc) more specific isolation can be achieved. For example, some compositions of reagents are suitable for obtaining long double-stranded DNA or short single-stranded RNA. A wide variety of starting biological material are available, including whole blood, blood serum, buffy coat, urine, feces, cerebrospinal fluid, sperm, saliva, tissues, cell cultures, food products, or vaccines. Optimization of procedure is required to maximize yield of nucleic acids from different starting materials or different types of nucleic acids (eg long/short, DNA/RNA, linear/circular, double-stranded/single-stranded). Today, the assay characterized by using silica coated magnetic beads seems to be the most common. Therefore, in this article, "silica beads" are intended to mean silica coated magnetic beads unless stated otherwise. Magnetic beads Various magnetic particles (magnetic carrier) coated with silica are often used as silica coated beads Maghemite particle (γ-Fe2O3) and magnetite particle (Fe3O4), as well as an intermediate iron oxide particle thereof, are most suitable as magnetic carriers. Generally, the quality of the magnetic beads is characterized by following parameters: saturation magnetization (~10-80 A m2/kg (emu/g):Superparamagnetic), coercive force (~ 0.80-15.92 kA/m), size diameter (~ 0.1-0.5 μm), mass of each particle (~ 2.7 ng), ease of collection (to be mentioned later), capture ability (to be mentioned later), Sedimentation rate (~4% in 30 min), Area ratio (> 100 m2/g), Effective density (~ 2.5 g/cm3), and Particle counts (~ 1 x 108 particles/mg). Here, "ease of collection" is defined and compared by"magnetic beads are collected by not less than X wt % (~90wt %) within T seconds(~ 3 seconds) in the presence of a magnetic field of Y gauss (~3000 gauss) when it is dispersed in an amount of at least Z mg (~20 mg) in W mL (~1 mL) of an aqueous solution of a sample containing a biological substance" while capture ability are defined and compared by"binding with at least A μg (~0.4μg) of the biological substance per B mg (~1 mg) thereof when it is dispersed in an amount of at least Z mg (~20 mg) in W mL (~1 mL) of an aqueous solution of a sample containing a biological substance". Basic principles The principle of this method is based on the nucleic acid-binding properties of silica particles or diatoms in the presence of a chaotropic agent, which follows the chaotropic effect. Put simply, the chaotropic effect is where a chaotropic anion in an aqueous solution disturbs the structure of water, and weakens the hydrophobic interaction. In a broad sense, "chaotropic agent" stands for any substance capable of altering the secondary, tertiary and/or quaternary structure of proteins and nucleic acids, but leaving at least the primary structure intact. An aqueous solution of chaotropic salt is a chaotropic agent. Chaotropic anions increase the entropy of the system by interfering with intermolecular interactions mediated by non-covalent forces such as hydrogen bonds, van der Waals forces, and hydrophobic effects. 
Examples are aqueous solutions of the thiocyanate, iodide, perchlorate, nitrate, bromide, chloride, acetate, fluoride, and sulfate ions, or combinations thereof. According to the original Boom method, the chaotropic guanidinium salt employed is preferably guanidinium thiocyanate (GuSCN). In the presence of the chaotropic agent, the hydration water of the nucleic acid is stripped from the phosphodiester bonds of the phosphate groups of the backbone. Thus, the phosphate groups become "exposed" and a hydrophobic interaction between the silica and the exposed phosphate groups is formed. Automated instruments Tajima pipette Nucleic acid extraction apparatus based on the Tajima pipette (see Fig. 2) are among the most widespread instruments for performing the Boom method. The Tajima pipette was invented by Hideji Tajima, founder and president of Precision System Sciences (PSS) Inc., a Japanese manufacturer of precision and measuring instruments. The Tajima pipette is a core technology of PSS Inc., which provides OEM products based on it (for example MagNA Pure(R)) for several leading reagent manufacturers such as Hoffmann-La Roche and Life Technologies. After the Tajima et al. patent was filed, similar patent applications were also filed by other parties. The Tajima pipette implements a magnetic particle control method that can separate magnetic particles bound to a target substance from the liquid by magnetic force and then resuspend them in a liquid. Configurations The pipette itself is an apparatus comprising the following members (see Fig. 2): a pipette tip configured to access and aspirate/discharge liquid from/into each of the vessels, having a front end portion, a reservoir portion, a liquid passage connecting the front end portion and the reservoir portion, a separation region in the liquid passage subjected to the action of a magnetic field, and a mechanism for applying negative or positive pressure to the interior of the pipette to draw a magnetic-particle suspension into, or discharge it from, the pipette; a magnetic field source arranged on the outside of and adjacent to the pipette tip; and a magnetic field source driving device for moving the magnetic field source so as to apply or remove a magnetic field to or from the separation region from outside the liquid passage. When the magnet is brought close to the pipette tip, a magnetic field is applied; when it is retracted away from the pipette tip, the magnetic field is removed. A nucleic acid extraction apparatus incorporating Tajima pipettes typically consists of: the above-mentioned Tajima pipette, a plurality of tubes, a tube holder for those tubes, a transport mechanism to move the Tajima pipette among the tubes (which are supported by the tube holder), and a control device for controlling these components. Motions (a) Capturing the magnetic beads. During the suction process, a magnetic field is applied to the separation region of the pipette tip from outside by the magnet arranged next to the tip; as liquid containing magnetic beads passes through the separation region, the magnetic particles are attracted to and arrested on the inner wall of the separation region of the pipette tip. 
Subsequently, when the solution is discharged while the magnetic field is maintained, only the magnetic particles are left inside the pipette tip. In this way the magnetic particles are separated from the liquid. According to Tajima, the preferable suction height of the mixture liquid is such that, when all the mixture liquid has been drawn up, the bottom level of the liquid is higher than the lower end of the separation region of the liquid passage (that is, higher than the lower end of the magnet), so as to ensure that the aspirated magnetic particles can be completely arrested. At this point, because the magnetic particles are wet, they stay attached to the inner surface of the separation region of the liquid passage of the pipette tip. If the pipette tip P is moved or transported, the magnetic particles will not come off easily. (b) Re-suspension of the captured magnetic beads. After the magnetic particles have been arrested in manner (a) and the mixture liquid, now free of magnetic particles, has been discharged into the liquid accommodating portion (vessel) and drained out, only the magnetic particles remain in the pipette tip and the re-suspension process can be carried out. In detail, re-suspension of the captured magnetic beads consists of the following steps, starting from the state in which the magnetic material has been captured as described above. Aspirate liquid, such as washing buffer, into the tip. Stop applying the magnetic field; once the magnetic field is removed, the magnetic particles become suspended in the liquid. Discharge the liquid (such as washing buffer) from the pipette tip into the vessel while the magnetic force generated by the magnet body remains cut off. Operations An example of the operation of a nucleic acid extraction apparatus incorporating the Tajima pipette is shown in Fig. 1. Other methods Examples of other types of magnetic particle capturing devices are as follows. Pen type capture Tube type capture See also Chaotropic agent DNA extraction DNA separation by silica adsorption Ethanol precipitation Minicolumn purification Nucleic acid methods Phenol-chloroform extraction RNA extraction Notes References Molecular biology Laboratory techniques DNA Polymerase chain reaction
Boom method
[ "Chemistry", "Biology" ]
2,761
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "nan", "Molecular biology", "Biochemistry" ]
38,736,538
https://en.wikipedia.org/wiki/Systems%20pharmacology
Systems pharmacology is the application of systems biology principles to the field of pharmacology. It seeks to understand how drugs affect the human body as a single complex biological system. Instead of considering the effect of a drug to be the result of one specific drug-protein interaction, systems pharmacology considers the effect of a drug to be the outcome of the network of interactions a drug may have. In 1992, an article on systems medicine and pharmacology was published in China. Networks of interaction may include chemical-protein, protein–protein, genetic, signalling and physiological (at cellular, tissue, organ and whole body levels). Systems pharmacology uses bioinformatics and statistics techniques to integrate and interpret these networks. Systems pharmacology can be applied to drug safety studies as a complement to pharmacoepidemiology. See also Quantitative Systems Pharmacology Drug interaction PhD programs PharMetrX: Pharmacometrics & Computational Disease Modelling (annual call for applications, July - Sept 15th) References External links Quantitative Systems Pharmacology white paper Systems Pharmacology at Harvard What is (Quantitative) Systems Pharmacology? by John Russell Pharmacology
Systems pharmacology
[ "Chemistry" ]
252
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry", "Medicinal chemistry stubs" ]
24,608,545
https://en.wikipedia.org/wiki/Plateau%20principle
The plateau principle is a mathematical model or scientific law originally developed to explain the time course of drug action (pharmacokinetics). The principle has wide applicability in pharmacology, physiology, nutrition, biochemistry, and system dynamics. It applies whenever a drug or nutrient is infused or ingested at a relatively constant rate and when a constant fraction is eliminated during each time interval. Under these conditions, any change in the rate of infusion leads to an exponential increase or decrease until a new level is achieved. This behavior is also called an approach to steady state because rather than causing an indefinite increase or decrease, a natural balance is achieved when the rate of infusion or production is balanced by the rate of loss. An especially important use of the plateau principle is to study the renewal of tissue constituents in the human and animal body. In adults, daily synthesis of tissue constituents is nearly constant, and most constituents are removed with a first-order reaction rate. Applicability of the plateau principle was recognized during radioactive tracer studies of protein turnover in the 1940s by Rudolph Schoenheimer and David Rittenberg. Unlike the case with drugs, the initial amount of tissue or tissue protein is not zero because daily synthesis offsets daily elimination. In this case, the model is also said to approach a steady state with exponential or logarithmic kinetics. Constituents that change in this manner are said to have a biological half-life. A practical application of the plateau principle is that most people have experienced "plateauing" during regimens for weight management or training for sports. After a few weeks of progress, one seems unable to continue gaining in ability or losing weight. This outcome results from the same underlying quantitative model. This entry will describe the popular concepts as well as development of the plateau principle as a scientific, mathematical model. In the sciences, the broadest application of the plateau principle is creating realistic time signatures for change in kinetic models (see Mathematical model). One example of this principle is the long time required to effectively change human body composition. Theoretical studies have shown that many months of consistent physical training and food restriction are needed to bring about permanent weight stability in people who were previously overweight. The plateau principle in pharmacokinetics Most drugs are eliminated from the blood plasma with first-order kinetics. For this reason, when a drug is introduced into the body at a constant rate by intravenous therapy, it approaches a new steady concentration in the blood at a rate defined by its half-life. Similarly, when the intravenous infusion is ended, the drug concentration decreases exponentially and reaches an undetectable level after 5–6 half-lives have passed. If the same drug is administered as a bolus (medicine) with a single injection, peak concentration is achieved almost immediately and then the concentration declines exponentially. Most drugs are taken by mouth. In this case, the assumption of constant infusion is only approximated as doses are repeated over the course of several days. The plateau principle still applies but more complex models are required to account for the route of administration. 
Equations for the approach to steady state Derivations of the equations that describe the time course of change for a system with zero-order input and first-order elimination are presented in the articles Exponential decay and Biological half-life, and in the scientific literature. In the simplest case of first-order elimination with no ongoing input, Ct = C0 × e^(−ke·t), where Ct is the concentration after time t, C0 is the initial concentration (t = 0), and ke is the elimination rate constant. The relationship between the elimination rate constant and half-life is given by the following equation: t1/2 = ln 2 / ke. Because ln 2 equals 0.693, the half-life is readily calculated from the elimination rate constant. Half-life has units of time, and the elimination rate constant has units of 1/time, e.g., per hour or per day. An equation can be used to forecast the concentration of a compound at any future time when the fractional degradation rate and steady state concentration are known: Ct = C0 + (Css − C0) × (1 − e^(−ke·t)), where Css is the concentration after the steady state has been achieved. The exponential function in parentheses corresponds to the fraction of total change that has been achieved as time passes, and the difference between Css and C0 equals the total amount of change. Finally, at steady state, the concentration is expected to equal the rate of synthesis, production or infusion divided by the first-order elimination constant: Css = ks/ke, where ks is the rate of synthesis or infusion. Although these equations were derived to assist with predicting the time course of drug action, the same equations can be used for any substance or quantity that is being produced at a measurable rate and degraded with first-order kinetics. Because the equation applies in many instances of mass balance, it has very broad applicability in addition to pharmacokinetics. The most important inference derived from the steady state equation and the equation for fractional change over time is that the elimination rate constant (ke), or the sum of rate constants that apply in a model, determines the time course for change in mass when a system is perturbed (either by changing the rate of inflow or production, or by changing the elimination rate(s)). Estimating values for kinetic rate parameters When experimental data are available, the normal procedure for estimating rate parameters such as ke and Css is to minimize the sum of squares of differences between observed data and values predicted from initial estimates of the rate constant and steady state value. This can be done using any software package that contains a curve fitting routine. An example of this methodology implemented with spreadsheet software has been reported. The same article reports a method that requires only 3 equally spaced data points to obtain estimates for the kinetic parameters. Spreadsheets that compare these methods are available. The plateau principle in nutrition Dr. Wilbur O. Atwater, who developed the first database of food composition in the United States, recognized that the response to excessive or insufficient nutrient intake included an adjustment in efficiency that would result in a plateau. He observed: "It has been found by numerous experiments that when the nutrients are fed in large excess, the body may continue for a time to store away part of the extra material, but after it has accumulated a certain amount, it refuses to take on more, and the daily consumption equals the supply even when this involves great waste." In general, no essential nutrient is produced in the body. 
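The parameter-estimation procedure described above is straightforward to reproduce with any nonlinear least-squares routine. The following Python sketch is illustrative only: the data values, parameter names (`c0`, `css`, `ke`) and starting guesses are hypothetical, and SciPy's `curve_fit` is used merely as one example of a curve-fitting tool, not as the method reported in the cited spreadsheet article.

```python
import numpy as np
from scipy.optimize import curve_fit

def plateau_model(t, c0, css, ke):
    """Approach to steady state: C(t) = C0 + (Css - C0) * (1 - exp(-ke * t))."""
    return c0 + (css - c0) * (1.0 - np.exp(-ke * t))

# Hypothetical concentration measurements at increasing times (arbitrary units).
t_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])
c_obs = np.array([0.0, 1.8, 3.1, 4.0, 4.7, 5.6, 6.1, 6.5])

# Least-squares fit of C0, Css and ke; p0 holds rough initial guesses.
(c0_fit, css_fit, ke_fit), _ = curve_fit(plateau_model, t_obs, c_obs, p0=(0.0, 6.0, 0.3))

half_life = np.log(2) / ke_fit  # t1/2 = ln(2) / ke
print(f"Css ~ {css_fit:.2f}, ke ~ {ke_fit:.3f} per unit time, t1/2 ~ {half_life:.2f}")
```

Once fitted, ke determines the time course of any subsequent perturbation, as noted in the text above.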
Nutrient kinetics therefore follow the plateau principle with the distinction that most are ingested by mouth and the body must contain an amount adequate for health. The plateau principle is important in determining how much time is needed to produce a deficiency when intake is insufficient. Because of this, pharmacokinetic considerations should be part of the information needed to set a dietary reference intake for essential nutrients. Vitamin C The blood plasma concentration of vitamin C or ascorbic acid as a function of dose attains a plateau with a half-life of about 2 weeks. Bioavailability of vitamin C is highest at dosages below 200 mg per day. Above 500 mg, nearly all of the excess vitamin C is excreted through urine. Vitamin D Vitamin D metabolism is complex because the provitamin can be formed in the skin by ultraviolet irradiation or obtained from the diet. Once hydroxylated, the vitamin has a half-life of about 2 months. Various studies have suggested that current intakes are inadequate for optimum bone health, and much current research is aimed at determining recommendations for obtaining adequate circulating vitamin D3 and calcium while also minimizing potential toxicity. Phytochemicals in foods and beverages Many healthful qualities of foods and beverages may be related to the content of phytochemicals (see List of phytochemicals in food). Prime examples are flavonoids found in green tea, berries, cocoa, and spices as well as in the skins and seeds of apples, onions, and grapes. Investigations into the healthful benefits of phytochemicals follow exactly the same principles of pharmacokinetics that are required to study drug therapy. The initial concentration of any non-nutritive phytochemical in the blood plasma is zero unless a person has recently ingested a food or beverage. For example, as increasing amounts of green tea extract are consumed, a graded increase in plasma catechin can be measured, and the major compound is eliminated with a half-life of about 5 hours. Other considerations that must be evaluated include whether the ingested compound interacts favorably or unfavorably with other nutrients or drugs, and whether there is evidence for a threshold or toxicity at higher levels of intake. Transitions in body composition Plateaus during dieting and weight loss It is especially common for people who are trying to lose weight to experience plateaus after several weeks of successful weight reduction. The plateau principle suggests that this leveling off is a sign of success. Basically, as one loses weight, less food energy is required to maintain the resting metabolic rate, which makes the initial regimen less effective. The idea of weight plateaus has been discussed for subjects who are participating in a calorie restriction experiment. Food energy is expended largely through work done against gravity (see Joule), so weight reduction lessens the effectiveness of a given workout. In addition, a trained person has greater skill and therefore greater efficiency during a workout. Remedies include increasing the workout intensity or length and reducing portion sizes of meals more than may have been done initially. The fact that weight loss and dieting reduce the metabolic rate is supported by research. In one study, heat production was reduced 30% in obese men after a weight loss program, and this led to resistance to further loss of body weight. 
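A short worked example using the first-order relations given in the equations section above (the two-week half-life is the one quoted for vitamin C in the text; the 95% target is an arbitrary illustration): the time needed to complete a fraction f of the change toward a new plateau is

```latex
t_f = \frac{-\ln(1 - f)}{k_e} = \frac{-\ln(1 - f)}{\ln 2}\, t_{1/2},
\qquad
t_{0.95} = \frac{-\ln(0.05)}{\ln 2}\, t_{1/2} \approx 4.3\, t_{1/2} \approx 8.6\ \text{weeks when } t_{1/2} = 2\ \text{weeks}.
```

The same arithmetic applies to any constituent obeying the plateau principle, whatever its half-life.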
Whether body mass increases or decreases, adjustments in the thermic effect of food, resting energy expenditure, and non-resting energy expenditure all oppose further change. Plateaus during strength training Any athlete who has trained for a sport has probably experienced plateaus, and this has given rise to various strategies to continue improving. Voluntary skeletal muscle is in balance between the amount of muscle synthesized or renewed each day and the amount that is degraded. Muscle fibers respond to repetition and load, and increased training causes the quantity of exercised muscle fiber to increase exponentially (simply meaning that the greatest gains are seen during the first weeks of training). Successful training produces hypertrophy of muscle fibers as an adaptation to the training regimen. In order to make further gains, greater workout intensity is required with heavier loads and more repetitions, although improvement in skill can contribute to gains in ability. When a bodily constituent adjusts exponentially over time, it usually attains a new stable level as a result of the plateau principle. The new level may be higher than the initial level (hypertrophy) in the case of strength training or lower in the case of dieting or disuse atrophy. This adjustment contributes to homeostasis but does not require feedback regulation. The gradual, asymptotic approach to a new balance between synthesis and degradation produces a stable level. Because of this, the plateau principle is sometimes called the stability principle. Mathematically, the result is linear dynamics despite the fact that most biological processes are non-linear (see Nonlinear system) if considered over a very broad range of inputs. Changes in body composition when food is restricted Data from the Minnesota Starvation Experiment by Ancel Keys and others demonstrate that during food restriction, total body mass, fat mass and lean body mass follow an exponential approach to a new steady state. The observation that body mass changes exponentially during partial or complete starvation seems to be a general feature of adaptation to energy restriction. The plateau principle in biochemistry Each cell produces thousands of different kinds of proteins and enzymes. One of the key methods of cellular regulation is to change the rate of transcription of messenger RNA, which gives rise to a change in the rate of synthesis for the protein that the messenger RNA encodes. The plateau principle explains why the concentration of different enzymes increases at unique rates in response to a single hormone. Because each enzyme is degraded at a unique rate (each has a different half-life), the rate of change differs even when the same stimulus is applied. This principle has been demonstrated for the response to cortisone, a catabolic hormone, of liver enzymes that degrade amino acids. The method of approach to steady state has also been used to analyze the change in messenger RNA levels when synthesis or degradation changes, and a model has also been reported in which the plateau principle is used to connect the change in messenger RNA synthesis to the expected change in protein synthesis and concentration as a function of time. 
The plateau principle in physiology Excessive gain in body weight contributes to the metabolic syndrome, which may include elevated fasting blood sugar (or glucose), resistance to the action of insulin, elevated low-density lipoprotein (LDL cholesterol) or decreased high-density lipoprotein (HDL cholesterol), and elevated blood pressure. Obesity was designated as a disease in 2013 by the American Medical Association. It is defined as a chronic, relapsing, multi-factorial, neurobehavioral disease, wherein an increase in body fat promotes adipose tissue dysfunction and abnormal fat mass physical forces, resulting in adverse metabolic, biomechanical, and psychosocial health consequences. Because body mass, fat mass and fat free mass all change exponentially during weight reduction, it is a reasonable hypothesis to expect that symptoms of metabolic syndrome will also adjust exponentially towards normal values. The plateau principle in compartmental modeling Scientists have evaluated turnover of bodily constituents using radioactive tracers and stable isotope tracers. If given orally, the tracers are absorbed and move into the blood plasma, and are then distributed throughout the bodily tissues. In such studies, a multi-compartment model is required to analyze turnover by isotopic labeling. The isotopic marker is called a tracer and the material being analyzed is the tracee. In studies with humans, blood plasma is the only tissue that can be easily sampled. A common procedure is to analyze the dynamics by assuming that changes can be attributed to a sum of exponentials. A single mathematical compartment is usually assumed to follow first-order kinetics in accord with the plateau principle. There are many examples of this kind of analysis in nutrition, for example, in the study of metabolism of zinc, and carotenoids. The commonest assumption in compartmental modeling is that material in a homogeneous compartment behaves exponentially. However, this assumption is sometimes modified to include a saturable response that follows Michaelis–Menten kinetics or a related model called a Hill equation. When the material in question is present at a concentration near the KM, it often behaves with pseudo first-order kinetics (see Rate equation) and the plateau principle applies despite the fact that the model is non-linear. The plateau principle in system dynamics Compartmental modeling in biomedical sciences primarily originated from the need to study metabolism by using tracers. In contrast, System dynamics originated as a simple method of developing mathematical models by Jay Wright Forrester and colleagues. System dynamics represents a compartment or pool as a stock and movement among compartments as flows. In general, the rate of flow depends on the amount of material in the stock to which it is connected. It is common to represent this dependence as a constant proportion (or first-order) using a connector element in the model. System dynamics is one application of the field of control theory. In the biomedical field, one of the strongest advocates for computer-based analysis of physiological problems was Dr. Arthur Guyton. For example, system dynamics has been used to analyze the problem of body weight regulation. Similar methods have been used to study the spread of epidemics (see Compartmental models in epidemiology). 
Software that solves systems of equations required for compartmental modeling and system dynamics makes use of finite difference methods to represent a set of ordinary differential equations. An expert appraisal of the different types of dynamic behavior that can be developed by application of the plateau principle to the field of system dynamics has been published. References External links Plateau Principle book with Excel tutorials Plateau Principle for weight loss and strength training, with Excel tutorials Mathematical and theoretical biology Pharmacokinetics
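As an illustration of the finite-difference representation just described, here is a minimal Euler-integration sketch of a single stock with constant inflow and first-order outflow. The parameter values, time step and variable names are arbitrary choices made for this example, not taken from any particular system dynamics package.

```python
# One stock, constant inflow ks, first-order outflow ke * C, integrated by Euler steps.
ks = 10.0      # constant production / infusion rate (amount per day)
ke = 0.1       # first-order elimination rate constant (per day)
dt = 0.01      # time step (days)
c = 0.0        # initial amount in the stock

history = []
for step in range(int(100 / dt)):          # simulate 100 days
    inflow = ks * dt
    outflow = ke * c * dt
    c += inflow - outflow                  # stock update; plateaus at ks / ke = 100
    history.append(c)

print(f"Value after 100 days: {history[-1]:.1f} (analytic plateau ks/ke = {ks/ke:.1f})")
```

Because the outflow is proportional to the stock, the simulated trajectory reproduces the exponential approach to a plateau described throughout the article.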
Plateau principle
[ "Chemistry", "Mathematics" ]
3,302
[ "Pharmacology", "Mathematical and theoretical biology", "Pharmacokinetics", "Applied mathematics" ]
24,616,522
https://en.wikipedia.org/wiki/Polonium%20hydride
Polonium hydride (also known as polonium dihydride, hydrogen polonide, or polane) is a chemical compound with the formula PoH2. It is a liquid at room temperature, the second hydrogen chalcogenide with this property after water. It is very unstable chemically and tends to decompose into elemental polonium and hydrogen. It is a volatile and very labile compound, from which many polonides can be derived. Additionally, it is radioactive. Preparation Polonium hydride cannot be produced by direct reaction from the elements upon heating. Other unsuccessful routes to synthesis include the reaction of polonium tetrachloride (PoCl4) with lithium aluminium hydride (LiAlH4), which only produces elemental polonium, and the reaction of hydrochloric acid with magnesium polonide (MgPo). The fact that these synthesis routes do not work may be caused by the radiolysis of polonium hydride upon formation. Trace quantities of polonium hydride may be prepared by reacting hydrochloric acid with polonium-plated magnesium foil. In addition, the diffusion of trace quantities of polonium in palladium or platinum that is saturated with hydrogen (see palladium hydride) may be due to the formation and migration of polonium hydride. Properties Polonium hydride is a more covalent compound than most metal hydrides because polonium straddles the border between metals and metalloids and has some nonmetallic properties. It is intermediate between a hydrogen halide like hydrogen chloride and a metal hydride like stannane. It should have properties similar to that of hydrogen selenide and hydrogen telluride, other borderline hydrides. It is expected to be an endothermic compound, like the lighter hydrogen telluride and hydrogen selenide, and therefore would decompose into its constituent elements, releasing heat in the process. The amount of heat given off in the decomposition of polonium hydride is over 100 kJ/mol, the largest of all the hydrogen chalcogenides. It is predicted that, like the other hydrogen chalcogenides, polonium may form two types of salts: polonide (containing the Po2− anion) and one from polonium hydride (containing –PoH, which would be the polonium analogue of thiol, selenol and tellurol). However, no salts from polonium hydride are known. An example of a polonide is lead polonide (PbPo), which occurs naturally as lead is formed in the alpha decay of polonium. Polonium hydride is difficult to work with due to the extreme radioactivity of polonium and its compounds and has only been prepared in very dilute tracer quantities. As a result, its physical properties are not definitely known. It is also unknown if polonium hydride forms an acidic solution in water like its lighter homologues, or if it behaves more like a metal hydride (see also hydrogen astatide). References Polonium compounds Hydrogen compounds Metal hydrides Triatomic_molecules
Polonium hydride
[ "Physics", "Chemistry" ]
670
[ "Inorganic compounds", "Molecules", "Reducing agents", "Triatomic molecules", "Metal hydrides", "Matter" ]
5,307,853
https://en.wikipedia.org/wiki/QMR%20effect
Quadratic magnetic rotation (also known as QMR or the QMR effect) is a type of magneto-optic effect discovered in the mid 1980s by a team of Ukrainian physicists. QMR, like the Faraday effect, establishes a relationship between the magnetic field and the rotation of the plane of polarization of linearly polarized light. In contrast to the Faraday effect, QMR originates from a quadratic proportionality between the angle of rotation of the plane of polarization and the strength of the magnetic field. QMR is mostly observed in the transverse geometry, when the vector of the magnetic field strength is perpendicular to the direction of light propagation. The first evidence of the QMR effect was obtained in the antiferromagnetic crystal of cobalt fluoride in 1985. Considerations of the symmetry of the medium, the light and the axial vector of the magnetic field forbid QMR in non-magnetic or magnetically disordered media. Onsager's reciprocal relations, generalized for magnetically ordered media, eliminate the symmetry restrictions on QMR in media that have lost the center of anti-inversion as a symmetry operation upon ordering of their magnetic subsystem. Some crystal symmetry groups, despite lacking the center of anti-inversion, still do not exhibit QMR because of the action of other symmetry operators. These are the eleven groups without the center of anti-inversion: 432, 43'm, m3m, 422, 4mm, 4'2m, 4/mmm, 622, 6mm, 6'm2 and 6/mmm. Accordingly, the remaining crystal symmetry groups in which QMR can be observed comprise 27 antiferromagnetic and 31 pyromagnetic crystal classes. QMR is described by a fourth-order c-tensor which is antisymmetric with respect to its first two indices. See also Faraday effect Magneto-optic effect References Magneto-optic effects
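As a schematic contrast of the two field dependences described above (the proportionalities follow directly from the text; the Verdet-type linear law for the Faraday effect is standard, while the QMR expression is written only as a proportionality because the source does not give its coefficient):

```latex
\theta_{\mathrm{Faraday}} = \mathcal{V}\, B\, d \quad (\text{linear in the field}),
\qquad
\theta_{\mathrm{QMR}} \propto H^{2} \quad (\text{quadratic in the field}),
```

where d is the optical path length through the medium and 𝒱 is the Verdet constant of the material.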
QMR effect
[ "Physics", "Chemistry", "Materials_science" ]
406
[ "Optical phenomena", "Physical phenomena", "Electric and magnetic fields in matter", "Magneto-optic effects" ]
5,309,182
https://en.wikipedia.org/wiki/Strontium%20bromate
Strontium bromate is a chemical rarely encountered in the laboratory or in industry. It is, however, mentioned in the book Uncle Tungsten: Memories of a Chemical Boyhood by Oliver Sacks, where it is said that this salt glows when crystallized from a saturated aqueous solution. Chemically, this salt is soluble in water and is a moderately strong oxidizing agent. Strontium bromate is toxic if ingested; it irritates the skin on contact and the respiratory tract if inhaled. Its chemical formula is Sr(BrO3)2. References Strontium compounds Bromates Inorganic compounds Oxidizing agents
Strontium bromate
[ "Chemistry" ]
134
[ "Redox", "Inorganic compounds", "Oxidizing agents", "Inorganic compound stubs", "Bromates" ]
5,309,463
https://en.wikipedia.org/wiki/Camber%20beam
In building, a camber beam is a piece of timber cut archwise, or of steel bent or rolled, with an obtuse angle in the middle, commonly used in platforms, such as church leads, and on other occasions where long and strong beams are required. The camber curve is ideally a parabola, but in practice a circular segment, since even with modern materials and calculations cambers are imprecise. A camber beam is much stronger than another of the same size, since, being laid with the hollow side downwards as they usually are, such beams form a kind of supporting arch. References External links Architectural elements Building
Camber beam
[ "Technology", "Engineering" ]
125
[ "Building", "Building engineering", "Construction", "Architectural elements", "Components", "Architecture" ]
5,310,739
https://en.wikipedia.org/wiki/Dead-end%20elimination
The dead-end elimination algorithm (DEE) is a method for minimizing a function over a discrete set of independent variables. The basic idea is to identify "dead ends", i.e., combinations of variables that are not necessary to define a global minimum because there is always a way of replacing such a combination by a better or equivalent one. Then we can refrain from searching such combinations further. Hence, dead-end elimination is a mirror image of dynamic programming, in which "good" combinations are identified and explored further. Although the method itself is general, it has been developed and applied mainly to the problems of predicting and designing the structures of proteins (and in this wise was cited in the Scientific Background to the 2024 Nobel Prize in Chemistry). It is closely related to the notion of dominance in optimization, also known as substitutability in a constraint satisfaction problem. The original description and proof of the dead-end elimination theorem can be found in the 1992 paper by Desmet and co-workers (see References). Basic requirements An effective DEE implementation requires four pieces of information: A well-defined finite set of discrete independent variables A precomputed numerical value (considered the "energy") associated with each element in the set of variables (and possibly with their pairs, triples, etc.) A criterion or criteria for determining when an element is a "dead end", that is, when it cannot possibly be a member of the solution set An objective function (considered the "energy function") to be minimized Note that the criteria can easily be reversed to identify the maximum of a given function as well. Applications to protein structure prediction Dead-end elimination has been used effectively to predict the structure of side chains on a given protein backbone structure by minimizing an energy function. The dihedral angle search space of the side chains is restricted to a discrete set of rotamers for each amino acid position in the protein (which is, obviously, of fixed length). The original DEE description included criteria for the elimination of single rotamers and of rotamer pairs, although this can be expanded. In the following discussion, let N be the length of the protein, and write i_A for the choice of rotamer A at position i. Since atoms in proteins are assumed to interact only by two-body potentials, the energy may be written as a sum of self-energies E(i_A) of the chosen rotamers plus pair energies E(i_A, j_B) between rotamers chosen at different positions (the explicit sum is sketched below), where E(i_A) represents the "self-energy" of a particular rotamer A at position i, and E(i_A, j_B) represents the "pair energy" of the rotamers A and B at positions i and j. Also note that the pair energy between a rotamer and itself, E(i_A, i_A), is taken to be zero and thus does not affect the summations. This notation simplifies the description of the pairs criterion below. Singles elimination criterion If a particular rotamer A of side chain i cannot possibly give a better energy than another rotamer B of the same side chain, then rotamer A can be eliminated from further consideration, which reduces the search space. Mathematically, this condition is expressed by the inequality sketched below, where min_X E(i_A, j_X) is the minimum (best) energy possible between rotamer A of side chain i and any rotamer X of side chain j. Similarly, max_X E(i_B, j_X) is the maximum (worst) energy possible between rotamer B of side chain i and any rotamer X of side chain j. 
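A sketch of the two relations referred to above, in the notation just introduced. This is the form standard in the dead-end elimination literature; the article's original typesetting of these expressions is not preserved in the text, so the presentation here should be read as a reconstruction rather than a verbatim quotation.

```latex
E_{\text{total}} \;=\; \sum_{i} E(i_{A}) \;+\; \sum_{i}\sum_{j>i} E(i_{A}, j_{B}),
```

where A and B stand for the rotamers currently assigned at positions i and j. The original (Desmet) singles elimination criterion then reads:

```latex
E(i_{A}) + \sum_{j \neq i} \min_{X} E(i_{A}, j_{X})
\;>\;
E(i_{B}) + \sum_{j \neq i} \max_{X} E(i_{B}, j_{X})
\;\;\Longrightarrow\;\; \text{rotamer } A \text{ at position } i \text{ can be eliminated.}
```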
For brevity, we define the shorthand variable ε(i_A, j_B), the intrinsic energy of a pair of rotamers A and B at positions i and j, respectively: the sum of their two self-energies and their pair energy. A given pair of rotamers A and B at positions i and j, respectively, cannot both be in the final solution (although one or the other may be) if there is another pair C and D at the same two positions that always gives a better energy. Expressed mathematically, the intrinsic energy of the first pair plus the best (minimum) possible sum of its interactions with the rotamers at all other positions must exceed the intrinsic energy of the second pair plus the worst (maximum) possible such sum (a sketch is given below). Energy matrices For large proteins, the matrices of precomputed energies can become costly to store. Let N be the number of amino acid positions, as above, and let p be the number of rotamers at each position (this is usually, but not necessarily, constant over all positions). The self-energy matrix for a given position requires p entries, so the total number of self-energies to store is Np. Each pair energy matrix between two positions i and j, for p discrete rotamers at each position, requires a p × p matrix. This makes the total number of entries in the unreduced pair matrices of order N²p². This can be trimmed somewhat, at the cost of additional complexity in implementation, because pair energies are symmetrical and the pair energy between a rotamer and itself is zero. Implementation and efficiency The above two criteria are normally applied iteratively until convergence, defined as the point at which no more rotamers or pairs can be eliminated. Since this is normally a reduction in the sample space by many orders of magnitude, simple enumeration will suffice to determine the minimum within this pared-down set. Given this model, it is clear that the DEE algorithm is guaranteed to find the optimal solution; that is, it is a global optimization process. The single-rotamer search scales quadratically in time with the total number of rotamers. The pair search scales cubically and is the slowest part of the algorithm (aside from energy calculations). This is a dramatic improvement over brute-force enumeration, which scales exponentially, as p^N. A large-scale benchmark of DEE compared with alternative methods of protein structure prediction and design finds that DEE reliably converges to the optimal solution for protein lengths for which it runs in a reasonable amount of time. It significantly outperforms the alternatives under consideration, which involved techniques derived from mean field theory, genetic algorithms, and the Monte Carlo method. However, the other algorithms are appreciably faster than DEE and thus can be applied to larger and more complex problems; their relative accuracy can be extrapolated from a comparison to the DEE solution within the scope of problems accessible to DEE. Protein design The preceding discussion implicitly assumed that the rotamers are all different orientations of the same amino acid side chain. That is, the sequence of the protein was assumed to be fixed. It is also possible to allow multiple side chains to "compete" over a position by including both types of side chains in the set of rotamers for that position. This allows a novel sequence to be designed onto a given protein backbone. A short zinc finger protein fold has been redesigned this way. However, this greatly increases the number of rotamers per position and still requires a fixed protein length. Generalizations More powerful and more general criteria have been introduced that improve both the efficiency and the eliminating power of the method for both prediction and design applications. 
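A sketch of the pairs criterion described earlier in this section, again in the reconstructed notation rather than the article's original typesetting (the exact bracketing in the source may differ, but this is the standard published form):

```latex
\varepsilon(i_A, j_B) = E(i_A) + E(j_B) + E(i_A, j_B),
```

```latex
\varepsilon(i_A, j_B) + \sum_{k \neq i,j} \min_{X}\bigl[E(i_A, k_X) + E(j_B, k_X)\bigr]
\;>\;
\varepsilon(i_C, j_D) + \sum_{k \neq i,j} \max_{X}\bigl[E(i_C, k_X) + E(j_D, k_X)\bigr]
```

implies that the pair (i_A, j_B) cannot both be present in the global minimum, although either rotamer may still appear with a different partner.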
One example is a refinement of the singles elimination criterion known as the Goldstein criterion, which arises from fairly straightforward algebraic manipulation before applying the minimization: rotamer A at position i can be eliminated if some alternative rotamer B from the set at position i always contributes less to the total energy than A does, that is, if the energy difference between A and B remains unfavorable to A even in A's best case (a sketch and an implementation outline are given below). This is an improvement over the original criterion, which requires comparison of the best possible (that is, the smallest) energy contribution from rotamer A with the worst possible contribution from an alternative rotamer. An extended discussion of elaborate DEE criteria and a benchmark of their relative performance can be found in the literature (see References). References Desmet J, de Maeyer M, Hazes B, Lasters I (1992). The dead-end elimination theorem and its use in protein side-chain positioning. Nature 356, 539–542. Voigt CA, Gordon DB, Mayo SL (2000). Trading accuracy for speed: a quantitative comparison of search algorithms in protein sequence design. J Mol Biol 299(3):789–803. Dahiyat BI, Mayo SL (1997). De novo protein design: fully automated sequence selection. Science 278(5335):82–87. Goldstein RF (1994). Efficient rotamer elimination applied to protein side-chains and related spin glasses. Biophys J 66(5):1335–1340. Pierce NA, Spriet JA, Desmet J, Mayo SL (2000). Conformational splitting: a more powerful criterion for dead-end elimination. J Comput Chem 21: 999–1009. Mathematical optimization Protein methods
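To make the Goldstein condition concrete — eliminate rotamer A at position i if there is some rotamer B at i with E(i_A) − E(i_B) + Σ_{j≠i} min_X [E(i_A, j_X) − E(i_B, j_X)] > 0 — here is a small, self-contained Python sketch of one elimination pass. The data layout (a list of NumPy arrays for self-energies, a dictionary of pair-energy matrices) and all names are choices made for this illustration only; this is not code from the cited papers.

```python
import numpy as np

def goldstein_singles_pass(self_E, pair_E):
    """One pass of Goldstein singles elimination.

    self_E[i]      : 1-D array of self-energies for the rotamers at position i.
    pair_E[(i, j)] : 2-D array; pair_E[(i, j)][a, b] is the pair energy of rotamer a at i
                     with rotamer b at j (both key orders present, pair_E[(j, i)] = pair_E[(i, j)].T).
    Returns a dict mapping each position to a boolean mask of surviving rotamers.
    """
    n_pos = len(self_E)
    alive = {i: np.ones(len(self_E[i]), dtype=bool) for i in range(n_pos)}

    for i in range(n_pos):
        for a in range(len(self_E[i])):          # candidate rotamer to eliminate
            if not alive[i][a]:
                continue
            for b in range(len(self_E[i])):      # competing rotamer
                if a == b or not alive[i][b]:
                    continue
                bound = self_E[i][a] - self_E[i][b]
                for j in range(n_pos):
                    if j == i:
                        continue
                    diff = pair_E[(i, j)][a, :] - pair_E[(i, j)][b, :]
                    bound += diff[alive[j]].min()   # best case for a relative to b at position j
                if bound > 0:                        # a can never beat b: dead end
                    alive[i][a] = False
                    break
    return alive
```

As the article notes, such a pass would be repeated until no further rotamers (or pairs) can be eliminated, after which the surviving combinations are enumerated.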
Dead-end elimination
[ "Chemistry", "Mathematics", "Biology" ]
1,667
[ "Biochemistry methods", "Mathematical analysis", "Protein methods", "Protein biochemistry", "Mathematical optimization" ]
5,310,787
https://en.wikipedia.org/wiki/Efalizumab
Efalizumab (brand name Raptiva, Genentech, Merck Serono) is a formerly available medication designed to treat autoimmune diseases, originally marketed to treat psoriasis. As implied by the suffix -zumab, it is a recombinant humanized monoclonal antibody administered once weekly by subcutaneous injection. Efalizumab binds to the CD11a subunit of lymphocyte function-associated antigen 1 and acts as an immunosuppressant by inhibiting lymphocyte activation and cell migration out of blood vessels into tissues. Efalizumab was associated with fatal brain infections and was withdrawn from the market in 2009. Known side effects include bacterial sepsis, viral meningitis, invasive fungal disease and progressive multifocal leukoencephalopathy (PML), a brain infection caused by reactivation of latent JC virus infection. Four cases of PML were reported in plaque psoriasis patients, an incidence of approximately one in 500 treated patients. Due to the risk of PML, the European Medicines Agency (EMA) and the Food and Drug Administration (FDA) recommend suspension from the market in the European Union and the United States, respectively. In April 2009, Genentech Inc. announced a phased voluntary withdrawal of Raptiva from the U.S. market. References Recombinant proteins Immunosuppressants Withdrawn drugs Drugs developed by Merck Drugs developed by Hoffmann-La Roche Drugs developed by Genentech
Efalizumab
[ "Chemistry", "Biology" ]
322
[ "Biotechnology products", "Recombinant proteins", "Drug safety", "Withdrawn drugs" ]
5,311,145
https://en.wikipedia.org/wiki/Lebesgue%20spine
In mathematics, in the area of potential theory, a Lebesgue spine or Lebesgue thorn is a type of set used for discussing solutions to the Dirichlet problem and related problems of potential theory. The Lebesgue spine was introduced in 1912 by Henri Lebesgue to demonstrate that the Dirichlet problem does not always have a solution, particularly when the boundary has a sufficiently sharp edge protruding into the interior of the region. Definition A typical Lebesgue spine is constructed in Euclidean space of dimension n ≥ 3. The important features of this set are that it is connected and path-connected in the euclidean topology, and the origin is a limit point of the set, and yet the set is thin at the origin, as defined in the article Fine topology (potential theory). Observations The set is not closed in the euclidean topology since it does not contain the origin, which is a limit point of the set, but the set is closed in the fine topology. In comparison, it is not possible in the plane (dimension two) to construct such a connected set which is thin at the origin. References J. L. Doob. Classical Potential Theory and Its Probabilistic Counterpart, Springer-Verlag, Berlin Heidelberg New York, . L. L. Helms (1975). Introduction to potential theory. R. E. Krieger . Potential theory
Lebesgue spine
[ "Mathematics" ]
267
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Potential theory", "Mathematical relations" ]
5,312,299
https://en.wikipedia.org/wiki/Impedance%20cardiography
Impedance cardiography (ICG) is a non-invasive technology that measures the total electrical conductivity of the thorax and its changes over time in order to process continuously a number of cardiodynamic parameters, such as stroke volume (SV), heart rate (HR), cardiac output (CO), ventricular ejection time (VET), and pre-ejection period. It detects the impedance changes caused by a high-frequency, low-magnitude current flowing through the thorax between two additional pairs of electrodes located outside of the measured segment. The sensing electrodes also detect the ECG signal, which is used as a timing clock of the system. Introduction Impedance cardiography (ICG), also referred to as electrical impedance plethysmography (EIP) or Thoracic Electrical Bioimpedance (TEB), has been researched since the 1940s. NASA helped develop the technology in the 1960s. The use of impedance cardiography in psychophysiological research was pioneered by the publication of an article by Miller and Horvath in 1978. Subsequently, the recommendations of Miller and Horvath were confirmed by a standards group in 1990. A comprehensive list of references is available at ICG Publications. With ICG, four dual disposable sensors placed on the neck and chest are used to transmit and detect electrical and impedance changes in the thorax, which are used to measure and calculate cardiodynamic parameters. Process Four pairs of electrodes are placed at the neck and the diaphragm level, delineating the thorax. A high-frequency, low-magnitude current is transmitted through the chest in a direction parallel with the spine from the set of outside pairs. The current seeks the path of least resistance: the blood-filled aorta (the systolic phase signal) and both the superior and inferior vena cava (the diastolic phase signal, mostly related to respiration). The inside pairs, placed at the anatomic landmarks delineating the thorax, sense the impedance signals and the ECG signal. ICG measures the baseline impedance (resistance) to this current. With each heartbeat, blood volume and velocity in the aorta change. ICG measures the corresponding change in impedance and its timing. ICG attributes the changes in impedance to (a) the volumetric expansion of the aorta (this is the main difference between ICG and electrical cardiometry) and (b) the blood velocity-caused alignment of erythrocytes as a function of blood velocity. ICG uses the baseline and changes in impedance to measure and calculate hemodynamic parameters. Hemodynamics Hemodynamics is a subchapter of cardiovascular physiology, which is concerned with the forces generated by the heart and the resulting motion of blood through the cardiovascular system. These forces demonstrate themselves to the clinician as paired values of blood flow and blood pressure measured simultaneously at the output node of the left heart. Hemodynamics is a fluidic counterpart to Ohm's law in electronics: pressure is equivalent to voltage, flow to current, vascular resistance to electrical resistance and myocardial work to power. The relationship between the instantaneous values of aortic blood pressure and blood flow through the aortic valve over one heartbeat interval and their mean values is depicted in Fig. 1. Their instantaneous values may be used in research; in clinical practice, their mean values, MAP and SV, are adequate. Blood flow parameters Systemic (global) blood flow parameters are (a) the blood flow per heartbeat, the Stroke Volume, SV [ml/beat], and (b) the blood flow per minute, the Cardiac Output, CO [l/min]. 
There is a clear relationship between these blood flow parameters: CO[l/min] = (SV[ml] × HR[bpm])/1000 {Eq.1} where HR is the heart rate frequency (beats per minute, bpm). Since the normal value of CO is proportional to the body mass it has to perfuse, one "normal" value of SV and CO for all adults cannot exist. All blood flow parameters have to be indexed. The accepted convention is to index them by the Body Surface Area, BSA [m2], by the DuBois & DuBois formula, a function of height and weight: BSA[m2] = W[kg]^0.425 × H[cm]^0.725 × 0.007184 {Eq.2} The resulting indexed parameters are the Stroke Index, SI (ml/beat/m2), defined as SI[ml/beat/m2] = SV[ml]/BSA[m2] {Eq.3} and the Cardiac Index, CI (l/min/m2), defined as CI[l/min/m2] = CO[l/min]/BSA[m2] {Eq.4} These indexed blood flow parameters exhibit typical ranges: for the Stroke Index, 35 < SI_typical < 65 ml/beat/m2; for the Cardiac Index, 2.8 < CI_typical < 4.2 l/min/m2. Eq.1 for indexed parameters then changes to CI[l/min/m2] = (SI[ml/beat/m2] × HR[bpm])/1000 {Eq.1a} Oxygen transport The primary function of the cardiovascular system is transport of oxygen: blood is the vehicle, oxygen is the cargo. The task of the healthy cardiovascular system is to provide adequate perfusion to all organs and to maintain a dynamic equilibrium between oxygen demand and oxygen delivery. In a healthy person, the cardiovascular system always increases blood flow in response to increased oxygen demand. In a hemodynamically compromised person, when the system is unable to satisfy increased oxygen demand, the blood flow to organs lower on the oxygen delivery priority list is reduced and these organs may, eventually, fail. Digestive disorders, male impotence, tiredness, sleepwalking, and environmental temperature intolerance are classic examples of a low-flow state, resulting in reduced blood flow. Modulators SI variability and MAP variability are brought about through the activity of hemodynamic modulators. The conventional cardiovascular physiology terms for the hemodynamic modulators are preload, contractility and afterload. They deal with (a) the inertial filling forces of blood return into the atrium (preload), which stretch the myocardial fibers, thus storing energy in them, (b) the force by which the heart muscle fibers shorten, thus releasing the energy stored in them in order to expel part of the blood in the ventricle into the vasculature (contractility), and (c) the forces the pump has to overcome in order to deliver a bolus of blood into the aorta with each contraction (afterload). The level of preload is currently assessed either from the PAOP (pulmonary artery occluded pressure) in a catheterized patient, or from the EDI (end-diastolic index) by use of ultrasound. Contractility is not routinely assessed; quite often inotropy and contractility are interchanged as equal terms. Afterload is assessed from the SVRI value. Rather than using the terms preload, contractility and afterload, the preferential terminology and methodology in per-beat hemodynamics is to use the terms for actual hemodynamic modulating tools, which either the body utilizes or the clinician has in his toolbox to control the hemodynamic state: The preload and the Frank-Starling (mechanically) induced level of contractility is modulated by variation of intravascular volume (volume expansion or volume reduction/diuresis). 
Pharmacological modulation of contractility is performed with cardioactive inotropic agents (positive or negative inotropes) being present in the blood stream and affecting the rate of contraction of myocardial fibers. The afterload is modulated by varying the caliber of sphincters at the input and output of each organ, thus the vascular resistance, with the vasoactive pharmacological agents (vasoconstrictors or vasodilators and/or ACE Inhibitors and/or ARBs)(ACE = Angiotensin-converting-enzyme; ARB = Angiotensin-receptor-blocker). Afterload also increases with increasing blood viscosity, however, with the exception of extremely hemodiluted or hemoconcentrated patients, this parameter is not routinely considered in clinical practice. With the exception of volume expansion, which can be accomplished only by physical means (intravenous or oral intake of fluids), all other hemodynamic modulating tools are pharmacological, cardioactive or vasoactive agents. The measurement of CI and its derivatives allow clinicians to make timely patient assessment, diagnosis, prognosis, and treatment decisions. It has been well established that both trained and untrained physicians alike are unable to estimate cardiac output through physical assessment alone. Invasive monitoring Clinical measurement of cardiac output has been available since the 1970s. However, this blood flow measurement is highly invasive, utilizing a flow-directed, thermodilution catheter (also known as the Swan-Ganz catheter), which represents significant risks to the patient. In addition, this technique is costly (several hundred dollars per procedure) and requires a skilled physician and a sterile environment for catheter insertion. As a result, it has been used only in very narrow strata (less than 2%) of critically ill and high-risk patients in whom the knowledge of blood flow and oxygen transport outweighed the risks of the method. In the United States, it is estimated that at least two million pulmonary artery catheter monitoring procedures are performed annually, most often in peri-operative cardiac and vascular surgical patients, decompensated heart failure, multi-organ failure, and trauma. Noninvasive monitoring In theory, a noninvasive way to monitor hemodynamics would provide exceptional clinical value because data similar to invasive hemodynamic monitoring methods could be obtained with much lower cost and no risk. While noninvasive hemodynamic monitoring can be used in patients who previously required an invasive procedure, the largest impact can be made in patients and care environments where invasive hemodynamic monitoring was neither possible nor worth the risk or cost. Because of its safety and low cost, the applicability of vital hemodynamic measurements could be extended to significantly more patients, including outpatients with chronic diseases. ICG has even been used in extreme conditions such as outer space and a Mt. Everest expedition. Heart failure, hypertension, pacemaker, and dyspnea patients are four conditions in which outpatient noninvasive hemodynamic monitoring can play an important role in the assessment, diagnosis, prognosis, and treatment. Some studies have shown ICG cardiac output is accurate, while other studies have shown it is inaccurate. Use of ICG has been shown to improve blood pressure control in resistant hypertension when used by both specialists and general practitioners. ICG has also been shown to predict worsening status in heart failure. 
ICG Parameters The electrical and impedance signals are processed to determine fiducial points, which are then utilized to measure and calculate hemodynamic parameters, such as cardiac output, stroke volume, systemic vascular resistance, thoracic fluid content, acceleration index, and systolic time ratio. References External links http://bomed.us/teb.html Diagnostic cardiology Impedance measurements Medical equipment Measuring instruments Electrophysiology
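To make the indexing arithmetic of Eqs. 1–4 above concrete, here is a small Python sketch that computes CO, SI and CI from stroke volume, heart rate, weight and height via the DuBois & DuBois body surface area. The example numbers are hypothetical and only serve to show typical magnitudes; the function names are this sketch's own.

```python
def dubois_bsa(weight_kg, height_cm):
    """Body surface area (m^2) by the DuBois & DuBois formula (Eq. 2)."""
    return (weight_kg ** 0.425) * (height_cm ** 0.725) * 0.007184

def indexed_flow(sv_ml, hr_bpm, weight_kg, height_cm):
    """Return (CO in l/min, SI in ml/beat/m^2, CI in l/min/m^2) per Eqs. 1, 3 and 4."""
    bsa = dubois_bsa(weight_kg, height_cm)
    co = sv_ml * hr_bpm / 1000.0        # Eq. 1
    si = sv_ml / bsa                    # Eq. 3
    ci = co / bsa                       # Eq. 4
    return co, si, ci

# Hypothetical example: SV = 70 ml, HR = 70 bpm, 75 kg, 175 cm.
co, si, ci = indexed_flow(70, 70, 75, 175)
print(f"CO = {co:.1f} l/min, SI = {si:.1f} ml/beat/m2, CI = {ci:.2f} l/min/m2")
```

For these example inputs the stroke index falls inside the typical 35–65 ml/beat/m2 range quoted in the article, which is the kind of check a clinician or monitor would perform with the indexed values.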
Impedance cardiography
[ "Physics", "Technology", "Engineering", "Biology" ]
2,413
[ "Physical quantities", "Measuring instruments", "Medical equipment", "Impedance measurements", "Electrical resistance and conductance", "Medical technology" ]
37,284,209
https://en.wikipedia.org/wiki/Modified%20compression%20field%20theory
The modified compression field theory (MCFT) is a general model for the load-deformation behaviour of two-dimensional cracked reinforced concrete subjected to shear. It models cracked concrete by considering concrete stresses in the principal directions, summed with reinforcement stresses that are assumed to act only axially. The concrete stress-strain behaviour was derived originally from Vecchio's tests and has since been confirmed with about 250 experiments performed on two large special-purpose testing machines at the University of Toronto. Similar machines have been built in Japan and the United States, providing additional confirmation of the quality of the method's predictions. The most important assumption in the MCFT model is that the cracked concrete in reinforced concrete can be treated as a new material with empirically defined stress–strain behaviour. This behaviour can differ from the traditional stress–strain curve of a cylinder, for example. The strains used for these stress–strain relationships are average strains; that is, they lump together the combined effects of local strains at cracks, strains between cracks, bond-slip, and crack slip. The calculated stresses are also average stresses in that they implicitly include stresses between cracks, stresses at cracks, interface shear on cracks, and dowel action. For the use of these average stresses and strains to be a reasonable assumption, the distances used in determining the average behaviour must include a few cracks. History Frank J. Vecchio defined the original form of the MCFT in 1982 from the testing of 30 reinforced concrete panels subjected to uniform strain states in a specially built tester. The theory of the MCFT traces back through the compression field theory of 1978 to the diagonal compression field theory of 1974. The definitive description of the MCFT is in the 1986 American Concrete Institute paper "The Modified Compression Field Theory for Reinforced Concrete Elements Subjected to Shear". References Reinforced concrete Structural engineering
Modified compression field theory
[ "Engineering" ]
363
[ "Structural engineering", "Civil engineering", "Construction" ]
42,997,120
https://en.wikipedia.org/wiki/Isentropic%20nozzle%20flow
In fluid mechanics, isentropic nozzle flow describes the movement of a fluid through a narrow opening without an increase in entropy (an isentropic process). Overview Whenever a gas is forced through a tube, the gaseous molecules are deflected by the tube's walls. If the speed of the gas is much less than the speed of sound, the gas density will remain constant and the velocity of the flow will increase. However, as the speed of the flow approaches the speed of sound, compressibility effects on the gas must be considered. The density of the gas becomes position dependent. While considering flow through a tube, if the flow is very gradually compressed (i.e. the area decreases) and then gradually expanded (i.e. the area increases), the flow conditions are restored (i.e. they return to their initial values). So, such a process is a reversible process. According to the second law of thermodynamics, whenever there is a reversible and adiabatic flow, a constant value of entropy is maintained. Engineers classify this type of flow as an isentropic flow of fluids. Isentropic is the combination of the Greek word "iso" (meaning "same") and entropy. When the change in flow variables is small and gradual, isentropic flows occur. The generation of sound waves is an isentropic process. A supersonic flow that is turned while there is an increase in flow area is also isentropic. Since there is an increase in area, we call this an isentropic expansion. If a supersonic flow is turned abruptly and the flow area decreases, the flow is irreversible due to the generation of shock waves. The isentropic relations are no longer valid and the flow is governed by the oblique or normal shock relations. Set of Equations Below are nine equations commonly used when evaluating isentropic flow conditions. These assume the gas is calorically perfect; i.e. the ratio of specific heats is a constant across the temperature range. In typical cases the actual variation is only slight. Properties without a subscript are evaluated at the point of interest (this point may be chosen anywhere along the length of the nozzle, but once chosen, all properties in a calculation must be evaluated at the same point). Subscript 0 denotes a property at total/stagnation conditions. In a rocket or jet engine, this means the conditions inside the combustion chamber. For example, p0 is total pressure/stagnation pressure/chamber pressure (all equivalent). M is the local Mach number of the gas; V is the speed of the gas (m/s); a is the local speed of sound through the gas (m/s); γ is the ratio of specific heats of the gas; p is the pressure of the gas (Pa); ρ is the density of the gas (kg/m3); T is the temperature of the gas (K); A is the cross-sectional area of the nozzle at the point of interest (m2); A* is the cross-sectional area of the nozzle at the sonic point, or the point where gas velocity is Mach 1 (m2). Ideally this will occur at the nozzle throat. Stagnation properties In fluid dynamics, a stagnation point is a point in a flow field where the local velocity of the fluid is zero. The isentropic stagnation state is the state a flowing fluid would attain if it underwent a reversible adiabatic deceleration to zero velocity. There are both actual and isentropic stagnation states for a typical gas or vapor. Sometimes it is advantageous to make a distinction between the actual and the isentropic stagnation states. 
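The list above defines the symbols but the equations themselves are not reproduced; the following are the standard calorically-perfect-gas isentropic relations conventionally collected in such a list, written in the notation just defined (with R taken here as the specific gas constant of the gas — an assumption of this sketch, since the article text does not state which gas constant it uses):

```latex
a = \sqrt{\gamma R T}, \qquad M = \frac{V}{a}, \qquad
\frac{T_0}{T} = 1 + \frac{\gamma - 1}{2} M^2, \qquad
\frac{p_0}{p} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{\gamma/(\gamma-1)}, \qquad
\frac{\rho_0}{\rho} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{1/(\gamma-1)},
```

```latex
\frac{A}{A^{*}} = \frac{1}{M}\left[\frac{2}{\gamma+1}\left(1 + \frac{\gamma-1}{2}M^{2}\right)\right]^{(\gamma+1)/[2(\gamma-1)]}.
```

Given the Mach number at a station and the stagnation (chamber) conditions, these relations give the local temperature, pressure, density and area ratio used throughout the analysis below.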
The actual stagnation state is the state achieved after an actual deceleration to zero velocity (as at the nose of a body placed in a fluid stream), and there may be irreversibility associated with the deceleration process. Therefore, the term "stagnation property" is sometimes reserved for the properties associated with the actual state, and the term total property is used for the isentropic stagnation state. The enthalpy is the same for both the actual and isentropic stagnation states (assuming that the actual process is adiabatic). Therefore, for an ideal gas, the actual stagnation temperature is the same as the isentropic stagnation temperature. However, the actual stagnation pressure may be less than the isentropic stagnation pressure. For this reason the term "total pressure" (meaning isentropic stagnation pressure) has particular meaning compared to the actual stagnation pressure. Flow analysis The isentropic efficiency compares the actual performance of a device with the performance it would have in an ideal isentropic process. The variation of fluid density for compressible flows requires attention to density and other fluid property relationships. The fluid equation of state, often unimportant for incompressible flows, is vital in the analysis of compressible flows. Also, temperature variations for compressible flows are usually significant and thus the energy equation is important. Curious phenomena can occur with compressible flows. For simplicity, the gas is assumed to be an ideal gas. The gas flow is isentropic. The gas flow is constant. The gas flow is along a straight line from gas inlet to exhaust gas exit. The gas flow behavior is compressible. There are numerous applications where a steady, uniform, isentropic flow is a good approximation to the flow in conduits. These include the flow through a jet engine, through the nozzle of a rocket, from a broken gas line, and past the blades of a turbine. M = Mach number, V = velocity, R = universal gas constant, p = pressure, γ = specific heat ratio, T = temperature, * = sonic conditions, ρ = density, A = area, and MW = molar mass. To model such situations, consider the control volume in the changing area of the conduit of the figure. The continuity equation between two sections an infinitesimal distance apart is ρAV = (ρ + dρ)(A + dA)(V + dV). If only the first-order terms in a differential quantity are retained, continuity takes the form dρ/ρ + dA/A + dV/V = 0. The energy equation is V²/2 + h = (V + dV)²/2 + (h + dh). This simplifies to, neglecting higher-order terms, dh + V dV = 0. Assuming an isentropic flow, the energy equation becomes dp/ρ + V dV = 0. Substituting from the continuity equation, and using the speed of sound a² = dp/dρ, one obtains, in terms of the Mach number, dA/A = (M² − 1) dV/V. This equation applies to a steady, uniform, isentropic flow. There are several observations that can be made from an analysis of this equation. They are: For a subsonic flow in an expanding conduit (M < 1 and dA > 0), the flow is decelerating (dV < 0). For a subsonic flow in a converging conduit (M < 1 and dA < 0), the flow is accelerating (dV > 0). For a supersonic flow in an expanding conduit (M > 1 and dA > 0), the flow is accelerating (dV > 0). For a supersonic flow in a converging conduit (M > 1 and dA < 0), the flow is decelerating (dV < 0). At a throat where dA = 0, either M = 1 or dV = 0 (the flow could be accelerating through M = 1, or it may reach a velocity such that dV = 0). Supersonic flow A nozzle for a supersonic flow must increase in area in the flow direction, and a diffuser must decrease in area, opposite to a nozzle and diffuser for a subsonic flow. So, for a supersonic flow to develop from a reservoir where the velocity is zero, the subsonic flow must first accelerate through a converging area to a throat, followed by continued acceleration through an enlarging area. 
The nozzles on a rocket designed to place satellites in orbit are constructed using such converging-diverging geometry. The energy and continuity equations can take on particularly helpful forms for the steady, uniform, isentropic flow through the nozzle. Apply the energy equation, with no heat transfer or shaft work, between the reservoir and some location in the nozzle to obtain cp T0 = cp T + V^2/2. Any quantity with a zero subscript refers to a stagnation point where the velocity is zero, such as in the reservoir. Using several thermodynamic relations, the equations can be put in the forms T0/T = 1 + (γ − 1)M^2/2, p0/p = [1 + (γ − 1)M^2/2]^(γ/(γ−1)) and ρ0/ρ = [1 + (γ − 1)M^2/2]^(1/(γ−1)). If the above equations are applied at the throat (the critical area signified by an asterisk (*) superscript, where M = 1), the energy equation takes the forms T*/T0 = 2/(γ + 1), p*/p0 = [2/(γ + 1)]^(γ/(γ−1)) and ρ*/ρ0 = [2/(γ + 1)]^(1/(γ−1)). The critical area is often referenced even though a throat does not exist. For air with γ = 1.4, the equations above provide T*/T0 = 0.8333, p*/p0 = 0.5283 and ρ*/ρ0 = 0.6339. The mass flux through the nozzle is of interest and is given by ṁ = ρAV. With the use of the relations above, the mass flux, after applying some algebra, can be expressed as ṁ = M A p0 √(γ/(R T0)) [1 + (γ − 1)M^2/2]^(−(γ+1)/(2(γ−1))). If the critical area is selected, where M = 1, this takes the form ṁ = A* p0 √(γ/(R T0)) [(γ + 1)/2]^(−(γ+1)/(2(γ−1))), which, when combined with the previous expression, provides A/A* = (1/M)[(2 + (γ − 1)M^2)/(γ + 1)]^((γ+1)/(2(γ−1))). Converging nozzle Consider a converging nozzle connecting a reservoir with a receiver. If the reservoir pressure p0 is held constant and the receiver pressure pr reduced, the Mach number at the exit of the nozzle will increase until Me = 1 is reached, indicated by the left curve in figure 2. After Me = 1 is reached at the nozzle exit, which occurs for pr/p0 = 0.5283, the condition of choked flow occurs and the velocity throughout the nozzle cannot change with further decreases in pr. This is due to the fact that pressure changes downstream of the exit cannot travel upstream to cause changes in the flow conditions. The right curve of figure 2 represents the case when the reservoir pressure is increased and the receiver pressure is held constant. When pr/p0 drops below 0.5283, the condition of choked flow also occurs; but the mass-flux relation indicates that the mass flux will continue to increase as p0 is increased. This is the case when a gas line ruptures. It is interesting that the exit pressure pe is able to be greater than the receiver pressure pr. Nature allows this by providing the streamlines of a gas the ability to make a sudden change of direction at the exit and expand to a much greater area, resulting in a reduction of the pressure from pe to pr. The case of a converging-diverging nozzle allows a supersonic flow to occur, providing the receiver pressure is sufficiently low. This is shown in figure 3 assuming a constant reservoir pressure with a decreasing receiver pressure. If the receiver pressure is equal to the reservoir pressure, no flow occurs, represented by curve A. If pr is slightly less than p0, the flow is subsonic throughout, with a minimum pressure at the throat, represented by curve B. As the pressure is reduced still further, a pressure is reached that results in M = 1 at the throat with subsonic flow throughout the remainder of the nozzle; this is represented by curve C. There is another receiver pressure substantially below that of curve C that also results in isentropic flow throughout the nozzle, represented by curve D; after the throat the flow is supersonic. Pressures in the receiver in between those of curve C and curve D result in non-isentropic flow (a shock wave occurs in the flow). If pr is below that of curve D, the exit pressure pe is greater than pr. Once again, for receiver pressures below that of curve C, the mass flux remains constant since the conditions at the throat remain unchanged. It may appear that the supersonic flow will tend to separate from the nozzle, but just the opposite is true.
A supersonic flow can turn very sharp angles, since nature provides expansion fans that do not exist in subsonic flows. To avoid separation in subsonic nozzles, the expansion angle should not exceed 10°. For larger angles, vanes are used so that the angle between the vanes does not exceed 10°. See also de Laval nozzle Fanno flow Supersonic gas separation Compressible flow References Colbert, Elton J. Isentropic Flow Through Nozzles. University of Nevada, Reno. 3 May 2001. Accessed 15 July 2014. Benson, Tom. "Isentropic Flow". NASA.gov. National Aeronautics and Space Administration. 21 June 2014. Accessed 15 July 2014. Bar-Meir, Genick. "Isentropic Flow". Potto.org. Potto Project. 21 November 2007. Accessed 15 July 2014. Thermodynamic processes Thermodynamic entropy
Isentropic nozzle flow
[ "Physics", "Chemistry" ]
2,461
[ "Physical quantities", "Thermodynamic processes", "Thermodynamic entropy", "Entropy", "Thermodynamics", "Statistical mechanics" ]
31,695,191
https://en.wikipedia.org/wiki/Reproductive-cell%20cycle%20theory
The reproductive-cell cycle theory posits that the hormones that regulate reproduction act in an antagonistic pleiotropic manner to control aging via cell cycle signaling: promoting growth and development early in life in order to achieve reproduction, but later in life, in a futile attempt to maintain reproduction, becoming dysregulated and driving senescence. Rather than seeing aging as a loss of functionality as we get older, this theory defines aging as any change in an organism over time, as evidenced by the fact that if all chemical reactions in the body were stopped, no change, and thus no aging, would occur. Since the most important change in an organism through time is the chemical reactions that result in a single cell developing into a multicellular organism, whatever controls these chemical reactions that regulate cell growth, development, and death is believed to control aging. The theory argues that these cellular changes are directed by reproductive hormones of the hypothalamic-pituitary-gonadal axis (HPG axis). Receptors for reproductive hormones (such as estrogens, progestogens, androgens and gonadotropins) have been found to be present in all tissues of the body. Thus, HPG axis hormones normally promote growth and development of the organism early in life in order to achieve reproduction. Hormone levels then begin to change in men around age 30 and more abruptly in women when they reach menopause, around age 50. When the HPG axis becomes unbalanced, cellular growth and development is dysregulated, and cell death and dysfunction can occur, both of which can initiate senescence, the accumulated damage to cells, tissues, and organs that occurs with the passage of time and that is associated with functional loss during aging. Evidence supporting this theory comes from disease studies showing that women who reach menopause later have less heart disease and stroke, less dementia, and less osteoporosis, supporting the theory that the longer the HPG axis is in balance, the less likely one is to develop age-related diseases. Conversely, early surgical menopause has been demonstrated to increase the incidence of these diseases. However, the most compelling supportive evidence is from studies of Hormone Replacement Therapy (HRT). Research with women and men undertaking HRT has shown that taking sex hormones that are biologically identical to human hormones delays the onset, decreases the incidence of, and can reverse the course of age-related illnesses such as heart disease, Alzheimer's disease, osteoporosis, and some types of cancer. However, only biological hormones appear to have these effects. The use of non-human or synthetic hormones has been shown to increase the risk of certain of these diseases. Compellingly, 18 studies have demonstrated an increase in longevity for those women taking HRT. Further studies in support of the theory have shown that suppressing the HPG axis, such as when organisms experience caloric restriction, cold, or exercise stress, increases lifespan. This is thought to be an evolutionarily conserved mechanism that allows organisms to suppress HPG axis signaling and reproduction, thereby conserving reproductive resources (germ cells) for a later time when the environment is better suited to raising offspring. By having the same hormones regulate both reproduction and aging, an animal is able to modulate its fertility and its rate of aging based on environmental conditions.
Recent parabiosis studies support many of the tenets of the reproductive-cell cycle theory of aging. In these experiments, where a young mouse is coupled surgically with an aged mouse, circulating factors from the young mouse rejuvenate the tissues of the old mouse. In particular, these studies indicate the importance of circulating factors in regulating the maintenance of neuronal (Villeda et al., 2011), vascular (Katsimpardi et al., 2014), and muscular and liver (Conboy et al., 2005; Sinha et al., 2014) structure and function. See also Endocrinology of reproduction References Biogerontology Endocrinology Human reproduction Theories of ageing Theories of biological ageing Proximate theories of biological ageing
Reproductive-cell cycle theory
[ "Biology" ]
843
[ "Senescence", "Theories of biological ageing" ]
31,696,844
https://en.wikipedia.org/wiki/International%20Journal%20of%20Computer%20Assisted%20Radiology%20and%20Surgery
The International Journal of Computer Assisted Radiology and Surgery (IJCARS) is a journal for cross-disciplinary research, development and applications of Computer Assisted Radiology and Surgery (CARS). The journal promotes interdisciplinary research and development in an international environment, with a focus on the development of digital imaging and computer-based diagnostic and therapeutic procedures, as well as on enhancing the skill levels of health care professionals. The International Society for Computer Aided Surgery (ISCAS) and The Medical Image Computing and Computer Assisted Interventions Society (MICCAI) are involved in the publication of the IJCARS. References External links The Journal at the website of ISCAS World Scientific academic journals Computing in medical imaging Computer science journals Biomedical informatics journals Surgery journals English-language journals
International Journal of Computer Assisted Radiology and Surgery
[ "Biology" ]
148
[ "Bioinformatics", "Biomedical informatics journals" ]
31,698,015
https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Welch%20algorithm
The Berlekamp–Welch algorithm, also known as the Welch–Berlekamp algorithm, is named for Elwyn R. Berlekamp and Lloyd R. Welch. This is a decoder algorithm that efficiently corrects errors in Reed–Solomon codes for an RS(n, k) code, based on the Reed–Solomon original view where a message is either used directly as the coefficients of a polynomial F(x) of degree < k, or used with Lagrange interpolation to generate such a polynomial from the first k input points, and F(x) is then evaluated at the n input points to create an encoded codeword. The goal of the decoder is to recover the original encoding polynomial F(x), using the known inputs a1, ..., an and the received codeword b1, ..., bn with possible errors. It also computes an error polynomial E(x) whose roots correspond to the errors in the received codeword. The key equations Defining e = number of errors, the key set of n equations is bi E(ai) = Q(ai) for i = 1, 2, ..., n, where E(ai) = 0 for the e cases when bi ≠ F(ai), and E(ai) ≠ 0 for the n − e non-error cases where bi = F(ai). These equations can't be solved directly, but by defining Q(x) as the product of E(x) and F(x), Q(x) = E(x) F(x), and adding the constraint that the most significant coefficient of E(x) is ee = 1, the result will lead to a set of equations that can be solved with linear algebra. Writing E(x) = e0 + e1 x + ... + ee x^e and Q(x) = q0 + q1 x + ... + q(n−e−1) x^(n−e−1), that is, Q has degree at most q = n − e − 1, and since ee is constrained to be 1, the equations become q0 + q1 ai + ... + q(n−e−1) ai^(n−e−1) − bi (e0 + e1 ai + ... + e(e−1) ai^(e−1)) = bi ai^e for each i, resulting in a set of equations which can be solved using linear algebra, with time complexity O(n^3). The algorithm begins assuming the maximum number of errors e = ⌊(n-k)/2⌋. If the equations cannot be solved (due to redundancy), e is reduced by 1 and the process repeated, until the equations can be solved or e is reduced to 0, indicating no errors. If Q(x)/E(x) has remainder = 0, then F(x) = Q(x)/E(x) and the code word values F(ai) are calculated for the locations where E(ai) = 0 to recover the original code word. If the remainder ≠ 0, then an uncorrectable error has been detected. Example Consider RS(7,3) (n = 7, k = 3) defined in GF(7) with α = 3 and input values: ai = i-1 : {0,1,2,3,4,5,6}. The message to be systematically encoded is {1,6,3}. Using Lagrange interpolation, F(x) = 3x^2 + 2x + 1, and applying F(x) for a4 = 3 to a7 = 6 results in the code word {1,6,3,6,1,2,2}. Assume errors occur at c2 and c5, resulting in the received code word {1,5,3,6,3,2,2}. Start off with e = 2 and solve the linear equations; using the constraint e2 = 1 and back-substituting from the bottom rows of the system gives E(x) = x^2 + 2x + 4 = (x − 1)(x − 4) and Q(x) = 3x^4 + x^3 + 3x^2 + 3x + 4, so that Q(x)/E(x) = 3x^2 + 2x + 1 = F(x) with remainder = 0. E(ai) = 0 at a2 = 1 and a5 = 4. Calculate F(a2 = 1) = 6 and F(a5 = 4) = 1 to produce the corrected code word {1,6,3,6,1,2,2}. See also Reed–Solomon error correction External links MIT Lecture Notes on Essential Coding Theory – Dr. Madhu Sudan University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra Algebraic Codes on Lines, Planes and Curves, An Engineering Approach – Richard E. Blahut Welch Berlekamp Decoding of Reed–Solomon Codes – L. R. Welch – The patent by Lloyd R. Welch and Elwyn R. Berlekamp Finite fields Coding theory Information theory Error detection and correction
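The worked example above can be reproduced in a few lines of code. The sketch below is an illustrative addition and not part of the article: it hardwires the RS(7,3) example over GF(7), builds the linear system described above with e = 2 and e2 = 1, solves it by Gaussian elimination modulo 7, and divides Q(x) by E(x) to recover F(x). Helper names such as inv are local choices, not from any library.

p = 7                          # the field GF(7) used in the example
a = list(range(7))             # evaluation points ai = 0, 1, ..., 6
b = [1, 5, 3, 6, 3, 2, 2]      # received word with errors at positions 2 and 5
n, k, e = 7, 3, 2              # code length, dimension, assumed number of errors
qd = n - e - 1                 # degree bound of Q(x) = E(x) * F(x)

def inv(x):
    return pow(x, p - 2, p)    # modular inverse via Fermat's little theorem

# One equation per point: q0 + q1*ai + ... + q_qd*ai^qd - bi*(e0 + e1*ai) = bi*ai^e
rows, rhs = [], []
for ai, bi in zip(a, b):
    row = [pow(ai, j, p) for j in range(qd + 1)]            # unknowns q0..q_qd
    row += [(-bi * pow(ai, j, p)) % p for j in range(e)]    # unknowns e0..e_{e-1} (ee = 1)
    rows.append(row)
    rhs.append((bi * pow(ai, e, p)) % p)

# Gauss-Jordan elimination modulo p on the augmented matrix (n equations, n unknowns).
m = [row + [r] for row, r in zip(rows, rhs)]
for col in range(n):
    piv = next(r for r in range(col, n) if m[r][col])
    m[col], m[piv] = m[piv], m[col]
    f = inv(m[col][col])
    m[col] = [v * f % p for v in m[col]]
    for r in range(n):
        if r != col and m[r][col]:
            fac = m[r][col]
            m[r] = [(v - fac * w) % p for v, w in zip(m[r], m[col])]
sol = [m[r][n] for r in range(n)]
Q = sol[:qd + 1]               # coefficients of Q(x), lowest degree first
E = sol[qd + 1:] + [1]         # coefficients of E(x), with leading coefficient 1

# Long division of Q(x) by E(x) modulo p; a zero remainder means decoding succeeded.
rem, F = Q[:], [0] * (len(Q) - len(E) + 1)
for i in range(len(F) - 1, -1, -1):
    F[i] = rem[i + len(E) - 1] * inv(E[-1]) % p
    for j, ec in enumerate(E):
        rem[i + j] = (rem[i + j] - F[i] * ec) % p
assert all(c == 0 for c in rem), "nonzero remainder: uncorrectable error"

print("F coefficients (low to high):", F)   # [1, 2, 3], i.e. F(x) = 3x^2 + 2x + 1
print("corrected codeword:", [sum(c * pow(x, j, p) for j, c in enumerate(F)) % p for x in a])

Running the sketch prints the corrected codeword {1,6,3,6,1,2,2} from the received word {1,5,3,6,3,2,2}, matching the example; a general decoder would repeat the same construction with e decreasing from ⌊(n-k)/2⌋ until the system is solvable and the remainder is zero.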
Berlekamp–Welch algorithm
[ "Mathematics", "Technology", "Engineering" ]
795
[ "Discrete mathematics", "Coding theory", "Telecommunications engineering", "Reliability engineering", "Applied mathematics", "Error detection and correction", "Computer science", "Information theory" ]
31,698,050
https://en.wikipedia.org/wiki/Homomorphic%20signatures%20for%20network%20coding
Network coding has been shown to optimally use bandwidth in a network, maximizing information flow, but the scheme is inherently vulnerable to pollution attacks by malicious nodes in the network. A node injecting garbage can quickly affect many receivers. The pollution of network packets spreads quickly since the output of even an honest node is corrupted if at least one of the incoming packets is corrupted. An attacker can easily corrupt a packet even if it is encrypted by either forging the signature or producing a collision under the hash function. This will give an attacker access to the packets and the ability to corrupt them. Denis Charles, Kamal Jain and Kristin Lauter designed a new homomorphic signature scheme for use with network coding to prevent pollution attacks. The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority. In this scheme it is computationally infeasible for a node to sign a linear combination of the packets without disclosing what linear combination was used in the generation of the packet. Furthermore, we can prove that the signature scheme is secure under well-known cryptographic assumptions: the hardness of the discrete logarithm problem and of the computational elliptic-curve Diffie–Hellman problem. Network coding Let G = (V, E) be a directed graph, where V is a set whose elements are called vertices or nodes, and E is a set of ordered pairs of vertices, called arcs, directed edges, or arrows. A source wants to transmit a file to a set of the vertices. One chooses a vector space (say of dimension ), where is a prime, and views the data to be transmitted as a collection of vectors . The source then creates the augmented vectors by setting where is the -th coordinate of the vector . There are zeros before the first '1' appears in . One can assume without loss of generality that the vectors are linearly independent. We denote the linear subspace (of ) spanned by these vectors by . Each outgoing edge computes a linear combination, , of the vectors entering the vertex where the edge originates, that is to say where . We consider the source as having input edges carrying the vectors . By induction, one has that the vector on any edge is a linear combination and is a vector in . The k-dimensional vector is simply the first k coordinates of the vector . We call the matrix whose rows are the vectors , where are the incoming edges for a vertex , the global encoding matrix for and denote it as . In practice the encoding vectors are chosen at random so the matrix is invertible with high probability. Thus, any receiver, on receiving can find by solving where the are the vectors formed by removing the first coordinates of the vector . Decoding at the receiver Each receiver, , gets vectors which are random linear combinations of the ’s. In fact, if then Thus we can invert the linear transformation to find the ’s with high probability. History Krohn, Freedman and Mazieres proposed a theory in 2004 that if we have a hash function such that: is collision resistant – it is hard to find and such that ; is a homomorphism – . Then the server can securely distribute to each receiver, and to check if we can check whether The problem with this method is that the server needs to transfer secure information to each of the receivers. The hash function needs to be transmitted to all the nodes in the network through a separate secure channel. 
is expensive to compute and secure transmission of is not economical either. Advantages of homomorphic signatures Establishes authentication in addition to detecting pollution. No need for distributing secure hash digests. Smaller bit lengths in general will suffice. Signatures of length 180 bits have as much security as 1024 bit RSA signatures. Public information does not change for subsequent file transmission. Signature scheme The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority. Elliptic curves cryptography over a finite field Elliptic curve cryptography over a finite field is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. Let be a finite field such that is not a power of 2 or 3. Then an elliptic curve over is a curve given by an equation of the form where such that Let , then, forms an abelian group with O as identity. The group operations can be performed efficiently. Weil pairing Weil pairing is a construction of roots of unity by means of functions on an elliptic curve , in such a way as to constitute a pairing (bilinear form, though with multiplicative notation) on the torsion subgroup of . Let be an elliptic curve and let be an algebraic closure of . If is an integer, relatively prime to the characteristic of the field , then the group of -torsion points, . If is an elliptic curve and then There is a map such that: (Bilinear) . (Non-degenerate) for all P implies that . (Alternating) . Also, can be computed efficiently. Homomorphic signatures Let be a prime and a prime power. Let be a vector space of dimension and be an elliptic curve such that . Define as follows: . The function is an arbitrary homomorphism from to . The server chooses secretly in and publishes a point of p-torsion such that and also publishes for . The signature of the vector is Note: This signature is homomorphic since the computation of h is a homomorphism. Signature verification Given and its signature , verify that The verification crucially uses the bilinearity of the Weil-pairing. System setup The server computes for each . Transmits . At each edge while computing also compute on the elliptic curve . The signature is a point on the elliptic curve with coordinates in . Thus the size of the signature is bits (which is some constant times bits, depending on the relative size of and ), and this is the transmission overhead. The computation of the signature at each vertex requires bit operations, where is the in-degree of the vertex . The verification of a signature requires bit operations. Proof of security Attacker can produce a collision under the hash function. If given points in find and such that and Proposition: There is a polynomial time reduction from discrete log on the cyclic group of order on elliptic curves to Hash-Collision. If , then we get . Thus . We claim that and . Suppose that , then we would have , but is a point of order (a prime) thus . In other words in . This contradicts the assumption that and are distinct pairs in . Thus we have that , where the inverse is taken as modulo . If we have r > 2 then we can do one of two things. Either we can take and as before and set for > 2 (in this case the proof reduces to the case when ), or we can take and where are chosen at random from . We get one equation in one unknown (the discrete log of ). It is quite possible that the equation we get does not involve the unknown. 
However, this happens with very small probability as we argue next. Suppose the algorithm for Hash-Collision gave us that Then as long as , we can solve for the discrete log of Q. But the ’s are unknown to the oracle for Hash-Collision and so we can interchange the order in which this process occurs. In other words, given , for , not all zero, what is the probability that the ’s we chose satisfies ? It is clear that the latter probability is . Thus with high probability we can solve for the discrete log of . We have shown that producing hash collisions in this scheme is difficult. The other method by which an adversary can foil our system is by forging a signature. This scheme for the signature is essentially the Aggregate Signature version of the Boneh-Lynn-Shacham signature scheme. Here it is shown that forging a signature is at least as hard as solving the elliptic curve Diffie–Hellman problem. The only known way to solve this problem on elliptic curves is via computing discrete-logs. Thus forging a signature is at least as hard as solving the computational co-Diffie–Hellman on elliptic curves and probably as hard as computing discrete-logs. See also Network coding Homomorphic encryption Elliptic-curve cryptography Weil pairing Elliptic-curve Diffie–Hellman Elliptic Curve Digital Signature Algorithm Digital Signature Algorithm References External links Comprehensive View of a Live Network Coding P2P System Signatures for Network Coding(presentation) CISS 2006, Princeton University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra Finite fields Coding theory Information theory Error detection and correction
Homomorphic signatures for network coding
[ "Mathematics", "Technology", "Engineering" ]
1,751
[ "Discrete mathematics", "Coding theory", "Telecommunications engineering", "Reliability engineering", "Applied mathematics", "Error detection and correction", "Computer science", "Information theory" ]
31,700,261
https://en.wikipedia.org/wiki/GCaMP
GCaMP is a genetically encoded calcium indicator (GECI) initially developed in 2001 by Junichi Nakai. It is a synthetic fusion of green fluorescent protein (GFP), calmodulin (CaM), and M13, a peptide sequence from myosin light-chain kinase. When bound to Ca2+, GCaMP fluoresces green with a peak excitation wavelength of 480 nm and a peak emission wavelength of 510 nm. It is used in biological research to measure intracellular Ca2+ levels both in vitro and in vivo using virally transfected or transgenic cell and animal lines. The genetic sequence encoding GCaMP can be inserted under the control of promoters exclusive to certain cell types, allowing for cell-type specific expression of GCaMP. Since Ca2+ is a second messenger that contributes to many cellular mechanisms and signaling pathways, GCaMP allows researchers to quantify the activity of Ca2+-based mechanisms and study the role of Ca2+ ions in biological processes of interest. Structure GCaMP consists of three key domains: an M13 domain at the N-terminus, a calmodulin (CaM) domain at the C-terminus, and a GFP domain in the center. The GFP domain is circularly permuted such that the native N- and C-termini are fused together by a six-amino-acid linking sequence, and the GFP sequence is split in the middle, creating new N- and C-termini that connect to the M13 and CaM domains. In the absence of Ca2+, the GFP chromophore is exposed to water and exists in a protonated state with minimal fluorescence intensity. Upon Ca2+ binding, the CaM domain undergoes a conformational change and tightly binds to the M13 domain alpha helix, preventing water molecules from accessing the chromophore. As a result, the chromophore rapidly deprotonates and converts into an anionic form that fluoresces brightly, similar to native GFP. History and development In 2001, Nakai et al. reported the development of GCaMP1 as a Ca2+ probe with improved signal-to-noise ratio compared to previously developed fluorescent Ca2+ probes. The first transgenic mouse expressing GCaMP1 was reported in 2004. However, at 37 ˚C (physiological temperature in mammals), GCaMP1 did not fold stably or fluoresce, limiting its potential use as a calcium indicator in vivo. In 2006, Tallini et al. subsequently reported the improvement of GCaMP1 to GCaMP2, which exhibited brighter fluorescence than GCaMP1 and greater stability at mammalian body temperatures. Tallini et al. expressed GCaMP2 in cardiomyocytes in mouse embryos to perform the first in vivo GCaMP imaging of Ca2+ in mammals. Further modifications of GCaMP, including GCaMP3, GCaMP5, GCaMP6, and jGCaMP7, have been developed to progressively improve the signal, sensitivity, and dynamic range of Ca2+ detection, with recent versions exhibiting fluorescence similar to native GFP. Variants in use Both slow variants (GCaMP6s, jGCaMP7s) and fast variants (GCaMP6f, jGCaMP7f) are used in biological and neuroscience research. The slow variants are brighter and more sensitive to small changes in Ca2+ levels, such as single action potentials; on the other hand, the fast variants are less sensitive but respond more quickly, making them useful for tracking changes in Ca2+ levels over precise timescales. GCaMP6 also has a medium variant, GCaMP6m, whose kinetics are intermediate between GCaMP6s and GCaMP6f. 
Other variants of jGCaMP7 are also employed: jGCaMP7b exhibits bright baseline fluorescence and is used for imaging dendrites and axons, while jGCaMP7c exhibits greater contrast between maximal and baseline fluorescence and is advantageous for imaging large populations of neurons. In 2018, Yang et al. reported the development of GCaMP-X, generated by the addition of a calmodulin-binding motif. Since the GCaMP calmodulin domain, when unbound, disrupts L-type calcium channel gating, the added calmodulin-binding motif prevents GCaMP-X from interfering with calcium-dependent signaling mechanisms. In 2020, Zhang et al. reported the development of jGCaMP8, including sensitive, medium, and fast variants, which exhibit faster kinetics and greater sensitivity than the corresponding jGCaMP7 variants. Red fluorescent indicators have also been developed: jRCaMP1a and jRCaMP1b use a circular permutation of the red fluorescent protein mRuby instead of GFP, while jRGECO1a is based on the red fluorescent protein mApple. Since the blue light used to excite GCaMP is scattered by tissue and the emitted green light is absorbed by blood, red fluorescent indicators provide more penetration and imaging depth in vivo than GCaMP. Use of red fluorescent indicators also avoids the photodamage caused by blue excitation light. Moreover, red fluorescent indicators allow for concurrent use of optogenetics, which is difficult with GCaMP because the excitation wavelengths of GCaMP overlap with those of channelrhodopsin-2 (ChR2). Simultaneous use of red and green GECIs can provide two-color visualization of different subcellular regions or cell populations. Applications in research Neuronal activity In neurons, action potentials induce neurotransmitter release at axon terminals by opening voltage-gated Ca2+ channels, allowing for Ca2+ influx. As a result, GCaMP is commonly used to measure increases in intracellular Ca2+ in neurons as a proxy for neuronal activity in multiple animal models, including Caenorhabditis elegans, zebrafish, Drosophila, and mice. Recently, genetically encoded voltage indicators (GEVIs) have been developed alongside GECIs to more directly probe neuronal activity at the cellular level in these animal models. GCaMP has played a vital role in establishing large-scale neural recordings in animals to investigate how activity patterns in neuronal networks influence behavior. For example, Nguyen et al. (2016) used GCaMP in whole-brain imaging during free movement of C. elegans to identify neurons and groups of neurons whose activity correlated with specific locomotor behaviors. Muto et al. (2003) expressed GCaMP in zebrafish embryos to measure and map the coordinated activity of spinal motor neurons to different parts of the brain during the onset, propagation, and recovery of seizures induced by pentylenetetrazol. GCaMP expression in zebrafish brains has also been used to study activation of neural circuits in cognitive processes like prey capture, impulse control, and attention. Additionally, researchers have used GCaMP to observe neuronal activity in mice by expressing it under control of the Thy1 promoter, which is found in excitatory pyramidal neurons. For instance, integration of neurons into circuits during motor learning has been tracked by using GCaMP to observe synchronized fluctuation patterns in Ca2+ levels. 
GCaMP has also been used to observe Ca2+ dynamics in subcellular compartments of mouse neurons: Cichon and Gan (2015) used GCaMP to show that neurons in the mouse motor cortex exhibit NMDA-driven increases in Ca2+ that are independent for each dendritic spine, thus showing that individual dendritic spines regulate synaptic plasticity. Finally, GCaMP has been used to identify activity patterns in specific regions of the mouse brain. For instance, Jones et al. (2018) used GCaMP6 in mice to measure neuronal activity in the suprachiasmatic nucleus (SCN), the mammalian circadian pacemaker, and showed that SCN neurons that produced vasoactive intestinal peptide (VIP) exhibited daily activity rhythms in vivo that correlated with VIP release. GCaMP has also been combined with fiber photometry to measure population-level Ca2+ changes within subpopulations of neurons in freely moving animals. For instance, Clarkson et al. (2017) used this method to show that neurons in the arcuate nucleus of the hypothalamus synchronize to increases in Ca2+ immediately prior to pulses of luteinizing hormone (LH). While GCaMP imaging with fiber photometry cannot track changes in Ca2+ levels within individual neurons, it provides greater temporal resolution for large-scale changes. Cardiac conduction Ca2+ currents through cardiomyocyte gap junctions mediate synchronized contraction of cardiac tissue. As a result, GCaMP expression in cardiomyocytes, both in vitro and in vivo, has been used to study Ca2+-influx-dependent excitation and contraction in zebrafish and mice. For instance, Tallini et al. (2006) expressed GCaMP2 in mouse embryos to show that, at embryonic day 10.5, electrical conduction was rapid in the atria and ventricles but slow in the atrioventricular canal. Chi et al. (2008) used a transgenic cardiac-specific GCaMP zebrafish line to image cardiomyocyte activation throughout the cardiac cycle; from their results, they characterized four developmental stages of the zebrafish cardiac conduction system and identified 17 novel mutations affecting cardiac conduction. However, uncontrolled expression of GCaMP leads to cardiac hypertrophy due to overexpression of the calmodulin motif, which interferes with intracellular calcium signaling. As a result, experiments using cardiac tissue should carefully control the level of GCaMP expression. Signaling pathway activation Since Ca2+ is a common second messenger, GCaMP has been used to monitor the activation of signaling pathways. For instance, Bonder and McCarthy (2014) used GCaMP to show that astrocytic G-protein coupled receptor (GPCR) signaling and subsequent Ca2+ release was not responsible for neurovascular coupling, the process by which changes in neuronal activity lead to changes in local blood flow. Similarly, Greer and Bear et al. (2016) used GCaMP to characterize the dynamics of Ca2+ influx in necklace olfactory neuron signaling, which uses transmembrane MS4A proteins as chemoreceptors. See also Calcium imaging Calcium in biology Calmodulin Cameleon (protein) Green fluorescent protein Myosin light-chain kinase References Sensors Proteins Biochemistry methods Cell imaging Calcium Calcium signaling
GCaMP
[ "Chemistry", "Technology", "Engineering", "Biology" ]
2,164
[ "Biochemistry methods", "Biomolecules by chemical classification", "Measuring instruments", "Signal transduction", "Calcium signaling", "Microscopy", "Biochemistry", "Proteins", "Sensors", "Cell imaging", "Molecular biology" ]
31,703,828
https://en.wikipedia.org/wiki/Mott%E2%80%93Bethe%20formula
The Mott–Bethe formula is an approximation used to calculate atomic electron scattering form factors, fe(q), from atomic X-ray scattering form factors, fx(q). The formula was derived independently by Hans Bethe and Nevill Mott, both in 1930, and simply follows from applying the first Born approximation for the scattering of electrons via the Coulomb interaction together with the Poisson equation for the charge density of an atom (including both the nucleus and electron cloud) in the Fourier domain. Following the first Born approximation, the electron scattering factor is proportional to (Z − fx(q))/q^2, with a prefactor built from the physical constants listed below. Here, q is the magnitude of the scattering vector (the momentum transfer) in reciprocal space (in units of inverse distance), Z is the atomic number of the atom, h is the Planck constant, ε0 is the vacuum permittivity, me is the electron rest mass, a0 is the Bohr radius, and fx(q) is the dimensionless X-ray scattering form factor for the electron density. The electron scattering factor fe(q) has units of length, as is typical for the scattering factor, unlike the X-ray form factor fx(q), which is usually presented in dimensionless units. To perform a one-to-one comparison between the electron and X-ray form factors in the same units, the X-ray form factor should be multiplied by the square root of the Thomson cross section, which is set by the classical electron radius re, to convert it back to a unit of length. The Mott–Bethe formula was originally derived for free atoms, and is rigorously true provided the X-ray scattering form factor is known exactly. However, in solids, the accuracy of the Mott–Bethe formula is best for large values of q, because the distribution of the charge density at smaller q (i.e. long distances) can deviate from the atomic distribution of electrons due to the chemical bonds between atoms in a solid. For smaller values of q, fe(q) can be determined from tabulated values, such as those in the International Tables for Crystallography, using (non)relativistic Hartree–Fock calculations, or other numerical parameterizations of the calculated charge distribution of atoms. References Atomic physics Scattering theory
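As a numerical illustration (not part of the article), the sketch below converts a tabulated X-ray form factor value into an electron scattering factor. It assumes the convention in which q is the magnitude of the scattering vector with q = 4π sin(θ)/λ; in that convention the first Born approximation and the Poisson equation give fe(q) = 2 (Z − fx(q)) / (a0 q^2), and the numerical prefactor would change under a different q convention. The function name and the example numbers are placeholders.

BOHR_RADIUS = 0.52917721  # Bohr radius a0 in angstroms

def mott_bethe(q, Z, f_x):
    # Electron scattering factor (in angstroms) from the X-ray form factor f_x
    # at scattering vector magnitude q (in inverse angstroms), for atomic number Z.
    return 2.0 * (Z - f_x) / (BOHR_RADIUS * q ** 2)

# Placeholder usage: f_x would come from a tabulation such as the International
# Tables for Crystallography; the value below is illustrative, not a tabulated number.
print(mott_bethe(q=2.0, Z=6, f_x=4.2))

Because the conversion divides by q^2, it is numerically delicate at small q, which is consistent with the remark above that tabulated or directly calculated values of fe are preferred in that regime.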
Mott–Bethe formula
[ "Physics", "Chemistry" ]
417
[ "Scattering theory", "Quantum mechanics", "Scattering", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
31,707,735
https://en.wikipedia.org/wiki/Generalized%20minimum-distance%20decoding
In coding theory, generalized minimum-distance (GMD) decoding provides an efficient algorithm for decoding concatenated codes, which is based on using an errors-and-erasures decoder for the outer code. A naive decoding algorithm for concatenated codes cannot be an optimal way of decoding because it does not take into account the information that maximum likelihood decoding (MLD) gives. In other words, in the naive algorithm, inner received codewords are treated the same regardless of the differences in their Hamming distances. Intuitively, the outer decoder should place higher confidence in symbols whose inner encodings are close to the received word. David Forney in 1966 devised a better algorithm, called generalized minimum distance (GMD) decoding, which makes better use of that information. This is achieved by measuring the confidence of each received codeword and erasing symbols whose confidence is below a desired value. The GMD decoding algorithm was one of the first examples of a soft-decision decoder. We will present three versions of the GMD decoding algorithm. The first two will be randomized algorithms while the last one will be a deterministic algorithm. Setup Hamming distance : Given two vectors the Hamming distance between and , denoted by , is defined to be the number of positions in which and differ. Minimum distance: Let be a code. The minimum distance of code is defined to be where Code concatenation: Given , consider two codes which we call outer code and inner code and their distances are and . A concatenated code can be achieved by where Finally we will take to be RS code, which has an errors and erasure decoder, and , which in turn implies that MLD on the inner code will be polynomial in time. Maximum likelihood decoding (MLD): MLD is a decoding method for error correcting codes, which outputs the codeword closest to the received word in Hamming distance. The MLD function denoted by is defined as follows. For every . Probability density function : A probability distribution on a sample space is a mapping from events of to real numbers such that for any event , and for any two mutually exclusive events and Expected value: The expected value of a discrete random variable is Randomized algorithm Consider the received word which was corrupted by a noisy channel. The following is the algorithm description for the general case. In this algorithm, we can decode y by just declaring an erasure at every bad position and running the errors and erasure decoding algorithm for on the resulting vector. Randomized_Decoder Given : . For every , compute . Set . For every , repeat : With probability , set otherwise set . Run errors and erasure algorithm for on . Theorem 1. Let y be a received word such that there exists a codeword such that . Then the deterministic GMD algorithm outputs . Note that a naive decoding algorithm for concatenated codes can correct up to errors. Lemma 1. Let the assumption in Theorem 1 hold. And if has errors and erasures (when compared with ) after Step 1, then Remark. If , then the algorithm in Step 2 will output . The lemma above says that in expectation, this is indeed the case. Note that this is not enough to prove Theorem 1, but can be crucial in developing future variations of the algorithm. Proof of Lemma 1. 
For every define This implies that Next for every , we define two indicator variables: We claim that we are done if we can show that for every : Clearly, by definition Further, by the linearity of expectation, we get To prove (2) we consider two cases: -th block is correctly decoded (Case 1), -th block is incorrectly decoded (Case 2): Case 1: Note that if then , and implies and . Further, by definition we have Case 2: In this case, and Since . This follows another case analysis when or not. Finally, this implies In the following sections, we will finally show that the deterministic version of the algorithm above can do unique decoding of up to half its design distance. Modified randomized algorithm Note that, in the previous version of the GMD algorithm in step "3", we do not really need to use "fresh" randomness for each . Now we come up with another randomized version of the GMD algorithm that uses the same randomness for every . This idea follows the algorithm below. Modified_Randomized_Decoder Given : , pick at random. Then every for every : Set . Compute . If , set otherwise set . Run errors and erasure algorithm for on . For the proof of Lemma 1, we only use the randomness to show that In this version of the GMD algorithm, we note that The second equality above follows from the choice of . The proof of Lemma 1 can be also used to show for version2 of GMD. In the next section, we will see how to get a deterministic version of the GMD algorithm by choosing from a polynomially sized set as opposed to the current infinite set . Deterministic algorithm Let . Since for each , we have where for some . Note that for every , the step 1 of the second version of randomized algorithm outputs the same . Thus, we need to consider all possible value of . This gives the deterministic algorithm below. Deterministic_Decoder Given : , for every , repeat the following. Compute for . Set for every . If , set otherwise set . Run errors-and-erasures algorithm for on . Let be the codeword in corresponding to the output of the algorithm, if any. Among all the output in 4, output the one closest to Every loop of 1~4 can be run in polynomial time, the algorithm above can also be computed in polynomial time. Specifically, each call to an errors and erasures decoder of errors takes time. Finally, the runtime of the algorithm above is where is the running time of the outer errors and erasures decoder. See also Concatenated codes Reed Solomon error correction Welch Berlekamp algorithm References University at Buffalo Lecture Notes on Coding Theory – Atri Rudra MIT Lecture Notes on Essential Coding Theory – Madhu Sudan University of Washington – Venkatesan Guruswami G. David Forney. Generalized Minimum Distance decoding. IEEE Transactions on Information Theory, 12:125–131, 1966 Error detection and correction Coding theory Finite fields Information theory
Generalized minimum-distance decoding
[ "Mathematics", "Technology", "Engineering" ]
1,319
[ "Discrete mathematics", "Coding theory", "Telecommunications engineering", "Reliability engineering", "Applied mathematics", "Error detection and correction", "Computer science", "Information theory" ]
35,910,532
https://en.wikipedia.org/wiki/Ing%C3%A9nieur%20des%20%C3%A9tudes%20et%20de%20l%27exploitation%20de%20l%27aviation%20civile
The IEEAC is the corps of the Ingénieur des études et de l'exploitation de l'aviation civile (in English Civil Aviation Operations Engineer). It is the sixth corps of the French Directorate General for Civil Aviation (DGAC) by size, with 777 IEEAC out of 13,076 agents as of 1 July 2011. Application The application process is by a competitive examination. It is for students of classes préparatoires aux grandes écoles. Also, air traffic controllers, air traffic safety electronics personnel and Techniciens supérieurs des études et de l'exploitation de l'aviation civile with more than 10 years professional experience can integrate into the IEEAC corps with another application process. Career This corps has five grades (in descending order): IEEAC primary class 1: 2 steps. IEEAC main class 2: 8 steps. IEEAC normal class: 11 steps. IEEAC internship: 1 step. IEEAC student: 2 steps. Training The third-year training of IEEAC is performed at the École nationale de l'aviation civile (French civil aviation university) of Toulouse after a competitive examination. Students graduate with the Diplôme d'ingénieur ENAC (ENAC engineer degree) recognized by the Commission des Titres d'Ingénieur before joining the DGAC or Bureau d'Enquêtes et d'Analyses pour la Sécurité de l'Aviation Civile. Job The IEEAC corps work in many technical and economical arenas such as air transport, air navigation, and safety of civil aviation in Metropolitan France or in overseas department. They perform testing, operating, research, coaching or teaching missions. Distribution They work for the DGAC as well as for Bureau d'Enquêtes et d'Analyses pour la Sécurité de l'Aviation Civile. Income IEEAC students have an income of 2,400 euros per month during the third-year training at ENAC. When they start their career, they have an income of 59,000 euros per year. See also French Civil Service References Appendices Bibliography Ariane Gilotte, Jean-Philippe Husson et Cyril Lazerge, 50 ans d'Énac au service de l'aviation, Édition S.E.E.P.P, 1999 External links ENAC's Engineers Air traffic control in France École nationale de l'aviation civile Aviation licenses and certifications Professional certification in engineering
Ingénieur des études et de l'exploitation de l'aviation civile
[ "Engineering" ]
511
[ "École nationale de l'aviation civile", "Aerospace engineering organizations" ]
35,911,437
https://en.wikipedia.org/wiki/Biotechnology%20Regulatory%20Authority%20of%20India
The Biotechnology Regulatory Authority of India (BRAI) is a proposed regulatory body in India for uses of biotechnology products including genetically modified organisms (GMOs). The body was first suggested under the Biotechnology Regulatory Authority of India (BRAI) draft bill prepared by the Department of Biotechnology in 2008. Since then, it has undergone several revisions. The bill has faced opposition from farmer groups and anti-GMO activists. Overview On 23 January 2003, India ratified the Cartagena Protocol, which protects biodiversity from potential risks of genetically modified organisms, the products of modern biotechnology. The protocol requires the setting up of a regulatory body. Currently, the Genetic Engineering Approvals Committee, a body under the Ministry of Environment and Forests (India), is responsible for the approval of genetically engineered products in India. If the bill is passed, the responsibility will be taken over by the Environment Appraisal Panel, a sub-division of the BRAI. According to the bill, BRAI will have a chairperson, two full-time members and two part-time members; all will be required to have expertise in life sciences and biotechnology in agriculture, health care, environment and general biology. The bill also proposes setting up an inter-ministerial governing body, to oversee the performance of BRAI, and a National Biotechnology Advisory Council of stakeholders to provide feedback on the use of biotechnology products and organisms in society. The regulatory body will be an autonomous and statutory agency to regulate the research, transport, import, and manufacture of biotechnology products and organisms. Criticism Suman Sahai, founder of the Gene Campaign, has called the bill flawed. According to her, the bill proposes new institutes without clearly defining their powers and responsibilities. She has also stated that the bill was introduced without consulting the people who will be affected by it. P. M. Bhargava, founder of the Centre for Cellular and Molecular Biology, has also opposed the bill. He has called the bill unconstitutional, as agricultural policy is the domain of state governments. He pointed out that the bill proposes the formation of several subdivisions and has argued that they will consist of bureaucrats with no scientific knowledge. He has accused the Department of Biotechnology, which will be involved in the selection of members, of being a promoter of genetic technology in India. He has pointed out that the broadly defined term "confidential commercial information" has been kept outside the purview of the Right to Information Act. He has stated that the bill uses vague wording which would criminalize the sequencing or isolation of DNA and PCR techniques, requiring approval for each use and thus hindering research and education. He pointed out that the bill has no provision for mandatory labelling of GM foods. He criticized giving the body power to punish parties making false or misleading statements about GM crops, calling it unprecedented. In September 2010, Jairam Ramesh, then Environment Minister, pointed out that the body only deals with the safety and efficacy of biotechnology products. The issue of commercialization has been left unaddressed. The decisions regarding commercialization can fall under the purview of the Ministry of Environment and Forests, Ministry of Health, Ministry of Agriculture, or Department of Science and Technology. On the other hand, the Association of Biotechnology Led Enterprises (ABLE) has supported the bill. J.S. 
Rehman, an entomologist and a former member of the Review Committee on Genetic Manipulation, has stated that most protesters associate genetic engineering with Monsanto, as a result development of Indian biotech is being hindered. See also Regulation of the release of genetic modified organisms Bt brinjal Genetically modified food controversies BT cotton Anti GM v/s Pro GM References Further reading Proposed laws of India Biotechnology in India Regulators of biotechnology products Life sciences industry Regulatory agencies of India
Biotechnology Regulatory Authority of India
[ "Biology" ]
747
[ "Life sciences industry", "Biotechnology products", "Regulation of biotechnologies", "Biotechnology by country", "Regulators of biotechnology products", "Biotechnology in India" ]
35,913,663
https://en.wikipedia.org/wiki/Journal%20of%20Mechanics%20of%20Materials%20and%20Structures
The Journal of Mechanics of Materials and Structures is a peer-reviewed scientific journal covering research on the mechanics of materials and deformable structures of all types. It was established by Charles R. Steele, who was also the first editor-in-chief. History The journal was established in 2006 after 21 of the 23 members of the editorial board of the International Journal of Solids and Structures resigned in protest of Elsevier's "pressure for increased profits out of the limited institutional resources." In their founding issue, the editors of the new journal indicated several desires for the publication, including, "a low subscription price that will not grow faster than the number of pages and indeed may drop as the subscriber base expands." Abstracting and indexing The journal is abstracted and indexed in Current Contents/Engineering, Computing & Technology, Ei Compendex, Science Citation Index Expanded, and Scopus. According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.987. References External links Materials science journals English-language journals Academic journals established in 2006 Mathematical Sciences Publishers academic journals 5 times per year journals
Journal of Mechanics of Materials and Structures
[ "Materials_science", "Engineering" ]
230
[ "Materials science journals", "Materials science" ]
21,630,108
https://en.wikipedia.org/wiki/Driver%27s%20manual
A driver's manual is a book created by the DMV of a corresponding state in order to give information to people about the state's driving laws. This can include information such as how to get a license, license renewal, road laws, driving restrictions, etc. "In the U.S. there is no central organization that is responsible for the creation of Driver's Manuals." (Idaho Driver's Manual). As a result, there is no set of rules for the states to create the manuals, so all driver's manuals vary by state. However, every state does still follow general guidelines when creating the manuals. The beginning of every manual starts with how to get a driver's license. It informs potential drivers about what types of identification are needed, as well as the eligibility requirements necessary to get a license. In most states, you "must provide documentary proof of their full legal name, age, Social Security number, citizenship, or legal presence and address." (Ohio Driver's Manual). In all states there is a minimum age requirement for getting a driver's permit, which later leads into receiving a full driver's license. This age limit varies by state. "The person must also be in good general health, and can have good vision with or without glasses or contacts."(New Jersey Driver's Manual). There is also usually a payment fee in order to receive your license. Along with getting a license, all states also offer voter registration and becoming an organ donor when applying for your license. Every state requires taking a written test to receive your driver's permit. Every state also requires a driver's test that you must pass in order to get your license. However, only a few of the states' manuals actually go into detail about what exactly they will test you on for the driving test. All manuals proceed to talk about the specifics of how to drive and the rules of the road. Every manual includes a section that goes into detail about car and driver safety. All states require vehicle inspection, but only some require annual inspection. Driving while intoxicated is illegal in the United States. Almost all states have a "minimum blood alcohol level while driving of .08%" (Kentucky Driver's Manual). For seat belts, 49 states and the District of Columbia have passed laws requiring seat belt use by at least all occupants of the front seat. New Hampshire is the only state with no such requirement for adults. However, in all states anyone under the age of 18 is required to wear a seat belt. Vehicles must always make way for emergency vehicles. See also The Highway Code, the equivalent guide in the United Kingdom Malta's The Highway Code, the equivalent guide in Malta Road Users' Code, the equivalent guide in Hong Kong References Automotive safety Road user guides
Driver's manual
[ "Physics" ]
582
[ "Physical systems", "Transport", "Transport stubs" ]
21,631,514
https://en.wikipedia.org/wiki/Late%20protein
A late protein is a viral protein that is formed after replication of the virus. One example is VP4 from simian virus 40 (SV40). In Human papillomaviruses In Human papillomavirus (HPV), two late proteins are involved in capsid formation: a major (L1) and a minor (L2) protein, in the approximate proportion 95:5%. L1 forms a pentameric assembly unit of the viral shell in a manner that closely resembles VP1 from polyomaviruses. Intermolecular disulphide bonding holds the L1 capsid proteins together. L1 capsid proteins can bind via its nuclear localisation signal (NLS) to karyopherins Kapbeta(2) and Kapbeta(3) and inhibit the Kapbeta(2) and Kapbeta(3) nuclear import pathways during the productive phase of the viral life cycle. Surface loops on L1 pentamers contain sites of sequence variation between HPV types. L2 minor capsid proteins enter the nucleus twice during infection: in the initial phase after virion disassembly, and in the productive phase when it assembles into replicated virions along with L1 major capsid proteins. L2 proteins contain two nuclear localisation signals (NLSs), one at the N-terminal (nNLS) and the other at the C-terminal (cNLS). L2 uses its NLSs to interact with a network of karyopherins in order to enter the nucleus via several import pathways. L2 from HPV types 11 and 16 was shown to interact with karyopherins Kapbeta(2) and Kapbeta(3). L2 capsid proteins can also interact with viral dsDNA, facilitating its release from the endocytic compartment after viral uncoating. See also Early protein References Protein families Protein domains Proteins Viral protein class
Late protein
[ "Chemistry", "Biology" ]
411
[ "Biomolecules by chemical classification", "Protein classification", "Protein domains", "Molecular biology", "Proteins", "Protein families" ]
21,632,928
https://en.wikipedia.org/wiki/Trichosanthin
Trichosanthin is a ribosome-inactivating protein. It is derived from Trichosanthes kirilowii. It is also an abortifacient. References External links Proteins Ribosome-inactivating proteins
Trichosanthin
[ "Chemistry" ]
52
[ "Biomolecules by chemical classification", "Pharmacology", "Medicinal chemistry stubs", "Molecular biology", "Proteins", "Pharmacology stubs" ]
21,633,417
https://en.wikipedia.org/wiki/Alisporivir
Alisporivir (INN), also known as Debio 025, DEB025, or UNIL-025, is a cyclophilin inhibitor. Its structure is reminiscent of ciclosporin, from which it is synthesized. It inhibits cyclophilin A. Alisporivir is not immunosuppressive. It is being researched for potential use in the treatment of hepatitis C. It has also been investigated for Duchenne muscular dystrophy and may have therapeutic potential in Alzheimer's disease. Since February 2010, alisporivir has been under development by Debiopharm for Japan and by Novartis (under a licence granted by Debiopharm) for the rest of the world. References Antiviral drugs Peptides Orphan drugs
Alisporivir
[ "Chemistry", "Biology" ]
161
[ "Biomolecules by chemical classification", "Antiviral drugs", "Molecular biology", "Biocides", "Peptides" ]
21,633,976
https://en.wikipedia.org/wiki/Aldo-keto%20reductase
The aldo-keto reductase family is a family of proteins that are subdivided into 16 categories; these include a number of related monomeric NADPH-dependent oxidoreductases, such as aldehyde reductase, aldose reductase, prostaglandin F synthase, xylose reductase, rho crystallin, and many others. Structure All possess a similar structure, with a beta-alpha-beta fold characteristic of nucleotide binding proteins. The fold comprises a parallel beta-8/alpha-8-barrel, which contains a novel NADP-binding motif. The binding site is located in a large, deep, elliptical pocket in the C-terminal end of the beta sheet, the substrate being bound in an extended conformation. The hydrophobic nature of the pocket favours aromatic and apolar substrates over highly polar ones. Binding of the NADPH coenzyme causes a massive conformational change, reorienting a loop, effectively locking the coenzyme in place. This binding is more similar to FAD- than to NAD(P)-binding oxidoreductases. Examples Some proteins of this family contain a potassium channel beta chain regulatory domain; these are reported to have oxidoreductase activity. See also AKR1 Steroidogenic enzyme References Protein domains Protein families EC 1.1
Aldo-keto reductase
[ "Biology" ]
283
[ "Protein families", "Protein domains", "Protein classification" ]
21,640,037
https://en.wikipedia.org/wiki/Gonidium
A gonidium (plural gonidia) is an asexual reproductive cell or group of cells, especially in algae. References Algal anatomy
Gonidium
[ "Biology" ]
32
[ "Algae stubs", "Algae" ]
21,641,229
https://en.wikipedia.org/wiki/Drug-induced%20pruritus
Drug-induced pruritus is generalized itchiness of the skin caused by medication. Signs and symptoms Depending on the causative agent, symptoms may start acutely, resolve when the drug is stopped, or develop into a chronic pruritus that lasts longer than six weeks. Causes A common anti-malarial medication, chloroquine, may cause pruritus for unknown reasons. Other antimalarials such as amodiaquine, halofantrine, and hydroxychloroquine have also been linked to pruritus, albeit less frequently and to a lesser extent. Serotonin reuptake inhibitors are another class of medications that occasionally cause itching. Itching is one of the most frequent adverse effects of opioid therapy. A common artificial colloid used in clinical fluid management is hydroxyethyl starch (HES). Well-defined side effects, such as coagulopathy, clinical bleeding, anaphylactoid reactions, and pruritus, can make using HES more difficult. Epidemiology Itching was reported in thirty-three percent of 3,671 cases of cutaneous adverse drug reactions. See also Pruritus List of cutaneous conditions References External links DermNet Pruritic skin conditions Drug-induced diseases
Drug-induced pruritus
[ "Chemistry" ]
282
[ "Drug-induced diseases", "Drug safety" ]
26,008,674
https://en.wikipedia.org/wiki/Senftleben%E2%80%93Beenakker%20effect
The Senftleben–Beenakker effect is the dependence on a magnetic or electric field of transport properties (such as viscosity and heat conductivity) of polyatomic gases. The effect is caused by the precession of the (magnetic or electric) dipole of the gas molecules between collisions. The resulting rotation of the molecule averages out the nonspherical part of the collision cross-section, if the field is large enough that the precession time is short compared to the time between collisions (this requires a very dilute gas). The change in the collision cross-section, in turn, can be measured as a change in the transport properties. The magnetic field dependence of the transport properties can also include a transverse component; for example, a heat flow perpendicular to both temperature gradient and magnetic field. This is the molecular analogue of the Hall effect and Righi–Leduc effect for electrons. A key difference is that the gas molecules are neutral, unlike the electrons, so the magnetic field exerts no Lorentz force. An analogous magnetotransverse heat conductivity has been discovered for photons and phonons. The Senftleben–Beenakker effect owes its name to the physicists Hermann Senftleben (Münster University, Germany) and Jan Beenakker (Leiden University, The Netherlands), who discovered it, respectively, for paramagnetic gases (such as NO and O2) and diamagnetic gases (such as N2 and CO). The change in the transport properties is smaller in a diamagnetic gas, because the magnetic moment is not intrinsic (as it is in a paramagnetic gas), but induced by the rotation of a nonspherical molecule. The importance of the effect is that it provides information on the angular dependence of the intermolecular potential. The theory to extract that information from transport measurements is based on the Waldmann–Snider equation (a quantum mechanical version of the Boltzmann equation for gases with rotating molecules). The entire field is reviewed in a two-volume monograph. See also Kinetic theory Thermal Hall effect References External links Historical remarks on the experiment by Jan J. M. Beenakker. Historical remarks on the theory by Siegfried Hess (a student of Ludwig Waldmann). Gases
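To make the criterion "precession time short compared to the time between collisions" concrete, here is a rough order-of-magnitude sketch. The rotational g-factor, collision cross-section, gas, pressure, and field strength below are illustrative assumptions chosen by the editor, not values taken from the text.

```python
import math

# Order-of-magnitude estimate of when the Senftleben-Beenakker effect saturates:
# the molecular precession frequency omega = g_r * mu_N * B / hbar should exceed
# the collision rate nu = n * sigma * v_rel.  All numbers below are assumed,
# illustrative values for a diamagnetic gas such as N2.

k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s
mu_N = 5.0507837e-27    # nuclear magneton, J/T

T = 300.0               # temperature, K
p = 100.0               # pressure, Pa (dilute gas)
m = 28 * 1.6605390e-27  # molecular mass of N2, kg (assumed example gas)
sigma = 4.0e-19         # assumed kinetic collision cross-section, m^2
g_r = 0.25              # assumed rotational g-factor (order of magnitude only)
B = 1.0                 # applied magnetic field, T

n = p / (k_B * T)                              # number density
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))
nu_coll = n * sigma * math.sqrt(2) * v_mean    # mean collision rate
omega_prec = g_r * mu_N * B / hbar             # precession angular frequency

print(f"collision rate  ~ {nu_coll:.3e} 1/s")
print(f"precession rate ~ {omega_prec:.3e} rad/s")
print(f"omega * tau     ~ {omega_prec / nu_coll:.2f}")
# The transport coefficients change appreciably once omega*tau ~ 1,
# which is why the magnitude of the effect is governed by the ratio B/p.
```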
Senftleben–Beenakker effect
[ "Physics", "Chemistry" ]
479
[ "Statistical mechanics", "Gases", "Phases of matter", "Matter" ]
26,014,321
https://en.wikipedia.org/wiki/Tolerance%20relation
In universal algebra and lattice theory, a tolerance relation on an algebraic structure is a reflexive symmetric relation that is compatible with all operations of the structure. Thus a tolerance is like a congruence, except that the assumption of transitivity is dropped. On a set, an algebraic structure with empty family of operations, tolerance relations are simply reflexive symmetric relations. A set that possesses a tolerance relation can be described as a tolerance space. Tolerance relations provide a convenient general tool for studying indiscernibility/indistinguishability phenomena. The importance of those for mathematics had been first recognized by Poincaré. Definitions A tolerance relation on an algebraic structure is usually defined to be a reflexive symmetric relation on that is compatible with every operation in . A tolerance relation can also be seen as a cover of that satisfies certain conditions. The two definitions are equivalent, since for a fixed algebraic structure, the tolerance relations in the two definitions are in one-to-one correspondence. The tolerance relations on an algebraic structure form an algebraic lattice under inclusion. Since every congruence relation is a tolerance relation, the congruence lattice is a subset of the tolerance lattice , but is not necessarily a sublattice of . As binary relations A tolerance relation on an algebraic structure is a binary relation on that satisfies the following conditions. (Reflexivity) for all (Symmetry) if then for all (Compatibility) for each -ary operation and , if for each then . That is, the set is a subalgebra of the direct product of two . A congruence relation is a tolerance relation that is also transitive. As covers A tolerance relation on an algebraic structure is a cover of that satisfies the following three conditions. For every and , if , then . In particular, no two distinct elements of are comparable. (To see this, take .) For every , if is not contained in any set in , then there is a two-element subset such that is not contained in any set in . For every -ary and , there is a such that . (Such a need not be unique.) Every partition of satisfies the first two conditions, but not conversely. A congruence relation is a tolerance relation that also forms a set partition. Equivalence of the two definitions Let be a tolerance binary relation on an algebraic structure . Let be the family of maximal subsets such that for every . Using graph theoretical terms, is the set of all maximal cliques of the graph . If is a congruence relation, is just the quotient set of equivalence classes. Then is a cover of and satisfies all the three conditions in the cover definition. (The last condition is shown using Zorn's lemma.) Conversely, let be a cover of and suppose that forms a tolerance on . Consider a binary relation on for which if and only if for some . Then is a tolerance on as a binary relation. The map is a one-to-one correspondence between the tolerances as binary relations and as covers whose inverse is . Therefore, the two definitions are equivalent. A tolerance is transitive as a binary relation if and only if it is a partition as a cover. Thus the two characterizations of congruence relations also agree. Quotient algebras over tolerance relations Let be an algebraic structure and let be a tolerance relation on . Suppose that, for each -ary operation and , there is a unique such that Then this provides a natural definition of the quotient algebra of over . 
In the case of congruence relations, the uniqueness condition always holds true and the quotient algebra defined here coincides with the usual one. A main difference from congruence relations is that for a tolerance relation the uniqueness condition may fail, and even if it does not, the quotient algebra may not inherit the identities defining the variety that belongs to, so that the quotient algebra may fail to be a member of the variety again. Therefore, for a variety of algebraic structures, we may consider the following two conditions. (Tolerance factorability) for any and any tolerance relation on , the uniqueness condition is true, so that the quotient algebra is defined. (Strong tolerance factorability) for any and any tolerance relation on , the uniqueness condition is true, and . Every strongly tolerance factorable variety is tolerance factorable, but not vice versa. Examples Sets A set is an algebraic structure with no operations at all. In this case, tolerance relations are simply reflexive symmetric relations and it is trivial that the variety of sets is strongly tolerance factorable. Groups On a group, every tolerance relation is a congruence relation. In particular, this is true for all algebraic structures that are groups when some of their operations are forgot, e.g. rings, vector spaces, modules, Boolean algebras, etc. Therefore, the varieties of groups, rings, vector spaces, modules and Boolean algebras are also strongly tolerance factorable trivially. Lattices For a tolerance relation on a lattice , every set in is a convex sublattice of . Thus, for all , we have In particular, the following results hold. if and only if . If and , then . The variety of lattices is strongly tolerance factorable. That is, given any lattice and any tolerance relation on , for each there exist unique such that and the quotient algebra is a lattice again. In particular, we can form quotient lattices of distributive lattices and modular lattices over tolerance relations. However, unlike in the case of congruence relations, the quotient lattices need not be distributive or modular again. In other words, the varieties of distributive lattices and modular lattices are tolerance factorable, but not strongly tolerance factorable. Actually, every subvariety of the variety of lattices is tolerance factorable, and the only strongly tolerance factorable subvariety other than itself is the trivial subvariety (consisting of one-element lattices). This is because every lattice is isomorphic to a sublattice of the quotient lattice over a tolerance relation of a sublattice of a direct product of two-element lattices. See also Dependency relation Quasitransitive relation—a generalization to formalize indifference in social choice theory Rough set References Further reading Gerasin, S. N., Shlyakhov, V. V., and Yakovlev, S. V. 2008. Set coverings and tolerance relations. Cybernetics and Sys. Anal. 44, 3 (May 2008), 333–340. Hryniewiecki, K. 1991, Relations of Tolerance, FORMALIZED MATHEMATICS, Vol. 2, No. 1, January–February 1991. Universal algebra Lattice theory Reflexive relations Symmetric relations Approximations
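Because the inline symbols in the binary-relation definition above were lost in extraction, the following block restates that definition in one standard notation; writing the algebra as (A, F) and the tolerance as T is an assumption of this sketch, not the article's own notation.

```latex
% A tolerance on an algebra (A, F) is a binary relation T on A such that
\begin{align*}
  &\text{(reflexivity)}   && a \mathrel{T} a \quad \text{for all } a \in A,\\
  &\text{(symmetry)}      && a \mathrel{T} b \implies b \mathrel{T} a,\\
  &\text{(compatibility)} && a_i \mathrel{T} b_i \ (i = 1,\dots,n) \implies
      f(a_1,\dots,a_n) \mathrel{T} f(b_1,\dots,b_n)
      \quad \text{for every } n\text{-ary } f \in F.
\end{align*}
% A congruence relation is a tolerance relation that is, in addition, transitive.
```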
Tolerance relation
[ "Physics", "Mathematics" ]
1,417
[ "Lattice theory", "Universal algebra", "Symmetric relations", "Fields of abstract algebra", "Mathematical relations", "Order theory", "Approximations", "Symmetry" ]
26,014,806
https://en.wikipedia.org/wiki/Swedish%20Radiation%20Safety%20Authority
The Swedish Radiation Safety Authority () is the Swedish government authority responsible for radiation protection. It operates under the Ministry of the Environment. It was created on 1 July 2008 with the merging of the Swedish Nuclear Power Inspectorate and the Swedish Radiation Protection Authority. It employs 300 people and is located in Stockholm, with an annual budget of about 400 million Swedish krona. Its Director-General is Nina Cromnier. On 1 March 2022, the Swedish Radiation Safety Authority increased its readiness to handle a "radiological emergency" in the wake of the Russian invasion of Ukraine. References External links Official Website in English Government agencies of Sweden Medical and health organizations based in Sweden Radiation protection organizations Radiology organizations
Swedish Radiation Safety Authority
[ "Physics", "Engineering" ]
141
[ "Nuclear and atomic physics stubs", "Radiation protection organizations", "Nuclear organizations", "Nuclear physics" ]
40,071,018
https://en.wikipedia.org/wiki/Liouville%E2%80%93Arnold%20theorem
In dynamical systems theory, the Liouville–Arnold theorem states that if, in a Hamiltonian dynamical system with n degrees of freedom, there are also n independent, Poisson commuting first integrals of motion, and the level sets of all first integrals are compact, then there exists a canonical transformation to action-angle coordinates in which the transformed Hamiltonian is dependent only upon the action coordinates and the angle coordinates evolve linearly in time. Thus the equations of motion for the system can be solved in quadratures if the level simultaneous set conditions can be separated. The theorem is named after Joseph Liouville and Vladimir Arnold. History The theorem was proven in its original form by Liouville in 1853 for functions on with canonical symplectic structure. It was generalized to the setting of symplectic manifolds by Arnold, who gave a proof in his textbook Mathematical Methods of Classical Mechanics published 1974. Statement Preliminary definitions Let be a -dimensional symplectic manifold with symplectic structure . An integrable system on is a set of functions on , labelled , satisfying (Generic) linear independence: on a dense set Mutually Poisson commuting: the Poisson bracket vanishes for any pair of values . The Poisson bracket is the Lie bracket of vector fields of the Hamiltonian vector field corresponding to each . In full, if is the Hamiltonian vector field corresponding to a smooth function , then for two smooth functions , the Poisson bracket is . A point is a regular point if . The integrable system defines a function . Denote by the level set of the functions , or alternatively, . Now if is given the additional structure of a distinguished function , the Hamiltonian system is integrable if can be completed to an integrable system, that is, there exists an integrable system . Theorem If is an integrable Hamiltonian system, and is a regular point, the theorem characterizes the level set of the image of the regular point : is a smooth manifold which is invariant under the Hamiltonian flow induced by (and therefore under Hamiltonian flow induced by any element of the integrable system). If is furthermore compact and connected, it is diffeomorphic to the N-torus . There exist (local) coordinates on such that the are constant on the level set while . These coordinates are called action-angle coordinates. Examples of Liouville-integrable systems A Hamiltonian system which is integrable is referred to as 'integrable in the Liouville sense' or 'Liouville-integrable'. Famous examples are given in this section. Some notation is standard in the literature. When the symplectic manifold under consideration is , its coordinates are often written and the canonical symplectic form is . Unless otherwise stated, these are assumed for this section. Harmonic oscillator: with . Defining , the integrable system is . Central force system: with with some potential function. Defining the angular momentum , the integrable system is . Integrable tops: The Lagrange, Euler and Kovalevskaya tops are integrable in the Liouville sense. See also Frobenius integrability: a more general notion of integrability. Integrable systems References Hamiltonian mechanics Integrable systems Theorems in dynamical systems
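As a concrete illustration of the harmonic-oscillator entry in the list of Liouville-integrable examples above, here is a one-degree-of-freedom sketch with explicit action-angle variables; the symbols and normalizations are conventional choices assumed here, not reproduced from the article.

```latex
% One-dimensional harmonic oscillator on R^2 with symplectic form dp ^ dq:
%   H(q, p) = (p^2 + omega^2 q^2) / 2.
% H is a first integral and its level sets H = E > 0 are compact ellipses,
% so the theorem applies.  Action-angle coordinates are
\begin{align*}
  I &= \frac{H}{\omega} = \frac{p^{2} + \omega^{2} q^{2}}{2\omega}, &
  \theta &= \arctan\!\left(\frac{\omega q}{p}\right),\\
  \dot I &= 0, & \dot\theta &= \frac{\partial H}{\partial I} = \omega ,
\end{align*}
% so the transformed Hamiltonian H = omega * I depends on the action alone and
% the angle evolves linearly in time, exactly as the theorem asserts.
```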
Liouville–Arnold theorem
[ "Physics", "Mathematics" ]
685
[ "Theorems in dynamical systems", "Mathematical theorems", "Integrable systems", "Theoretical physics", "Classical mechanics", "Hamiltonian mechanics", "Mathematical problems", "Dynamical systems" ]
40,072,705
https://en.wikipedia.org/wiki/Dirk%20Willem%20van%20Krevelen
Dirk Willem van Krevelen (8 November 1914, Rotterdam – 27 October 2001, Arnhem) was a prominent Dutch chemical engineer, coal and polymer scientist. He successfully combined an industrial career, managing a research division at DSM, and an academic career, as a professor at Delft Technical College. His contributions span a wide range of research fields, and his name is linked to the van Krevelen–Hoftyzer diagram for chemical gas absorption, the Mars–van Krevelen mechanism for catalytic oxidation reactions, the van Krevelen–Chermin method to estimate the free energy of organic compounds, the van Krevelen diagram that is used in coal and coal processes, the van Krevelen method to calculate additive properties of polymers, and the van Krevelen–Hoftyzer relationship on the viscosity of polymer fluids. He is the author of numerous scientific publications and several classic monographs, amongst which are Coal: Typology, Chemistry, Physics, Constitution and Properties of Polymers: Correlations with Chemical Structure. Early life Dirk van Krevelen was born in Rotterdam to the family of bookkeeper Dirk Willem van Krevelen Sr and Huberta van Krevelen (née Regoort). Education From 1927 to 1933 he studied at Marnix Gymnasium in Rotterdam. In 1933 he was enrolled at Leiden University and studied chemistry under Anton Eduard van Arkel. There he received his "kandidaats" (bachelor's) degree in 1935 and "doctoraal" (master's) degree in 1938. In parallel, van Krevelen also completed his minor in chemical technology under Professor Hein Waterman at Delft Technical College. Career At the time when Dirk van Krevelen worked with Professor Waterman, Mr Waterman was a scientific advisor to Royal Dutch Shell, which funded the employment of three private assistants who performed fundamental research on oil products and processes. From 1937, van Krevelen was employed as one of these assistants to Professor Waterman, and worked on three topics: the chemical thermodynamics of oil hydrocarbons, the polymerization of ethylene, as part of attempts to improve the anti-knock properties of gasoline, and the induced pyrolysis of methane. The latter project became the topic of his doctorate (1939). At the start of World War II, Shell stopped employing new scientists, and soon Professor Waterman, who was Jewish, was forced to retire from Shell. Yet he managed to help Dirk van Krevelen obtain a research position in the newly created Central Laboratory of the Dutch State Mines (DSM) starting from 1940. The Central Laboratory was headed by Gerrit Berkhoff. van Krevelen began his research activities in DSM's physical chemistry department. In 1943, van Krevelen became a department manager, as head of the newly created research department on chemical engineering. In 1948, he was promoted to the position of research leader of the Central Laboratory of DSM, a position in which he was responsible for directing the research activities. In 1955, van Krevelen became head of the Central Laboratory. In 1959, van Krevelen left DSM and joined the Algemene Kunstzijde Unie (AKU; General Rayon Union), a polymer company. van Krevelen became a member of the board of directors of AKU with the special task of supervising the research and development activities of the company. In 1969, AKU merged with Koninklijke Zout Organon (KZO) to become AKZO. van Krevelen became president of AKZO Research and Engineering, a post he held until he retired from AKZO in 1976. 
Contributions In 1951, van Krevelen was one of the founding editors of the journal Chemical Engineering Science. van Krevelen took also part in the organization of the 1st European Symposium on Chemical Reaction Engineering which was held in Amsterdam in 1957. Private life In July 1939 Dirk van Krevelen married Frieda Kreisel. They had three sons and one daughter. Death Dirk Willem van Krevelen died on 27 October 2001 in Arnhem. Selected works De geïnduceerde pyrolyse van methaan. Dissertation, Technological University of Delft, 19 December 1939. With H. A. J. Pieters. The Wet Purification of Coal Gas and Similar Gases by the Staatsmijnen-Otto-Process New York: Elsevier, 1946. With P. J. Hoftijzer. Kinetics of Gas-Liquid Reactions. Part I:General Theory. Recueil des Travaux Chimiques des Pays-Bas 67 (1948): 563–568. Graphical-statistical Method for the Study of Structure and Reaction Processes of Coal. Fuel 29 (1950): 269–283. With H. A. G. Chermin. Estimation of the Free Enthalpy (Gibbs Free Energy) of Formation of Organic Compounds from Group Contributions. Chemical Engineering Science 1 (1951): 66–80, 238. With P. Mars. Oxidations Carried Out by Means of Vanadium Oxide Catalysts. Proceedings of the Conference on Oxidation Processes, Held in Amsterdam, 6–8 May 1954. Special Supplement to Chemical Engineering Science 3 (1954): 41–57 . Coal: Typology, Chemistry, Physics, Constitution. Amsterdam:Elsevier, 1961. [First edition, 1957, published (with Jan Schuijer) as Coal Science: Aspects of Coal Constitution.] 3rd ed.,1981; 4th ed., 1993. Waterman en de steenkoolchemie. In De Oogst: een overzicht van het wetenschappelijk werk van Prof. dr. ir. H. I. Waterman, te zamen gebracht ter gelegenheid van zijn aftreden als hoogleraar in de chemische technologie aan de Technische Hogeschool te Delft, pp. 24–29, n.p., n.d. [Delft, 1959]. Werdegang und Weg in der chemischen Technologie: Arbeitserinnerungen und Ausblick. Darmstadt: Technische Hochschule, 1966. Lecture given on the occasion of receiving the honorary doctorate. With P. J. Hoftyzer. Properties of Polymers: Correlations with Chemical Structure. Amsterdam: Elsevier, 1972; 2nd ed., 1976; 3rd ed., 1990. Selected Papers on Chemical Engineering Science. Amsterdam:Elsevier, 1976. In Retrospect: Een keuze uit de voordrachten. Amsterdam: Meulenhoff, 1980. With a comprehensive bibliography of his over 250 publications up to 1980. Sleutelwoorden in de proefondervindelijke wijsbegeerte. Rotterdam: Bataafsch Genootschap, 1987. Professor Hein Israel Waterman, 1889–1961: Onderzoeker—Vernieuwer—Leermeester.” In Waterman Symposium: Aula TU Delft, 28 April 1989, voordrachtenbundel, 72–84. Delft: Technische Universiteit Delft, 1989. Vijftig jaar activiteit in de Chemische Technologie. In Werken aan scheikunde. 24 memoires van hen die de Nederlandse chemie deze eeuw groot hebben gemaakt, 243–263. Delft: Delft University Press, 1993. His autobiography. References Engineers from Rotterdam Polymer scientists and engineers Academic staff of the Delft University of Technology 1914 births 2001 deaths Chemical engineering academics Leiden University alumni
Dirk Willem van Krevelen
[ "Chemistry", "Materials_science" ]
1,615
[ "Chemical engineers", "Physical chemists", "Polymer chemistry", "Polymer scientists and engineers", "Chemical engineering academics" ]
40,073,915
https://en.wikipedia.org/wiki/Dyakonov%20surface%20wave
Dyakonov surface waves (DSWs) are surface electromagnetic waves that travel along the interface between an isotropic medium and a uniaxial birefringent medium. They were theoretically predicted in 1988 by the Russian physicist Mikhail Dyakonov. Unlike other types of acoustic and electromagnetic surface waves, the DSW's existence is due to the difference in symmetry of the materials forming the interface. Dyakonov considered the interface between an isotropic transmitting medium and an anisotropic uniaxial crystal, and showed that under certain conditions waves localized at the interface should exist. Later, similar waves were predicted to exist at the interface between two identical uniaxial crystals with different orientations. The previously known electromagnetic surface waves, surface plasmons and surface plasmon polaritons, exist under the condition that the permittivity of one of the materials forming the interface is negative, while the other one is positive (for example, this is the case for the air/metal interface below the plasma frequency). In contrast, DSWs can propagate when both materials are transparent; hence they are virtually lossless, which is their most fascinating property. In recent years, the significance and potential of the DSW have attracted the attention of many researchers: a change of the constitutive properties of one or both of the two partnering materials – due to, say, infiltration by any chemical or biological agent – could measurably change the characteristics of the wave. Consequently, numerous potential applications are envisaged, including devices for integrated optics, chemical and biological surface sensing, etc. However, it is not easy to satisfy the necessary conditions for the DSW, and because of this the first proof-of-principle experimental observation of DSW was reported only 20 years after the original prediction. A large body of theoretical work has appeared dealing with various aspects of this phenomenon; see the detailed review. In particular, DSW propagation at magnetic interfaces and in left-handed, electro-optical, and chiral materials was studied. Resonant transmission due to DSW in structures using prisms was predicted, and the combination and interaction between DSW and surface plasmons (Dyakonov plasmons) were studied and observed. Physical properties The simplest configuration considered in Ref. 1 consists of an interface between an isotropic material with permittivity and a uniaxial crystal with permittivities and for the ordinary and the extraordinary waves respectively. The crystal C axis is parallel to the interface. For this configuration, the DSW can propagate along the interface within certain angular intervals with respect to the C axis, provided that the condition of is satisfied. Thus DSW are supported by interfaces with positive birefringent crystals only (). The angular interval is defined by the parameter . The angular intervals for the DSW phase and group velocities ( and ) are different. The phase velocity interval is proportional to and even for the most strongly birefringent natural crystals is very narrow (rutile) and (calomel). However, the physically more important group velocity interval is substantially larger (proportional to ). Calculations give for rutile, and for calomel. 
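The existence condition and the birefringence restriction referred to in the physical-properties paragraph above lost their symbols in extraction. The block below gives the form in which this condition is commonly quoted for the Dyakonov geometry; it is a hedged reconstruction with assumed symbols (isotropic permittivity ε, ordinary and extraordinary permittivities ε_o and ε_e), not a verbatim restoration of the article's formula.

```latex
% Commonly stated existence condition for Dyakonov surface waves:
% the isotropic permittivity must lie between the ordinary and
% extraordinary permittivities of the uniaxial crystal,
\begin{equation*}
  \varepsilon_o < \varepsilon < \varepsilon_e ,
  \qquad\text{which requires positive birefringence } (\varepsilon_e > \varepsilon_o).
\end{equation*}
```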
Perspectives A widespread experimental investigation of DSW material systems and evolution of related practical devices has been largely limited by the stringent anisotropy conditions necessary for successful DSW propagation, particularly the high degree of birefringence of at least one of the constituent materials and the limited number of naturally available materials fulfilling this requirement. However, this is about to change in light of novel artificially engineered metamaterials and revolutionary material synthesis techniques. The extreme sensitivity of DSW to anisotropy, and thereby to stress, along with their low-loss (long-range) character render them particularly attractive for enabling high sensitivity tactile and ultrasonic sensing for next-generation high-speed transduction and read-out technologies. Moreover, the unique directionality of DSW can be used for the steering of optical signals. See also Dyakonov–Voigt wave Surface wave Leaky mode References Condensed matter physics Surface science Surface waves
Dyakonov surface wave
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
854
[ "Physical phenomena", "Surface waves", "Phases of matter", "Materials science", "Surface science", "Waves", "Condensed matter physics", "Matter" ]
40,074,737
https://en.wikipedia.org/wiki/Glycoprotein%20Ib-IX-V%20complex
The GPIb-IX-V complex is an abundant membrane receptor complex originating in megakaryocytes and exclusively functional on the surface of platelets. It primarily functions to mediate the first critical step in platelet adhesion, by facilitating binding to von Willebrand factor (VWF) on damaged sub-endothelium under conditions of high fluid shear stress. Although the primary ligand for the GPIb-V-IX receptor is VWF, it can also bind to a number of other ligands in the circulation such as thrombin, P-selectin, factor XI, factor XII, high molecular weight kininogen as well as bacteria. GPIb-IX-V plays a critical role in thrombosis, metastasis, and the life cycle of platelets, and is implicated in a number of thrombotic pathological processes such as stroke or myocardial infarction. Molecular structure Overview GPIb-IX-V consists of four different subunits, namely GPIbα (molecular weight (MW) 135 kDa), GPIbβ (MW 26 kDa), GPIX (MW 20 kDa) and GPV (MW 82 kDa). The complex is assembled such that GPIbα, GPIbβ and GPIX form a highly integrated protein complex in a 1:2:1 stoichiometry; this associates weakly with GPV, resulting in an overall stoichiometric ratio of 1:1. Each subunit of the complex is a type I transmembrane (TM) protein which consists of a leucine-rich repeat (LRR) ectodomain (extracellular domain), a single transmembrane helix, and a relatively short cytoplasmic tail that lacks enzymatic activity. The quaternary stabilization of the receptor is facilitated by covalent and non-covalent interactions. The GPIbα subunit is linked to two GPIbβ subunits via membrane-proximal disulfide bonds, while GPIX associates itself tightly through non-covalent interactions with GPIb. The concomitant expression of all three subunits is required to allow the effective expression of GPIb-IX on the platelet cell surface, and analysis of receptor expression in transfected Chinese hamster ovary (CHO) cells has further supported the view that the interaction between these subunits also acts to stabilize them. Each of the four subunits (GPIbα, GPIbβ, GPIX and GPV) is part of the leucine rich repeat motif superfamily. These leucine rich repeat sequences tend to be about 24 amino acids in length, occurring either singly or in tandem repeats flanked by conserved N-terminal and C-terminal disulfide loop structures. Nevertheless, even though these structural similarities exist, distinctive genes that exist on different chromosomes of the genome code for the polypeptides that make up the GPIb-V-IX complex. The four genes that code for the components of the receptor in humans have a simple organization in which the coding sequence is contained within a single exon. This is with the exception of the gene for GPIbβ, which contains an intron 10 bases following the start codon. Human GPIbα is the product of a gene on chromosome 17, specifically 17p12; GPIbβ is the product of a gene on chromosome 22, specifically 22q11.2; while GPV and GPIX are products of genes found on chromosome 3, specifically 3q21 and 3q29 respectively. Under normal conditions, all four molecules are expressed exclusively in the platelet lineage. GPIbα, GPIbβ and GPIX are necessary for the effective biosynthesis of the receptor and are closely associated at the platelet membrane. Typically, a lack of a single subunit significantly decreases the surface expression of the entire receptor complex. 
GPIbα GPIbα (CD42b) consisting of 610 amino acids is the major subunit and contains all known extracellular ligand-binding sites of the complex for example: the A1 domain of von Willebrand factor (VWF) has a binding region as marked in the N-terminal domain of GPIbα; while the thrombin binding site is contained in a conformationally flexible acidic residue-rich sequence containing sulfated tyrosines. Dissection of the crystal structure of the GPIbα N-terminal leucine rich repeat domain discloses the presence of a single disulfide bond between cysteine (Cys) residues Cys4 and Cys17 in the N-capping region, and two disulfide bonds (Cys209-Cys248 and Cys211-Cys264) in the C-capping region. Furthermore, there are seven tandem leucine rich repeats and their flanking sequences in the central parallel β-coil region. This parallel β-coil region is made up of three sided coils stacked in layers and contains two asparagine residues (Asn21 and Asn159), which serve as N-glycosylation sites. Following the leucine rich repeat domain is the acidic residue-rich sequence containing sulfated tyrosines, the highly O-glycosylated macroglycopeptide, a stalk region of about 40 to 50 residues, a single transmembrane sequence and finally a cytoplasmic tail containing 96 amino acid residues which includes serine residues such as Ser587, Ser590 and Ser609 that can be phosphorylated. GPIbβ, GPIX, GPV GPIbβ (CD42c) contains 181 amino acids. In the extracellular domain (ectodomain), both the N-capping and C-capping regions, which flank the leucine rich repeat sequence, contain two interlocking disulfide bonds. Furthermore, there is only a single leucine-rich repeat giving rise to a much less curved parallel β-coil region as compared to that in GPIbα. GPIbβ contains only one N-glycosylation site (Asn41) and is disulfide linked to GPIbα immediately proximal to the plasma membrane of the platelet via Cys122 located at the junction of the extracellular and transmembrane domains. The GPIbβ cytoplasmic domain has a sequence of 34 amino acids. The region adjacent to the membrane is enriched in basic residues and Ser166 found more distally is phosphorylated and appears to have a role in platelet cytoskeletal rearrangement. GPIX (CD42a) contains 160 amino acids. The extracellular domain, which also only has a single leucine rich repeat sequence shares more than 45% sequence identity with GPIbβ counterpart. However, the transmembrane and cytoplasmic sequences are considerably different. The GPIX cytoplasmic tail is short consisting of 8 residues and is not known to associate with intracellular proteins. There is also a cysteine residue (Cys154) located at the junction of the transmembrane and cytoplasmic domains. The extracellular domain of GPV contains 13 leucine rich repeats flanked by N- and C- capping regions both containing two interlocking disulfide bonds. This is followed by a stalk region, the transmembrane sequence and a short cytoplasmic tail rich in basic residues. The GPV (CD42d) subunit is only weakly associated with the GPIb-IX part of the receptor complex through interactions between the transmembrane domains and has little impact on the surface expression of GPIb-IX, although GPIb-IX is required for efficient expression of GPV. Furthermore, GPV doesn’t appear to be critical for VWF binding or signal transduction. 
Role in disease Abnormalities of the GPIb-V-IX complex result in abnormal appearance and functioning of platelets, causing Bernard–Soulier syndrome (BSS), a condition first described by J. Bernard and J.-P. Soulier. It is a rare hereditary bleeding disorder, most commonly with an autosomal recessive inheritance, and is diagnosed based on prolonged skin-bleeding time, a reduced number of very large platelets (macrothrombocytopenia) and defective ristocetin-induced platelet agglutination. Bernard–Soulier syndrome is characterized by little or no expression of GPIb-IX on the surface of platelets, which in turn has the same effect on GPV. A number of mutations associated with BSS patients have been mapped to GPIbα, GPIbβ and GPIX, demonstrating that all three subunits are required for effective surface expression of the complex on platelets. References Glycoproteins Transmembrane receptors
Glycoprotein Ib-IX-V complex
[ "Chemistry" ]
1,822
[ "Transmembrane receptors", "Glycobiology", "Glycoproteins", "Signal transduction" ]
40,076,007
https://en.wikipedia.org/wiki/Sandi%20Klav%C5%BEar
Sandi Klavžar (born 5 February 1962) is a Slovenian mathematician working in the area of graph theory and its applications. He is a professor of mathematics at the University of Ljubljana. Education Klavžar received his Ph.D. from the University of Ljubljana in 1990, under the supervision of Wilfried Imrich and Tomaž Pisanski. Research Klavžar's research concerns graph products, metric graph theory, chemical graph theory, graph domination, and the Tower of Hanoi. Together with Wilfried Imrich and Richard Hammack, he is the author of the book Handbook of Product Graphs (CRC Press, 2011). Together with Andreas M. Hinz, Uroš Milutinović, and Ciril Petr, he is the author of the book The Tower of Hanoi – Myths and Maths (Springer, Basel, 2013). Awards and honors In 2007, Klavžar received the Zois award for exceptional contributions to science and mathematics. References External links Home page at the University of Ljubljana Living people 20th-century Slovenian mathematicians Graph theorists Mathematical chemistry University of Ljubljana alumni Academic staff of the University of Ljubljana 1962 births 21st-century Slovenian mathematicians
Sandi Klavžar
[ "Chemistry", "Mathematics" ]
239
[ "Drug discovery", "Applied mathematics", "Graph theory", "Theoretical chemistry", "Mathematical chemistry", "Molecular modelling", "Mathematical relations", "Graph theorists" ]
40,076,902
https://en.wikipedia.org/wiki/Presymplectic%20form
In geometric mechanics a presymplectic form is a closed differential 2-form of constant rank on a manifold. However, some authors use different definitions. Recently, Hajduk and Walczak defined a presymplectic form as a closed differential 2-form of maximal rank on a manifold of odd dimension. A symplectic form is a presymplectic form that is also nondegenerate. Lack of nondegeneracy, leading to presymplectic forms, occurs in dynamical systems with singular Lagrangians, Hamiltonian systems with constraints and control theory. References Dynamical systems Differential geometry
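A compact restatement of the definitions above in symbols may be helpful; the names of the manifold and of the form below are conventional choices assumed for this sketch.

```latex
% A 2-form omega on a manifold M is presymplectic if it is closed and of constant rank:
\begin{equation*}
  d\omega = 0, \qquad \operatorname{rank}\,\omega_x \ \text{constant in } x \in M;
\end{equation*}
% it is symplectic if, in addition, it is nondegenerate at every point:
\begin{equation*}
  \iota_v \omega_x = 0 \implies v = 0 \qquad \text{for } v \in T_x M .
\end{equation*}
```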
Presymplectic form
[ "Physics", "Mathematics" ]
129
[ "Mechanics", "Dynamical systems" ]
3,946,232
https://en.wikipedia.org/wiki/Fundamental%20vector%20field
In the study of mathematics, and especially of differential geometry, fundamental vector fields are instruments that describe the infinitesimal behaviour of a smooth Lie group action on a smooth manifold. Such vector fields find important applications in the study of Lie theory, symplectic geometry, and the study of Hamiltonian group actions. Motivation Important to applications in mathematics and physics is the notion of a flow on a manifold. In particular, if is a smooth manifold and is a smooth vector field, one is interested in finding integral curves to . More precisely, given one is interested in curves such that: for which local solutions are guaranteed by the Existence and Uniqueness Theorem of Ordinary Differential Equations. If is furthermore a complete vector field, then the flow of , defined as the collection of all integral curves for , is a diffeomorphism of . The flow given by is in fact an action of the additive Lie group on . Conversely, every smooth action defines a complete vector field via the equation: It is then a simple result that there is a bijective correspondence between actions on and complete vector fields on . In the language of flow theory, the vector field is called the infinitesimal generator. Intuitively, the behaviour of the flow at each point corresponds to the "direction" indicated by the vector field. It is a natural question to ask whether one may establish a similar correspondence between vector fields and more arbitrary Lie group actions on . Definition Let be a Lie group with corresponding Lie algebra . Furthermore, let be a smooth manifold endowed with a smooth action . Denote the map such that , called the orbit map of corresponding to . For , the fundamental vector field corresponding to is any of the following equivalent definitions: where is the differential of a smooth map and is the zero vector in the vector space . The map can then be shown to be a Lie algebra homomorphism. Applications Lie groups The Lie algebra of a Lie group may be identified with either the left- or right-invariant vector fields on . It is a well-known result that such vector fields are isomorphic to , the tangent space at identity. In fact, if we let act on itself via right-multiplication, the corresponding fundamental vector fields are precisely the left-invariant vector fields. Hamiltonian group actions In the motivation, it was shown that there is a bijective correspondence between smooth actions and complete vector fields. Similarly, there is a bijective correspondence between symplectic actions (the induced diffeomorphisms are all symplectomorphisms) and complete symplectic vector fields. A closely related idea is that of Hamiltonian vector fields. Given a symplectic manifold , we say that is a Hamiltonian vector field if there exists a smooth function satisfying where the map is the interior product. This motivates the definition of a Hamiltonian group action as follows: If is a Lie group with Lie algebra and is a group action of on a smooth manifold , then we say that is a Hamiltonian group action if there exists a moment map such that for each: , where and is the fundamental vector field of . References Lie groups Symplectic geometry Hamiltonian mechanics Smooth manifolds
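Since the formulas in the definition above were stripped out in extraction, the block below gives the standard expression for a fundamental vector field, written with assumed but conventional symbols rather than the article's own notation.

```latex
% For a smooth action theta : G x M -> M of a Lie group G with Lie algebra g,
% the orbit map of a point p in M is theta_p : G -> M, theta_p(g) = theta(g, p).
% The fundamental vector field associated with xi in g is
\begin{equation*}
  X_\xi(p) \;=\; \left.\frac{d}{dt}\right|_{t=0} \theta\!\bigl(\exp(t\xi),\, p\bigr)
            \;=\; (d\theta_p)_e(\xi),
\end{equation*}
% where (d theta_p)_e is the differential of the orbit map at the identity e of G.
% With the article's sign conventions, xi -> X_xi is a Lie algebra homomorphism
% from g into the vector fields on M.
```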
Fundamental vector field
[ "Physics", "Mathematics" ]
638
[ "Lie groups", "Mathematical structures", "Theoretical physics", "Classical mechanics", "Hamiltonian mechanics", "Algebraic structures", "Dynamical systems" ]
3,947,316
https://en.wikipedia.org/wiki/Non-stoichiometric%20compound
Non-stoichiometric compounds are chemical compounds, almost always solid inorganic compounds, having an elemental composition whose proportions cannot be represented by a ratio of small natural numbers (i.e. an empirical formula); most often, in such materials, some small percentage of atoms are missing or too many atoms are packed into an otherwise perfect latticework. Contrary to earlier definitions, the modern understanding of non-stoichiometric compounds views them as homogeneous, and not mixtures of stoichiometric chemical compounds. Since the solids are overall electrically neutral, the defect is compensated by a change in the charge of other atoms in the solid, either by changing their oxidation state, or by replacing them with atoms of different elements with a different charge. Many metal oxides and sulfides have non-stoichiometric examples; for example, stoichiometric iron(II) oxide, which is rare, has the formula FeO, whereas the more common material is nonstoichiometric, with the formula Fe0.95O. The type of equilibrium defects in non-stoichiometric compounds can vary with attendant variation in bulk properties of the material. Non-stoichiometric compounds also exhibit special electrical or chemical properties because of the defects; for example, when atoms are missing, electrons can move through the solid more rapidly. Non-stoichiometric compounds have applications in ceramic and superconductive materials and in electrochemical (i.e., battery) system designs. Occurrence Iron oxides Nonstoichiometry is pervasive for metal oxides, especially when the metal is not in its highest oxidation state. For example, although wüstite (ferrous oxide) has an ideal (stoichiometric) formula FeO, the actual stoichiometry is closer to Fe0.95O. The non-stoichiometry reflects the ease of oxidation of Fe2+ to Fe3+, effectively replacing a small portion of Fe2+ with two thirds their number of Fe3+. Thus for every three "missing" Fe2+ ions, the crystal contains two Fe3+ ions to balance the charge. The composition of a non-stoichiometric compound usually varies in a continuous manner over a narrow range. Thus, the formula for wüstite is written as Fe1−xO, where x is a small number (0.05 in the previous example) representing the deviation from the "ideal" formula. Nonstoichiometry is especially important in solid, three-dimensional polymers that can tolerate mistakes. To some extent, entropy drives all solids to be non-stoichiometric. But for practical purposes, the term describes materials where the non-stoichiometry is measurable, usually at least 1% of the ideal composition. Iron sulfides The monosulfides of the transition metals are often nonstoichiometric. Best known perhaps is nominally iron(II) sulfide (the mineral pyrrhotite) with a composition Fe1−xS (x = 0 to 0.2). The rare stoichiometric endmember, FeS, is known as the mineral troilite. Pyrrhotite is remarkable in that it has numerous polytypes, i.e. crystalline forms differing in symmetry (monoclinic or hexagonal) and composition (Fe7S8, Fe9S10, and others). These materials are always iron-deficient owing to the presence of lattice defects, namely iron vacancies. Despite those defects, the composition is usually expressed as a ratio of large numbers and the crystal symmetry is relatively high. This means the iron vacancies are not randomly scattered over the crystal, but form certain regular configurations. Those vacancies strongly affect the magnetic properties of pyrrhotite: the magnetism increases with the concentration of vacancies and is absent for stoichiometric FeS. 
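As a worked check on the wüstite charge balance described in the iron oxides paragraph above, the following short derivation uses the x = 0.05 composition quoted in the text; the per-oxygen bookkeeping is a presentation choice of this sketch.

```latex
% Charge balance in wüstite Fe(1-x)O, counted per oxygen atom (per O^{2-}):
%   total iron:        n(Fe^{2+}) + n(Fe^{3+}) = 1 - x
%   electroneutrality: 2 n(Fe^{2+}) + 3 n(Fe^{3+}) = 2
\begin{align*}
  n(\mathrm{Fe^{2+}}) + n(\mathrm{Fe^{3+}}) &= 1 - x, \\
  2\,n(\mathrm{Fe^{2+}}) + 3\,n(\mathrm{Fe^{3+}}) &= 2
  \;\Longrightarrow\;
  n(\mathrm{Fe^{3+}}) = 2x, \quad n(\mathrm{Fe^{2+}}) = 1 - 3x .
\end{align*}
% For x = 0.05 (Fe0.95O): 0.85 Fe^{2+} and 0.10 Fe^{3+} per O, and indeed
% 2(0.85) + 3(0.10) = 2.0 -- each vacancy is balanced by two Fe^{3+},
% i.e. three Fe^{2+} are effectively replaced by two Fe^{3+}.
```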
Palladium hydrides Palladium hydride is a nonstoichiometric material of the approximate composition (0.02 < x < 0.58). This solid conducts hydrogen by virtue of the mobility of the hydrogen atoms within the solid. Tungsten oxides It is sometimes difficult to determine if a material is non-stoichiometric or if the formula is best represented by large numbers. The oxides of tungsten illustrate this situation. Starting from the idealized material tungsten trioxide, one can generate a series of related materials that are slightly deficient in oxygen. These oxygen-deficient species can be described as , but in fact they are stoichiometric species with large unit cells with the formulas , where n = 20, 24, 25, 40. Thus, the last species can be described with the stoichiometric formula , whereas the non-stoichiometric description implies a more random distribution of oxide vacancies. Other cases At high temperatures (1000 °C), titanium sulfides present a series of non-stoichiometric compounds. The coordination polymer Prussian blue, nominally and their analogs are well known to form in non-stoichiometric proportions. The non-stoichiometric phases exhibit useful properties vis-à-vis their ability to bind caesium and thallium ions. Applications Oxidation catalysis Many useful compounds are produced by the reactions of hydrocarbons with oxygen, a conversion that is catalyzed by metal oxides. The process operates via the transfer of "lattice" oxygen to the hydrocarbon substrate, a step that temporarily generates a vacancy (or defect). In a subsequent step, the missing oxygen is replenished by O2. Such catalysts rely on the ability of the metal oxide to form phases that are not stoichiometric. An analogous sequence of events describes other kinds of atom-transfer reactions including hydrogenation and hydrodesulfurization catalysed by solid catalysts. These considerations also highlight the fact that stoichiometry is determined by the interior of crystals: the surfaces of crystals often do not follow the stoichiometry of the bulk. The complex structures on surfaces are described by the term "surface reconstruction". Ion conduction The migration of atoms within a solid is strongly influenced by the defects associated with non-stoichiometry. These defect sites provide pathways for atoms and ions to migrate through the otherwise dense ensemble of atoms that form the crystals. Oxygen sensors and solid state batteries are two applications that rely on oxide vacancies. One example is the CeO2-based sensor in automotive exhaust systems. At low partial pressures of O2, the sensor allows the introduction of increased air to effect more thorough combustion. Superconductivity Many superconductors are non-stoichiometric. For example, yttrium barium copper oxide, arguably the most notable high-temperature superconductor, is a non-stoichiometric solid with the formula YxBa2Cu3O7−x. The critical temperature of the superconductor depends on the exact value of x. The stoichiometric species has x = 0, but this value can be as great as 1. History It was mainly through the work of Nikolai Semenovich Kurnakov and his students that Berthollet's opposition to Proust's law was shown to have merit for many solid compounds. Kurnakov divided non-stoichiometric compounds into berthollides and daltonides depending on whether their properties showed monotonic behavior with respect to composition or not. The term berthollide was accepted by IUPAC in 1960. 
The names come from Claude Louis Berthollet and John Dalton, respectively, who in the 19th century advocated rival theories of the composition of substances. Although Dalton "won" for the most part, it was later recognized that the law of definite proportions had important exceptions. See also F-Center Vacancy defect References Further reading F. Albert Cotton, Geoffrey Wilkinson, Carlos A. Murillo & Manfred Bochmann, 1999, Advanced Inorganic Chemistry, 6th Edn., pp. 202, 271, 316, 777, 888. 897, and 1145, New York, NY, USA:Wiley-Interscience, , see , accessed 8 July 2015. Roland Ward, 1963, Nonstoichiometric Compounds, Advances in Chemistry series, Vol. 39, Washington, DC, USA: American Chemical Society, , DOI 10.1021/ba-1964-0039, see , accessed 8 July 2015. J. S. Anderson, 1963, "Current problems in nonstoichiometry (Ch. 1)," in Nonstoichiometric Compounds (Roland Ward, Ed.), pp. 1–22, Advances in Chemistry series, Vol. 39, Washington, DC, USA: American Chemical Society, , DOI 10.1021/ba-1964-0039.ch001, see , accessed 8 July 2015. Solid-state chemistry Inorganic chemistry Non-stoichiometric compounds General chemistry
Non-stoichiometric compound
[ "Physics", "Chemistry", "Materials_science" ]
1,794
[ "Non-stoichiometric compounds", "Condensed matter physics", "nan", "Solid-state chemistry" ]
3,948,656
https://en.wikipedia.org/wiki/Center%20manifold
In the mathematics of evolving systems, the concept of a center manifold was originally developed to determine stability of degenerate equilibria. Subsequently, the concept of center manifolds was realised to be fundamental to mathematical modelling. Center manifolds play an important role in bifurcation theory because interesting behavior takes place on the center manifold and in multiscale mathematics because the long time dynamics of the micro-scale often are attracted to a relatively simple center manifold involving the coarse scale variables. Informal description Saturn's rings capture much center-manifold geometry. Dust particles in the rings are subject to tidal forces, which act characteristically to "compress and stretch". The forces compress particle orbits into the rings, stretch particles along the rings, and ignore small shifts in ring radius. The compressing direction defines the stable manifold, the stretching direction defining the unstable manifold, and the neutral direction is the center manifold. While geometrically accurate, one major difference distinguishes Saturn's rings from a physical center manifold. Like most dynamical systems, particles in the rings are governed by second-order laws. Understanding trajectories requires modeling position and a velocity/momentum variable, to give a tangent manifold structure called phase space. Physically speaking, the stable, unstable and neutral manifolds of Saturn's ring system do not divide up the coordinate space for a particle's position; they analogously divide up phase space instead. The center manifold typically behaves as an extended collection of saddle points. Some position-velocity pairs are driven towards the center manifold, while others are flung away from it. Small perturbations that generally push them about randomly, and often push them out of the center manifold. There are, however, dramatic counterexamples to instability at the center manifold, called Lagrangian coherent structures. The entire unforced rigid body dynamics of a ball is a center manifold. A much more sophisticated example is the Anosov flow on tangent bundles of Riemann surfaces. In that case, the tangent space splits very explicitly and precisely into three parts: the unstable and stable bundles, with the neutral manifold wedged between. Definition The center manifold of a dynamical system is based upon an equilibrium point of that system. A center manifold of the equilibrium then consists of those nearby orbits that neither decay nor grow exponentially quickly. Mathematically, the first step when studying equilibrium points of dynamical systems is to linearize the system, and then compute its eigenvalues and eigenvectors. The eigenvectors (and generalized eigenvectors if they occur) corresponding to eigenvalues with negative real part form a basis for the stable eigenspace. The (generalized) eigenvectors corresponding to eigenvalues with positive real part form the unstable eigenspace. Algebraically, let be a dynamical system, linearized about equilibrium point . The Jacobian matrix defines three main subspaces: the center subspace, spanned by generalized eigenvectors whose eigenvalues satisfy (more generally, ); the stable subspace, spanned by generalized eigenvectors whose eigenvalues satisfy (more generally, ); the unstable subspace, spanned by the generalized eigenvectors whose eigenvalues satisfy (more generally, ). 
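The eigenvalue conditions in the three subspace definitions above lost their formulas in extraction; in the standard (non-generalized) case they read as follows. This is a reconstruction with assumed notation, and the "more generally" variants mentioned in the text, which replace equalities and strict inequalities with spectral-gap bounds, are not reconstructed here.

```latex
% Splitting of the eigenvalues lambda of the Jacobian at the equilibrium:
\begin{align*}
  \text{center subspace } E^{c}   &: \ \operatorname{Re}\lambda = 0,\\
  \text{stable subspace } E^{s}   &: \ \operatorname{Re}\lambda < 0,\\
  \text{unstable subspace } E^{u} &: \ \operatorname{Re}\lambda > 0.
\end{align*}
% Each subspace is spanned by the (generalized) eigenvectors whose
% eigenvalues satisfy the corresponding condition.
```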
Depending upon the application, other invariant subspaces of the linearized equation may be of interest, including center-stable, center-unstable, sub-center, slow, and fast subspaces. If the equilibrium point is hyperbolic (that is, all eigenvalues of the linearization have nonzero real part), then the Hartman-Grobman theorem guarantees that these eigenvalues and eigenvectors completely characterise the system's dynamics near the equilibrium. However, if the equilibrium has eigenvalues whose real part is zero, then the corresponding (generalized) eigenvectors form the center eigenspace. Going beyond the linearization, when we account for perturbations by nonlinearity or forcing in the dynamical system, the center eigenspace deforms to the nearby center manifold. If the eigenvalues are precisely zero (as they are for the ball), rather than just real-part being zero, then the corresponding eigenspace more specifically gives rise to a slow manifold. The behavior on the center (slow) manifold is generally not determined by the linearization and thus may be difficult to construct. Analogously, nonlinearity or forcing in the system perturbs the stable and unstable eigenspaces to a nearby stable manifold and nearby unstable manifold. These three types of manifolds are three cases of an invariant manifold. Corresponding to the linearized system, the nonlinear system has invariant manifolds, each consisting of sets of orbits of the nonlinear system. An invariant manifold tangent to the stable subspace and with the same dimension is the stable manifold. The unstable manifold is of the same dimension and tangent to the unstable subspace. A center manifold is of the same dimension and tangent to the center subspace. If, as is common, the eigenvalues of the center subspace are all precisely zero, rather than just real part zero, then a center manifold is often called a slow manifold. Center manifold theorems The center manifold existence theorem states that if the right-hand side function is ( times continuously differentiable), then at every equilibrium point there exists a neighborhood of some finite size in which there is at least one of a unique stable manifold, a unique unstable manifold, and a (not necessarily unique) center manifold. In example applications, a nonlinear coordinate transform to a normal form can clearly separate these three manifolds. In the case when the unstable manifold does not exist, center manifolds are often relevant to modelling. The center manifold emergence theorem then says that the neighborhood may be chosen so that all solutions of the system staying in the neighborhood tend exponentially quickly to some solution on the center manifold; in formulas, for some rate . This theorem asserts that for a wide variety of initial conditions the solutions of the full system decay exponentially quickly to a solution on the relatively low dimensional center manifold. A third theorem, the approximation theorem, asserts that if an approximate expression for such invariant manifolds, say , satisfies the differential equation for the system to residuals as , then the invariant manifold is approximated by to an error of the same order, namely . Center manifolds of infinite-dimensional or non-autonomous systems However, some applications, such as to dispersion in tubes or channels, require an infinite-dimensional center manifold. The most general and powerful theory was developed by Aulbach and Wanner. 
They addressed non-autonomous dynamical systems in infinite dimensions, with potentially infinite dimensional stable, unstable and center manifolds. Further, they usefully generalised the definition of the manifolds so that the center manifold is associated with eigenvalues such that , the stable manifold with eigenvalues , and unstable manifold with eigenvalues . They proved existence of these manifolds, and the emergence of a center manifold, via nonlinear coordinate transforms. Potzsche and Rasmussen established a corresponding approximation theorem for such infinite dimensional, non-autonomous systems. Alternative backwards theory All the extant theory mentioned above seeks to establish invariant manifold properties of a specific given problem. In particular, one constructs a manifold that approximates an invariant manifold of the given system. An alternative approach is to construct exact invariant manifolds for a system that approximates the given system---called a backwards theory. The aim is to usefully apply theory to a wider range of systems, and to estimate errors and sizes of domain of validity. This approach is cognate to the well-established backward error analysis in numerical modeling. Center manifold and the analysis of nonlinear systems As the stability of the equilibrium correlates with the "stability" of its manifolds, the existence of a center manifold brings up the question about the dynamics on the center manifold. This is analyzed by the center manifold reduction, which, in combination with some system parameter μ, leads to the concepts of bifurcations. Examples The Wikipedia entry on slow manifolds gives more examples. A simple example Consider the system The unstable manifold at the origin is the y axis, and the stable manifold is the trivial set {(0, 0)}. Any orbit not on the stable manifold satisfies an equation of the form for some real constant A. It follows that for any real A, we can create a center manifold by piecing together the curve for x > 0 with the negative x axis (including the origin). Moreover, all center manifolds have this potential non-uniqueness, although often the non-uniqueness only occurs in unphysical complex values of the variables. Delay differential equations often have Hopf bifurcations Another example shows how a center manifold models the Hopf bifurcation that occurs for parameter in the delay differential equation . Strictly, the delay makes this DE infinite-dimensional. Fortunately, we may approximate such delays by the following trick that keeps the dimensionality finite. Define and approximate the time-delayed variable, , by using the intermediaries and . For parameter near critical, , the delay differential equation is then approximated by the system In terms of a complex amplitude and its complex conjugate , the center manifold is and the evolution on the center manifold is This evolution shows the origin is linearly unstable for , but the cubic nonlinearity then stabilises nearby limit cycles as in classic Hopf bifurcation. See also Invariant manifold Stable manifold Lagrangian coherent structure Normally hyperbolic invariant manifold Notes Further reading . External links Online web services to extract center manifolds from a specified system via computer algebra: A simple service to construct center manifolds for autonomous systems A more complicated service to convert a specified ODE system to a normal form Dynamical systems
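The equations of the "simple example" above were lost in extraction. The system that matches the stated manifolds and the pieced-together curves is, as a hedged reconstruction, the example usually quoted for non-unique center manifolds; the symbols below are assumed, not restored verbatim.

```latex
% Standard planar example with a non-unique center manifold:
\begin{align*}
  \dot x &= x^{2}, & \dot y &= y .
\end{align*}
% Away from the stable manifold {(0, 0)}, dy/dx = y / x^2 integrates to
\begin{equation*}
  y = A\, e^{-1/x}, \qquad A \in \mathbb{R},
\end{equation*}
% and each center manifold is the curve y = A e^{-1/x} for x > 0 glued to the
% negative x-axis (including the origin), one manifold for each choice of A.
```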
Center manifold
[ "Physics", "Mathematics" ]
2,011
[ "Mechanics", "Dynamical systems" ]
3,948,734
https://en.wikipedia.org/wiki/Homoclinic%20orbit
In the study of dynamical systems, a homoclinic orbit is a path through phase space which joins a saddle equilibrium point to itself. More precisely, a homoclinic orbit lies in the intersection of the stable manifold and the unstable manifold of an equilibrium. It is a heteroclinic orbit–a path between any two equilibrium points–in which the endpoints are one and the same. Consider the continuous dynamical system described by the ordinary differential equation dx/dt = f(x). Suppose there is an equilibrium at x = x0; then a solution Φ(t) is a homoclinic orbit if Φ(t) → x0 as t → +∞ and as t → −∞. If the phase space has three or more dimensions, then it is important to consider the topology of the unstable manifold of the saddle point. The figures show two cases. First, when the stable manifold is topologically a cylinder, and secondly, when the unstable manifold is topologically a Möbius strip; in this case the homoclinic orbit is called twisted. Discrete dynamical system Homoclinic orbits and homoclinic points are defined in the same way for iterated functions, as the intersection of the stable set and unstable set of some fixed point or periodic point of the system. We also have the notion of homoclinic orbit when considering discrete dynamical systems. In such a case, if f is a diffeomorphism of a manifold M, we say that x is a homoclinic point if it has the same past and future - more specifically, if there exists a fixed (or periodic) point p such that f^n(x) → p as n → ±∞. Properties The existence of one homoclinic point implies the existence of an infinite number of them. This comes from its definition: the intersection of a stable and unstable set. Both sets are invariant by definition, which means that the forward iteration of the homoclinic point is both on the stable and unstable set. By iterating N times, the map approaches the equilibrium point by the stable set, but in every iteration it is on the unstable manifold too, which shows this property. This property suggests that complicated dynamics arise by the existence of a homoclinic point. Indeed, Smale (1967) showed that these points lead to horseshoe-map-like dynamics, which is associated with chaos. Symbolic dynamics By using the Markov partition, the long-time behaviour of a hyperbolic system can be studied using the techniques of symbolic dynamics. In this case, a homoclinic orbit has a particularly simple and clear representation. Suppose that S is a finite set of M symbols. The dynamics of a point x is then represented by a bi-infinite string of symbols σ = (…, s−1, s0, s1, s2, …), with every symbol taken from S. A periodic point of the system is simply a recurring sequence of letters. A heteroclinic orbit is then the joining of two distinct periodic orbits. It may be written as p^ω s q^ω, where p = (p1, p2, …, pk) is a sequence of symbols of length k (of course, each of these symbols belongs to S), and q = (q1, q2, …, qm) is another sequence of symbols, of length m (likewise). The notation p^ω simply denotes the repetition of p an infinite number of times. Thus, a heteroclinic orbit can be understood as the transition from one periodic orbit to another. By contrast, a homoclinic orbit can be written as p^ω s p^ω, with the intermediate sequence s being non-empty, and, of course, not being p, as otherwise, the orbit would simply be p^ω. See also Heteroclinic orbit Homoclinic bifurcation References John Guckenheimer and Philip Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (Applied Mathematical Sciences Vol. 42), Springer External links Homoclinic orbits in Hénon map with Java applets and comments Dynamical systems
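A concrete, frequently used illustration (assumed here for the sake of example; it is not specific to this article) is the system dx/dt = y, dy/dt = x − x³, which has a saddle at the origin and the explicit homoclinic orbit x(t) = √2·sech(t). The short sketch below checks numerically that this curve satisfies the equations and returns to the saddle in both time directions.

```python
# Sketch: verify the explicit homoclinic orbit x(t) = sqrt(2)*sech(t) of the
# assumed example system dx/dt = y, dy/dt = x - x**3 (saddle at the origin).
import numpy as np

t = np.linspace(-10.0, 10.0, 2001)
x = np.sqrt(2.0) / np.cosh(t)                        # sqrt(2)*sech(t)
y = -np.sqrt(2.0) * np.sinh(t) / np.cosh(t) ** 2     # y = dx/dt

# Residual of dy/dt = x - x**3, computed by finite differences; it should be
# small (limited only by the finite-difference step).
residual = np.gradient(y, t) - (x - x**3)
print("max |residual|:", np.max(np.abs(residual)))

# The orbit leaves and re-approaches the saddle (0, 0) as t -> -inf and +inf.
print("state near t = -10:", (x[0], y[0]))
print("state near t = +10:", (x[-1], y[-1]))
```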
Homoclinic orbit
[ "Physics", "Mathematics" ]
734
[ "Mechanics", "Dynamical systems" ]
3,948,758
https://en.wikipedia.org/wiki/Heteroclinic%20orbit
In mathematics, in the phase portrait of a dynamical system, a heteroclinic orbit (sometimes called a heteroclinic connection) is a path in phase space which joins two different equilibrium points. If the equilibrium points at the start and end of the orbit are the same, the orbit is a homoclinic orbit. Consider the continuous dynamical system described by the ordinary differential equation dx/dt = f(x). Suppose there are equilibria at x = x0 and x = x1. Then a solution Φ(t) is a heteroclinic orbit from x0 to x1 if both limits are satisfied: Φ(t) → x0 as t → −∞ and Φ(t) → x1 as t → +∞. This implies that the orbit is contained in the stable manifold of x1 and the unstable manifold of x0. Symbolic dynamics By using the Markov partition, the long-time behaviour of a hyperbolic system can be studied using the techniques of symbolic dynamics. In this case, a heteroclinic orbit has a particularly simple and clear representation. Suppose that S is a finite set of M symbols. The dynamics of a point x is then represented by a bi-infinite string of symbols σ = (…, s−1, s0, s1, s2, …), with every symbol taken from S. A periodic point of the system is simply a recurring sequence of letters. A heteroclinic orbit is then the joining of two distinct periodic orbits. It may be written as p^ω s q^ω, where p = (p1, p2, …, pk) is a sequence of symbols of length k (of course, each of these symbols belongs to S), and q = (q1, q2, …, qm) is another sequence of symbols, of length m (likewise). The notation p^ω simply denotes the repetition of p an infinite number of times. Thus, a heteroclinic orbit can be understood as the transition from one periodic orbit to another. By contrast, a homoclinic orbit can be written as p^ω s p^ω, with the intermediate sequence s being non-empty, and, of course, not being p, as otherwise, the orbit would simply be p^ω. See also Heteroclinic connection Heteroclinic cycle Heteroclinic bifurcation Homoclinic orbit Traveling wave References John Guckenheimer and Philip Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, (Applied Mathematical Sciences Vol. 42), Springer Dynamical systems
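For a concrete illustration (again an assumed standard example rather than anything from the text above), the pendulum system dθ/dt = v, dv/dt = −sin θ has saddle equilibria at θ = ±π, and the separatrix θ(t) = 4·arctan(e^t) − π is a heteroclinic orbit running from the saddle at −π to the saddle at +π. The sketch below checks this numerically.

```python
# Sketch: check the heteroclinic orbit theta(t) = 4*arctan(exp(t)) - pi of the
# assumed pendulum system dtheta/dt = v, dv/dt = -sin(theta), which connects
# the saddles at theta = -pi and theta = +pi.
import numpy as np

t = np.linspace(-12.0, 12.0, 4001)
theta = 4.0 * np.arctan(np.exp(t)) - np.pi
v = 2.0 / np.cosh(t)                            # dtheta/dt = 2*sech(t)

residual = np.gradient(v, t) + np.sin(theta)    # should be ~0 everywhere
print("max |residual|:", np.max(np.abs(residual)))
print("theta at t = -12:", theta[0], "(close to -pi)")
print("theta at t = +12:", theta[-1], "(close to +pi)")
```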
Heteroclinic orbit
[ "Physics", "Mathematics" ]
416
[ "Mechanics", "Dynamical systems" ]
3,949,010
https://en.wikipedia.org/wiki/Cyclic%20compound
A cyclic compound (or ring compound) is a term for a compound in the field of chemistry in which one or more series of atoms in the compound is connected to form a ring. Rings may vary in size from three to many atoms, and include examples where all the atoms are carbon (i.e., are carbocycles), none of the atoms are carbon (inorganic cyclic compounds), or where both carbon and non-carbon atoms are present (heterocyclic compounds with rings containing both carbon and non-carbon). Depending on the ring size, the bond order of the individual links between ring atoms, and their arrangements within the rings, carbocyclic and heterocyclic compounds may be aromatic or non-aromatic; in the latter case, they may vary from being fully saturated to having varying numbers of multiple bonds between the ring atoms. Because of the tremendous diversity allowed, in combination, by the valences of common atoms and their ability to form rings, the number of possible cyclic structures, even of small size (e.g., < 17 total atoms) numbers in the many billions. Adding to their complexity and number, closing of atoms into rings may lock particular atoms with distinct substitution (by functional groups) such that stereochemistry and chirality of the compound results, including some manifestations that are unique to rings (e.g., configurational isomers). As well, depending on ring size, the three-dimensional shapes of particular cyclic structures – typically rings of five atoms and larger – can vary and interconvert such that conformational isomerism is displayed. Indeed, the development of this important chemical concept arose historically in reference to cyclic compounds. Finally, cyclic compounds, because of the unique shapes, reactivities, properties, and bioactivities that they engender, are the majority of all molecules involved in the biochemistry, structure, and function of living organisms, and in man-made molecules such as drugs, pesticides, etc. Structure and classification A cyclic compound or ring compound is a compound in which at least some its atoms are connected to form a ring. Rings vary in size from three to many tens or even hundreds of atoms. Examples of ring compounds readily include cases where: all the atoms are carbon (i.e., are carbocycles), none of the atoms are carbon (inorganic cyclic compounds), or where both carbon and non-carbon atoms are present (heterocyclic compounds with rings containing both carbon and non-carbon). Common atoms can (as a result of their valences) form varying numbers of bonds, and many common atoms readily form rings. In addition, depending on the ring size, the bond order of the individual links between ring atoms, and their arrangements within the rings, cyclic compounds may be aromatic or non-aromatic; in the case of non-aromatic cyclic compounds, they may vary from being fully saturated to having varying numbers of multiple bonds. As a consequence of the constitutional variability that is thermodynamically possible in cyclic structures, the number of possible cyclic structures, even of small size (e.g., <17 atoms) numbers in the many billions. 
Moreover, the closing of atoms into rings may lock particular functional group–substituted atoms into place, resulting in stereochemistry and chirality being associated with the compound, including some manifestations that are unique to rings (e.g., configurational isomers); As well, depending on ring size, the three-dimensional shapes of particular cyclic structures — typically rings of five atoms and larger — can vary and interconvert such that conformational isomerism is displayed. Carbocycles The vast majority of cyclic compounds are organic, and of these, a significant and conceptually important portion are composed of rings made only of carbon atoms (i.e., they are carbocycles). Inorganic cyclic compounds Inorganic atoms form cyclic compounds as well. Examples include sulfur and nitrogen (e.g. heptasulfur imide , trithiazyl trichloride , tetrasulfur tetranitride ), silicon (e.g., cyclopentasilane ), phosphorus and nitrogen (e.g., hexachlorophosphazene ), phosphorus and oxygen (e.g., metaphosphates and other cyclic phosphoric acid derivatives), boron and oxygen (e.g., sodium metaborate , borax), boron and nitrogen (e.g. borazine ). When carbon in benzene is "replaced" by other elements, e.g., as in borabenzene, silabenzene, germanabenzene, stannabenzene, and phosphorine, aromaticity is retained, and so aromatic inorganic cyclic compounds are also known and well-characterized. Heterocyclic compounds A heterocyclic compound is a cyclic compound that has atoms of at least two different elements as members of its ring(s). Cyclic compounds that have both carbon and non-carbon atoms present are heterocyclic carbon compounds, and the name refers to inorganic cyclic compounds as well (e.g., siloxanes, which contain only silicon and oxygen in the rings, and borazines, which contain only boron and nitrogen in the rings). Hantzsch–Widman nomenclature is recommended by the IUPAC for naming heterocycles, but many common names remain in regular use. Macrocycles The term macrocycle is used for compounds having a rings of 8 or more atoms. Macrocycles may be fully carbocyclic (rings containing only carbon atoms, e.g. cyclooctane), heterocyclic containing both carbon and non-carbon atoms (e.g. lactones and lactams containing rings of 8 or more atoms), or non-carbon (containing only non-carbon atoms in the rings, e.g. diselenium hexasulfide). Heterocycles with carbon in the rings may have limited non-carbon atoms in their rings (e.g., in lactones and lactams whose rings are rich in carbon but have limited number of non-carbon atoms), or be rich in non-carbon atoms and displaying significant symmetry (e.g., in the case of chelating macrocycles). Macrocycles can access a number of stable conformations, with preference to reside in conformations that minimize transannular nonbonded interactions within the ring (e.g., with the chair and chair-boat being more stable than the boat-boat conformation for cyclooctane, because of the interactions depicted by the arcs shown). Medium rings (8-11 atoms) are the most strained, with between 9-13 (kcal/mol) strain energy, and analysis of factors important in the conformations of larger macrocycles can be modeled using medium ring conformations. Conformational analysis of odd-membered rings suggests they tend to reside in less symmetrical forms with smaller energy differences between stable conformations. 
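For readers who want to explore these structural classes programmatically, the following sketch (assuming the open-source RDKit cheminformatics toolkit is installed; the molecules chosen are merely illustrative) counts the rings in a few familiar compounds and flags which rings are aromatic or heterocyclic.

```python
# Sketch, assuming RDKit is available: classify the rings of a few molecules
# given as SMILES strings (carbocyclic vs heterocyclic, aromatic vs not).
from rdkit import Chem

examples = {
    "benzene (aromatic carbocycle)": "c1ccccc1",
    "cyclohexane (saturated carbocycle)": "C1CCCCC1",
    "pyridine (aromatic heterocycle)": "c1ccncc1",
    "naphthalene (polycyclic aromatic)": "c1ccc2ccccc2c1",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    rings = mol.GetRingInfo().AtomRings()     # tuples of ring-atom indices
    aromatic = sum(
        all(mol.GetAtomWithIdx(i).GetIsAromatic() for i in ring)
        for ring in rings
    )
    hetero = sum(
        any(mol.GetAtomWithIdx(i).GetSymbol() != "C" for i in ring)
        for ring in rings
    )
    print(f"{name}: {len(rings)} ring(s), {aromatic} aromatic, {hetero} heterocyclic")
```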
Nomenclature IUPAC nomenclature has extensive rules to cover the naming of cyclic structures, both as core structures, and as substituents appended to alicyclic structures. The term macrocycle is used when a ring-containing compound has a ring of 12 or more atoms. The term polycyclic is used when more than one ring appears in a single molecule. Naphthalene is formally a polycyclic compound, but is more specifically named as a bicyclic compound. Several examples of macrocyclic and polycyclic structures are given in the final gallery below. The atoms that are part of the ring structure are called annular atoms. Isomerism Stereochemistry The closing of atoms into rings may lock particular atoms with distinct substitution by functional groups such that the result is stereochemistry and chirality of the compound, including some manifestations that are unique to rings (e.g., configurational isomers). Conformational isomerism Depending on ring size, the three-dimensional shapes of particular cyclic structures—typically rings of 5-atoms and larger—can vary and interconvert such that conformational isomerism is displayed. Indeed, the development of this important chemical concept arose, historically, in reference to cyclic compounds. For instance, cyclohexanes—six membered carbocycles with no double bonds, to which various substituents might be attached, see image—display an equilibrium between two conformations, the chair and the boat, as shown in the image. The chair conformation is the favored configuration, because in this conformation, the steric strain, eclipsing strain, and angle strain that are otherwise possible are minimized. Which of the possible chair conformations predominate in cyclohexanes bearing one or more substituents depends on the substituents, and where they are located on the ring; generally, "bulky" substituents—those groups with large volumes, or groups that are otherwise repulsive in their interactions—prefer to occupy an equatorial location. An example of interactions within a molecule that would lead to steric strain, leading to a shift in equilibrium from boat to chair, is the interaction between the two methyl groups in cis-1,4-dimethylcyclohexane. In this molecule, the two methyl groups are in opposing positions of the ring (1,4-), and their cis stereochemistry projects both of these groups toward the same side of the ring. Hence, if forced into the higher energy boat form, these methyl groups are in steric contact, repel one another, and drive the equilibrium toward the chair conformation. Aromaticity Cyclic compounds may or may not exhibit aromaticity; benzene is an example of an aromatic cyclic compound, while cyclohexane is non-aromatic. In organic chemistry, the term aromaticity is used to describe a cyclic (ring-shaped), planar (flat) molecule that exhibits unusual stability as compared to other geometric or connective arrangements of the same set of atoms. As a result of their stability, it is very difficult to cause aromatic molecules to break apart and to react with other substances. Organic compounds that are not aromatic are classified as aliphatic compounds—they might be cyclic, but only aromatic rings have especial stability (low reactivity). 
Since one of the most commonly encountered aromatic systems of compounds in organic chemistry is based on derivatives of the prototypical aromatic compound benzene (an aromatic hydrocarbon common in petroleum and its distillates), the word “aromatic” is occasionally used to refer informally to benzene derivatives, and this is how it was first defined. Nevertheless, many non-benzene aromatic compounds exist. In living organisms, for example, the most common aromatic rings are the double-ringed bases in RNA and DNA. A functional group or other substituent that is aromatic is called an aryl group. The earliest use of the term “aromatic” was in an article by August Wilhelm Hofmann in 1855. Hofmann used the term for a class of benzene compounds, many of which do have odors (aromas), unlike pure saturated hydrocarbons. Today, there is no general relationship between aromaticity as a chemical property and the olfactory properties of such compounds (how they smell), although in 1855, before the structure of benzene or organic compounds was understood, chemists like Hofmann were beginning to understand that odiferous molecules from plants, such as terpenes, had chemical properties we recognize today are similar to unsaturated petroleum hydrocarbons like benzene. In terms of the electronic nature of the molecule, aromaticity describes a conjugated system often made of alternating single and double bonds in a ring. This configuration allows for the electrons in the molecule's pi system to be delocalized around the ring, increasing the molecule's stability. The molecule cannot be represented by one structure, but rather a resonance hybrid of different structures, such as with the two resonance structures of benzene. These molecules cannot be found in either one of these representations, with the longer single bonds in one location and the shorter double bond in another (See Theory below). Rather, the molecule exhibits bond lengths in between those of single and double bonds. This commonly seen model of aromatic rings, namely the idea that benzene was formed from a six-membered carbon ring with alternating single and double bonds (cyclohexatriene), was developed by August Kekulé (see History section below). The model for benzene consists of two resonance forms, which corresponds to the double and single bonds superimposing to produce six one-and-a-half bonds. Benzene is a more stable molecule than would be expected without accounting for charge delocalization. Principal uses Because of the unique shapes, reactivities, properties, and bioactivities that they engender, cyclic compounds are the largest majority of all molecules involved in the biochemistry, structure, and function of living organisms, and in the man-made molecules (e.g., drugs, herbicides, etc.) through which man attempts to exert control over nature and biological systems. Synthetic reactions Important general reactions for forming rings There are a variety of specialized reactions whose use is solely the formation of rings, and these will be discussed below. In addition to those, there are a wide variety of general organic reactions that historically have been crucial in the development, first, of understanding the concepts of ring chemistry, and second, of reliable procedures for preparing ring structures in high yield, and with defined orientation of ring substituents (i.e., defined stereochemistry). These general reactions include: Acyloin condensation; Anodic oxidations; and the Dieckmann condensation as applied to ring formation. 
Ring-closing reactions In organic chemistry, a variety of synthetic procedures are particularly useful in closing carbocyclic and other rings; these are termed ring-closing reactions. Examples include: alkyne trimerisation; the Bergman cyclization of an enediyne; the Diels–Alder, between a conjugated diene and a substituted alkene, and other cycloaddition reactions; the Nazarov cyclization reaction, originally being the cyclization of a divinyl ketone; various radical cyclizations; ring-closing metathesis reactions, which also can be used to accomplish a specific type of polymerization; the Ruzicka large ring synthesis, in which two carboxyl groups combine to form a carbonyl group with loss of and ; the Wenker synthesis converting a beta amino alcohol to an aziridine Ring-opening reactions A variety of further synthetic procedures are particularly useful in opening carbocyclic and other rings, generally which contain a double bound or other functional group "handle" to facilitate chemistry; these are termed ring-opening reactions. Examples include: ring opening metathesis, which can also be used to accomplish a specific type of polymerization. Ring expansion and ring contraction reactions Ring expansion and contraction reactions are common in organic synthesis, and are frequently encountered in pericyclic reactions. Ring expansions and contractions can involve the insertion of a functional group such as the case with Baeyer–Villiger oxidation of cyclic ketones, rearrangements of cyclic carbocycles as seen in intramolecular Diels-Alder reactions, or collapse or rearrangement of bicyclic compounds as several examples. Examples Simple, mono-cyclic examples The following are examples of simple and aromatic carbocycles, inorganic cyclic compounds, and heterocycles: Complex and polycyclic examples The following are examples of cyclic compounds exhibiting more complex ring systems and stereochemical features: See also Effective molarity Lactone Open-chain compound References Further reading Jürgen-Hinrich Fuhrhop & Gustav Penzlin, 1986, "Organic synthesis: concepts, methods, starting materials," Weinheim, BW, DEU:VCH, , see , accessed 19 June 2015. Michael B. Smith & Jerry March, 2007, "March's Advanced Organic Chemistry: Reactions, Mechanisms, and Structure," 6th Ed., New York, NY, USA:Wiley & Sons, , see , accessed 19 June 2015. Francis A. Carey & Richard J. Sundberg, 2006, "Title Advanced Organic Chemistry: Part A: Structure and Mechanisms," 4th Edn., New York, NY, USA:Springer Science & Business Media, , see , accessed 19 June 2015. Michael B. Smith, 2011, "Organic Chemistry: An Acid—Base Approach," Boca Raton, FL, USA:CRC Press, , see , accessed 19 June 2015. [May not be most necessary material for this article, but significant content here is available online.] Jonathan Clayden, Nick Greeves & Stuart Warren, 2012, "Organic Chemistry," Oxford, Oxon, GBR:Oxford University Press, , see , accessed 19 June 2015. László Kürti & Barbara Czakó, 2005, "Strategic Applications of Named Reactions in Organic Synthesis: Background and Detailed Mechanisms, Amsterdam, NH, NLD:Elsevier Academic Press, 2005ISBN 0124297854, see , accessed 19 June 2015. External links Molecular geometry
Cyclic compound
[ "Physics", "Chemistry" ]
3,635
[ "Molecular geometry", "Molecules", "Stereochemistry", "Matter" ]
3,949,132
https://en.wikipedia.org/wiki/Polycyclic%20compound
In the field of organic chemistry, a polycyclic compound is an organic compound featuring several closed rings of atoms, primarily carbon. These ring substructures include cycloalkanes, aromatics, and other ring types. They come in sizes of three atoms and upward, and in combinations of linkages that include tethering (such as in biaryls), fusing (edge-to-edge, such as in anthracene and steroids), links via a single atom (such as in spiro compounds), and bridged compounds (such as in longifolene). Though poly- literally means "many", there is some latitude in determining how many rings are required to be considered polycyclic; many smaller rings are described by specific prefixes (e.g., bicyclic, tricyclic, tetracyclic, etc.), and so while it can refer to these, the title term is used with most specificity when these alternative names and prefixes are unavailable. In general, the term polycyclic includes polycyclic aromatic compounds, including polycyclic aromatic hydrocarbons, as well as heterocyclic aromatic compounds with multiple rings (where heteroaromatic compounds are aromatic compounds that contain sulfur, nitrogen, oxygen, or other non-carbon atoms in their rings in addition to carbon). An example of a polycyclic compound based on a nitrogen cage is hexanitrohexaazaisowurtzitane. Naming There is a scheme for naming polycyclic compounds using square brackets [] and numbers. See also Polycyclic aromatic compound Polycyclic aromatic hydrocarbon References
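As a worked illustration of the bracket scheme mentioned above (a standard example from von Baeyer nomenclature rather than from this text): norbornane is named bicyclo[2.2.1]heptane because its bicyclic framework contains seven ring atoms in total (hence "heptane"), and the three bridges joining the two bridgehead carbons contain 2, 2 and 1 atoms respectively, listed in decreasing order inside the square brackets.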
Polycyclic compound
[ "Chemistry" ]
359
[ "Organic compounds", "Polycyclic organic compounds" ]
3,949,150
https://en.wikipedia.org/wiki/Isodesmic%20reaction
An isodesmic reaction is a chemical reaction in which the types of chemical bonds broken in the reactants are the same as the types of bonds formed in the products. This type of reaction is often used as a hypothetical reaction in thermochemistry. An example of an isodesmic reaction is CH3− + CH3X → CH4 + CH2X− (1) X = F, Cl, Br, I Equation 1 describes the deprotonation of a methyl halide by a methyl anion. The energy change associated with this exothermic reaction, which can be calculated in silico, increases going from fluorine to chlorine to bromine to iodine, making the CH2I− anion the most stable and least basic of the four halomethyl anions. Although this reaction is isodesmic, the energy change in this example also depends on the difference in bond energy of the C-X bond in the base and in the conjugate acid. In other cases, the difference may be due to steric strain. This difference is small in fluorine but large in iodine (in favor of the anion), and therefore the energy trend is as described despite the fact that C-F bonds are stronger than C-I bonds. The related term homodesmotic reaction also takes orbital hybridization into account; in addition, there is no change in the number of carbon-to-hydrogen bonds. References Thermochemistry Computational chemistry
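A quick bond inventory (a simple worked check based on the species exactly as written in equation 1) shows why the reaction is isodesmic: on the reactant side, CH3− contributes 3 C-H bonds and CH3X contributes 3 C-H bonds plus 1 C-X bond, for a total of 6 C-H and 1 C-X; on the product side, CH4 contributes 4 C-H bonds and CH2X− contributes 2 C-H bonds plus 1 C-X bond, again 6 C-H and 1 C-X. Because every bond type appears the same number of times on both sides, the computed energy change reflects subtler effects, such as how the C-X bond strength differs between the neutral halide and the anion, rather than a change in the number of bonds broken and formed.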
Isodesmic reaction
[ "Chemistry" ]
299
[ "Theoretical chemistry", "Computational chemistry", "Thermochemistry" ]
3,949,305
https://en.wikipedia.org/wiki/Tone%20hole
A tone hole is an opening in the body of a wind instrument which, when alternately closed and opened, changes the pitch of the sound produced. Tone holes may serve specific purposes, such as a trill hole or register hole. A tone hole is, "in wind instruments[,] a hole that may be stopped by the finger, or a key, to change the pitch of the tone produced." The resonant frequencies of the air column in a pipe are inversely proportional to the pipe's effective length. In other words, a shorter pipe produces higher notes. For a pipe with no tone holes but open at both ends, the effective length is the physical length of the pipe plus a little more for the small volumes of air just beyond the ends of the pipe that are also involved in the resonance. An open hole anywhere along the middle of the pipe shortens the pipe's effective length and therefore raises the pitch of the notes it produces. The closer an open hole is to the blowing end, the shorter the remaining effective length is and the more it raises the pitch. Generally, a hole in a given position doesn't reduce the effective length quite as much as cutting the pipe at that position would, and the smaller the hole, the less it reduces the effective length when open. Closing the hole increases the effective length and lowers the pitch again. However, a pipe with a closed tone hole is not acoustically identical to a pipe with no hole; the shape of the fingertip or pad that closes the hole modifies the pipe's internal volume and effective length. When there are multiple tone holes, the first (closest to the blowing end) open tone hole usually has the largest influence on the pipe's effective length. However, closing holes below the first open hole without closing the first hole can also lower the pitch significantly; such cross fingerings may often be useful. Generally, the pitch and timbre of the notes produced will depend on the positions, sizes, heights, and shapes of all the tone holes, both open and closed. Theoretical models allow these effects to be calculated with some accuracy, but the design of tone holes remains to some degree a matter of trial and error. Most woodwind instruments rely on tone holes to produce different pitches. Two common exceptions are the slide whistle and the overtone flute. Most brass instruments use valves or a slide instead of tone holes, with the cornett, the ophicleide, the keyed trumpet, and the rare keyed bugle as exceptions. Modern reproductions of the natural trumpet, called baroque trumpets, are fitted with tone holes (called vent holes) to correct the out-of-tune notes (written) B♭4, F5, A5, and B♭5. See also Saxophone tone hole Organ pipe References Acoustics Woodwind instrument parts and accessories Musical tuning Holes
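As a rough quantitative sketch of the relationship described above (an idealized pipe open at both ends, with a speed of sound of 343 m/s, ignoring end corrections and the detailed geometry of the holes), the fundamental resonance is f = v/(2·L_eff), so shortening the effective length by opening a tone hole raises the pitch:

```python
# Idealized sketch: fundamental of an open-open pipe, f = v / (2 * L_eff).
# Opening a tone hole effectively shortens L_eff and therefore raises the pitch.
# The lengths below are illustrative only; real instruments need end and
# tone-hole corrections.
v = 343.0  # approximate speed of sound in air at room temperature, m/s

def fundamental_hz(effective_length_m):
    return v / (2.0 * effective_length_m)

for length in (0.60, 0.50, 0.40):   # progressively shorter effective lengths
    print(f"effective length {length:.2f} m -> fundamental {fundamental_hz(length):.1f} Hz")
```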
Tone hole
[ "Physics" ]
580
[ "Classical mechanics", "Acoustics" ]
3,950,489
https://en.wikipedia.org/wiki/Diode%20modelling
In electronics, diode modelling refers to the mathematical models used to approximate the actual behaviour of real diodes to enable calculations and circuit analysis. A diode's I-V curve is nonlinear. A very accurate, but complicated, physical model composes the I-V curve from three exponentials with a slightly different steepness (i.e. ideality factor), which correspond to different recombination mechanisms in the device; at very large and very tiny currents the curve can be continued by linear segments (i.e. resistive behaviour). In a relatively good approximation a diode is modelled by the single-exponential Shockley diode law. This nonlinearity still complicates calculations in circuits involving diodes so even simpler models are often used. This article discusses the modelling of p-n junction diodes, but the techniques may be generalized to other solid state diodes. Large-signal modelling Shockley diode model The Shockley diode equation relates the diode current I of a p-n junction diode to the diode voltage V_D. This relationship is the diode I-V characteristic: I = I_S (e^(V_D/(n·V_T)) − 1), where I_S is the saturation current or scale current of the diode (the magnitude of the current that flows for negative V_D in excess of a few V_T, typically 10^−12 A). The scale current is proportional to the cross-sectional area of the diode. Continuing with the symbols: V_T is the thermal voltage (kT/q, about 26 mV at normal temperatures), and n is known as the diode ideality factor (for silicon diodes n is approximately 1 to 2). When V_D ≫ n·V_T, the formula can be simplified to: I ≈ I_S·e^(V_D/(n·V_T)). This expression is, however, only an approximation of a more complex I-V characteristic. Its applicability is particularly limited in case of ultra-shallow junctions, for which better analytical models exist. Diode-resistor circuit example To illustrate the complications in using this law, consider the problem of finding the voltage across the diode in Figure 1. Because the current flowing through the diode is the same as the current throughout the entire circuit, we can lay down another equation. By Kirchhoff's laws, the current flowing in the circuit is I = (V_S − V_D)/R. These two equations determine the diode current and the diode voltage. To solve these two equations, we could substitute the current from the second equation into the first equation, and then try to rearrange the resulting equation to get V_D in terms of V_S. A difficulty with this method is that the diode law is nonlinear. Nonetheless, a formula expressing I directly in terms of V_S without involving V_D can be obtained using the Lambert W-function, which is the inverse function of f(w) = w·e^w, that is, W(w·e^w) = w. This solution is discussed next. Explicit solution An explicit expression for the diode current can be obtained in terms of the Lambert W-function (also called the Omega function). A guide to these manipulations follows. A new variable is introduced as . Following the substitutions : and : rearrangement of the diode law in terms of w becomes: , which using the Lambert W-function becomes . The final explicit solution being . With the approximations (valid for the most common values of the parameters) and , this solution becomes . Once the current is determined, the diode voltage can be found using either of the other equations. For large x, W(x) can be approximated by ln x − ln(ln x). For common physical parameters and resistances, the argument of the Lambert W-function will be on the order of 10^40. Iterative solution The diode voltage V_D can be found in terms of V_S for any particular set of values by an iterative method using a calculator or computer. The diode law is rearranged by dividing by I_S, and adding 1. 
The diode law becomes . By taking natural logarithms of both sides the exponential is removed, and the equation becomes . For any , this equation determines . However, also must satisfy the Kirchhoff's law equation, given above. This expression is substituted for to obtain , or . The voltage of the source is a known given value, but is on both sides of the equation, which forces an iterative solution: a starting value for is guessed and put into the right side of the equation. Carrying out the various operations on the right side, we come up with a new value for . This new value now is substituted on the right side, and so forth. If this iteration converges the values of become closer and closer together as the process continues, and we can stop iteration when the accuracy is sufficient. Once is found, can be found from the Kirchhoff's law equation. Sometimes an iterative procedure depends critically on the first guess. In this example, almost any first guess will do, say . Sometimes an iterative procedure does not converge at all: in this problem an iteration based on the exponential function does not converge, and that is why the equations were rearranged to use a logarithm. Finding a convergent iterative formulation is an art, and every problem is different. Graphical solution Graphical analysis is a simple way to derive a numerical solution to the transcendental equations describing the diode. As with most graphical methods, it has the advantage of easy visualization. By plotting the I-V curves, it is possible to obtain an approximate solution to any arbitrary degree of accuracy. This process is the graphical equivalent of the two previous approaches, which are more amenable to computer implementation. This method plots the two current-voltage equations on a graph and the point of intersection of the two curves satisfies both equations, giving the value of the current flowing through the circuit and the voltage across the diode. The figure illustrates such method. Piecewise linear model In practice, the graphical method is complicated and impractical for complex circuits. Another method of modelling a diode is called piecewise linear (PWL) modelling. In mathematics, this means taking a function and breaking it down into several linear segments. This method is used to approximate the diode characteristic curve as a series of linear segments. The real diode is modelled as 3 components in series: an ideal diode, a voltage source and a resistor. The figure shows a real diode I-V curve being approximated by a two-segment piecewise linear model. Typically the sloped line segment would be chosen tangent to the diode curve at the Q-point. Then the slope of this line is given by the reciprocal of the small-signal resistance of the diode at the Q-point. Mathematically idealized diode Firstly, consider a mathematically idealized diode. In such an ideal diode, if the diode is reverse biased, the current flowing through it is zero. This ideal diode starts conducting at 0 V and for any positive voltage an infinite current flows and the diode acts like a short circuit. The I-V characteristics of an ideal diode are shown below: Ideal diode in series with voltage source Now consider the case when we add a voltage source in series with the diode in the form shown below: When forward biased, the ideal diode is simply a short circuit and when reverse biased, an open circuit. 
If the anode of the diode is connected to 0V, the voltage at the cathode will be at Vt and so the potential at the cathode will be greater than the potential at the anode and the diode will be reverse biased. In order to get the diode to conduct, the voltage at the anode will need to be taken to Vt. This circuit approximates the cut-in voltage present in real diodes. The combined I-V characteristic of this circuit is shown below: The Shockley diode model can be used to predict the approximate value of . Using and : Typical values of the saturation current at room temperature are: for silicon diodes; for germanium diodes. As the variation of goes with the logarithm of the ratio , its value varies very little for a big variation of the ratio. The use of base 10 logarithms makes it easier to think in orders of magnitude. For a current of 1.0mA: for silicon diodes (9 orders of magnitude); for germanium diodes (3 orders of magnitude). For a current of 100mA: for silicon diodes (11 orders of magnitude); for germanium diodes (5 orders of magnitude). Values of 0.6 or 0.7 volts are commonly used for silicon diodes. Diode with voltage source and current-limiting resistor The last thing needed is a resistor to limit the current, as shown below: The I-V characteristic of the final circuit looks like this: The real diode now can be replaced with the combined ideal diode, voltage source and resistor and the circuit then is modelled using just linear elements. If the sloped-line segment is tangent to the real diode curve at the Q-point, this approximate circuit has the same small-signal circuit at the Q-point as the real diode. Dual PWL-diodes or 3-Line PWL model When more accuracy is desired in modelling the diode's turn-on characteristic, the model can be enhanced by doubling-up the standard PWL-model. This model uses two piecewise-linear diodes in parallel, as a way to model a single diode more accurately. Small-signal modelling Resistance Using the Shockley equation, the small-signal diode resistance of the diode can be derived about some operating point (Q-point) where the DC bias current is and the Q-point applied voltage is . To begin, the diode small-signal conductance is found, that is, the change in current in the diode caused by a small change in voltage across the diode, divided by this voltage change, namely: . The latter approximation assumes that the bias current is large enough so that the factor of 1 in the parentheses of the Shockley diode equation can be ignored. This approximation is accurate even at rather small voltages, because the thermal voltage at 300K, so tends to be large, meaning that the exponential is very large. Noting that the small-signal resistance is the reciprocal of the small-signal conductance just found, the diode resistance is independent of the ac current, but depends on the dc current, and is given as . Capacitance The charge in the diode carrying current is known to be , where is the forward transit time of charge carriers: The first term in the charge is the charge in transit across the diode when the current flows. The second term is the charge stored in the junction itself when it is viewed as a simple capacitor; that is, as a pair of electrodes with opposite charges on them. It is the charge stored on the diode by virtue of simply having a voltage across it, regardless of any current it conducts. 
In a similar fashion as before, the diode capacitance is the change in diode charge with diode voltage: , where is the junction capacitance and the first term is called the diffusion capacitance, because it is related to the current diffusing through the junction. Variation of forward voltage with temperature The Shockley diode equation has an exponential of , which would lead one to expect that the forward-voltage increases with temperature. In fact, this is generally not the case: as temperature rises, the saturation current rises, and this effect dominates. So as the diode becomes hotter, the forward-voltage (for a given current) decreases. Here is some detailed experimental data, which shows this for a 1N4005 silicon diode. In fact, some silicon diodes are used as temperature sensors; for example, the CY7 series from OMEGA has a forward voltage of 1.02V in liquid nitrogen (77K), 0.54V at room temperature, and 0.29V at 100 °C. In addition, there is a small change of the material parameter bandgap with temperature. For LEDs, this bandgap change also shifts their colour: they move towards the blue end of the spectrum when cooled. Since the diode forward-voltage drops as its temperature rises, this can lead to thermal runaway due to current hogging when paralleled in bipolar-transistor circuits (since the base-emitter junction of a BJT acts as a diode), where a reduction in the base-emitter forward voltage leads to an increase in collector power-dissipation, which in turn reduces the required base-emitter forward voltage even further. See also Bipolar junction transistor Semiconductor device modelling References Electronic device modeling
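The iterative and explicit solutions discussed above can be sketched numerically. The component values below (a 5 V source, a 1 kΩ resistor, I_S = 10^−12 A, n = 1 and V_T ≈ 25.85 mV) are purely illustrative assumptions, not values taken from the text; the same operating point is obtained from a fixed-point iteration on the logarithmic form of the diode law and from a Lambert-W expression for the series diode-resistor circuit.

```python
# Sketch with assumed, illustrative values: solve the series circuit
# Vs = I*R + VD together with the Shockley law I = Is*(exp(VD/(n*VT)) - 1).
import numpy as np
from scipy.special import lambertw

Vs, R = 5.0, 1000.0               # source voltage (V) and series resistance (ohm)
Is, n, VT = 1e-12, 1.0, 0.02585   # saturation current, ideality factor, thermal voltage

# Iterative solution: VD <- n*VT * ln(1 + (Vs - VD)/(R*Is)); the logarithm
# damps the update, so the iteration converges quickly.
VD = 0.6
for _ in range(50):
    VD = n * VT * np.log1p((Vs - VD) / (R * Is))
I_iter = (Vs - VD) / R

# Explicit solution via the Lambert W function for this series circuit.
w = lambertw(Is * R / (n * VT) * np.exp((Vs + Is * R) / (n * VT))).real
I_lambert = n * VT / R * w - Is

r_d = n * VT / I_iter            # small-signal resistance at this operating point
print(f"VD ≈ {VD:.4f} V, I ≈ {1e3 * I_iter:.4f} mA (iteration)")
print(f"I ≈ {1e3 * I_lambert:.4f} mA (Lambert W), r_d ≈ {r_d:.1f} Ω")
```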
Diode modelling
[ "Physics" ]
2,589
[ "Electronic device modeling" ]
23,133,830
https://en.wikipedia.org/wiki/Recuperative%20multi-tube%20cooler
A recuperative multi-tube cooler is a rotary drum cooler used for continuous processes in chemical engineering. Construction Recuperative multi-tube coolers consist essentially of a rotating rotor, which is usually driven via a chain. At the ends of the rotor are stationary housings for product feed and discharge. The rotor is supported on running treads, as is typical for rotary drums. The interior of the rotor consists of several tubes in a revolver-type (or planetary) arrangement. The tubes are completely surrounded by a jacket. According to requirements, recuperative multi-tube coolers are built with diameters between 1.0 and 4.0 m and lengths from 10 to 40 m. Function Recuperative multi-tube coolers work with indirect air cooling. That means that there is no direct contact between the product to be cooled and the cooling air. The heat is exchanged indirectly via thermal conduction. Ambient air is used as cooling air, which is drawn between the jacket and the tubes. Product and cooling air pass through the cooler in counterflow. The product to be cooled falls directly into the product feed housing. By the rotary movement and a slight slope of the rotor, the product is conveyed through the cooler. The rotation causes continuous mixing of the product in the tubes and hence good heat transfer. Due to the indirect method of operation, the coolers provide hot, clean air whose energy can be reused. The term recuperative derives from this opportunity to recover energy. Applications The coolers can be used for the cooling of free-flowing, fine-grained bulk material. They are especially used when consumers of the recovered hot air are nearby. Usually this is the case in calcination processes downstream of hot-gas-fired rotary kilns or similar equipment. The hot air is used as a preheated supply of combustion air in the kilns. The consumption of primary energy can be reduced significantly. The coolers are mostly used in the pigment industry, e.g. for cooling of titanium dioxide pigments after calcination. The entry temperatures of the products can reach up to 1000 °C. External links Pictures and explanations Heat exchangers Cooling technology
Recuperative multi-tube cooler
[ "Chemistry", "Engineering" ]
441
[ "Chemical equipment", "Heat exchangers" ]
23,136,449
https://en.wikipedia.org/wiki/Indirect%20land%20use%20change%20impacts%20of%20biofuels
The indirect land use change impacts of biofuels, also known as ILUC or iLUC (pronounced as i-luck), relates to the unintended consequence of releasing more carbon emissions due to land-use changes around the world induced by the expansion of croplands for ethanol or biodiesel production in response to the increased global demand for biofuels. As farmers worldwide respond to higher crop prices in order to maintain the global food supply-and-demand balance, pristine lands are cleared to replace the food crops that were diverted elsewhere to biofuels' production. Because natural lands, such as rainforests and grasslands, store carbon in their soil and biomass as plants grow each year, clearance of wilderness for new farms translates to a net increase in greenhouse gas emissions. Due to this off-site change in the carbon stock of the soil and the biomass, indirect land use change has consequences in the greenhouse gas (GHG) balance of a biofuel. Other authors have also argued that indirect land use changes produce other significant social and environmental impacts, affecting biodiversity, water quality, food prices and supply, land tenure, worker migration, and community and cultural stability. History The estimates of carbon intensity for a given biofuel depend on the assumptions regarding several variables. As of 2008, multiple full life cycle studies had found that corn ethanol, cellulosic ethanol and Brazilian sugarcane ethanol produce lower greenhouse gas emissions than gasoline. None of these studies, however, considered the effects of indirect land-use changes, and though land use impacts were acknowledged, estimation was considered too complex and difficult to model. A controversial paper published in February 2008 in Sciencexpress by a team led by Searchinger from Princeton University concluded that such effects offset the (positive) direct effects of both corn and cellulosic ethanol and that Brazilian sugarcane performed better, but still resulted in a small carbon debt. After the Searchinger team paper, estimation of carbon emissions from ILUC, together with the food vs. fuel debate, became one of the most contentious issues relating to biofuels, debated in the popular media, scientific journals, op-eds and public letters from the scientific community, and the ethanol industry, both American and Brazilian. This controversy intensified in April 2009 when the California Air Resources Board (CARB) set rules that included ILUC impacts to establish the California Low-Carbon Fuel Standard that entered into force in 2011. In May 2009 U.S. Environmental Protection Agency (EPA) released a notice of proposed rulemaking for implementation of the 2007 modification of the Renewable Fuel Standard (RFS). EPA's proposed regulations also included ILUC, causing additional controversy among ethanol producers. EPA's February 3, 2010 final rule incorporated ILUC based on modelling that was significantly improved over the initial estimates. The UK Renewable Transport Fuel Obligation program requires the Renewable Fuels Agency (RFA) to report potential indirect impacts of biofuel production, including indirect land use change or changes to food and other commodity prices. 
A July 2008 RFA study, known as the Gallagher Review, found several risks and uncertainties, noted that the "quantification of GHG emissions from indirect land-use change requires subjective assumptions and contains considerable uncertainty", and called for further examination to properly incorporate indirect effects into calculation methodologies. A similarly cautious approach was followed by the European Union. In December 2008 the European Parliament adopted more stringent sustainability criteria for biofuels and directed the European Commission to develop a methodology to factor in GHG emissions from indirect land use change. Studies and controversy Before 2008, several full life cycle ("Well to Wheels" or WTW) studies had found that corn ethanol reduced transport-related greenhouse gas emissions. In 2007 a University of California, Berkeley team led by Farrell evaluated six previous studies, concluding that corn ethanol reduced GHG emissions by only 13 percent. However, a 20 to 30 percent reduction for corn ethanol, and about 85 percent for cellulosic ethanol, both figures estimated by Wang from Argonne National Laboratory, are more commonly cited. Wang reviewed 22 studies conducted between 1979 and 2005, and ran simulations with Argonne's GREET model. These studies accounted for direct land use changes. Several studies of Brazilian sugarcane ethanol showed that sugarcane as feedstock reduces GHG by 86 to 90 percent given no significant land use change. Estimates of carbon intensity depend on crop productivity, agricultural practices, power sources for ethanol distilleries and the energy efficiency of the distillery. None of these studies considered ILUC, due to estimation difficulties. Preliminary estimates by Delucchi from the University of California, Davis, suggested that carbon released by new lands converted to agricultural use was a large percentage of life-cycle emissions. Searchinger and Fargione studies In 2008 Timothy Searchinger, a lawyer from Environmental Defense Fund, concluded that ILUC affects the life cycle assessment and that instead of saving, both corn and cellulosic ethanol increased carbon emissions as compared to gasoline by 93 and 50 percent respectively. Ethanol from Brazilian sugarcane performed better, recovering initial carbon emissions in 4 years, while U.S. corn ethanol required 167 years and cellulosic ethanol required a 52-year payback period. The study limited the analysis to a 30-year period, assuming that land conversion emits 25 percent of the carbon stored in soils and all carbon in plants cleared for cultivation. Brazil, China, and India were considered among the overseas locations where land use change would occur as a result of diverting U.S. corn cropland, and it was assumed that new cropland in each of these regions corresponds to different types of forest, savanna or grassland based on the historical proportion of each converted to cultivation in these countries during the 1990s. Fargione and his team published a separate paper in the same issue of Science claiming that clearing lands to produce biofuel feedstock created a carbon debt. This debt applies to both direct and indirect land use changes. The study examined six conversion scenarios: Brazilian Amazon to soybean biodiesel, Brazilian Cerrado to soybean biodiesel, Brazilian Cerrado to sugarcane ethanol, Indonesian or Malaysian lowland tropical rainforest to palm biodiesel, Indonesian or Malaysian peatland tropical rainforest to palm biodiesel, and U.S. Central grassland to corn ethanol. 
The carbon debt was defined as the amount of CO2 released during the first 50 years of this process of land conversion. For the two most common ethanol feedstocks, the study found that sugarcane ethanol produced on natural cerrado lands would take about 17 years to repay its carbon debt, while corn ethanol produced on U.S. central grasslands would result in a repayment time of about 93 years. The worst-case scenario is converting Indonesian or Malaysian tropical peatland rainforest to palm biodiesel production, which would require about 420 years to repay. Criticism and controversy The Searchinger and Fargione studies created controversy in both the popular media and in scientific journals. Robert Zubrin observed that Searchinger's "indirect analysis" approach is pseudo-scientific and can be used to "prove anything". Wang and Haq from Argonne National Laboratory claimed: the assumptions were outdated; they ignored the potential of increased efficiency, and no evidence showed that "U.S. corn ethanol production has so far caused indirect land use in other countries." They concluded that Searchinger demonstrated that ILUC "is much more difficult to model than direct land use changes". In his response, Searchinger rebutted each technical objection and asserted that "... any calculation that ignores these emissions, however challenging it is to predict them with certainty, is too incomplete to provide a basis for policy decisions." Another criticism, by Kline and Dale from Oak Ridge National Laboratory, held that Searchinger et al. and Fargione et al. "... do not provide adequate support for their claim that biofuels cause high emissions due to land-use change", as their conclusions depend on a misleading assumption because more comprehensive field research found that these land use changes "... are driven by interactions among cultural, technological, biophysical, economic, and demographic forces within a spatial and temporal context rather than by a single crop market". Fargione et al. responded in part that although many factors contributed to land clearing, this "observation does not diminish the fact that biofuels also contribute to land clearing if they are produced on existing cropland or on newly cleared lands". Searchinger disagreed with all of Kline and Dale's arguments. The U.S. biofuel industry also reacted, claiming that the "Searchinger study is clearly a 'worst case scenario' analysis ..." and that this study "relies on a long series of highly subjective assumptions ..." Searchinger rebutted each claim, concluding that NFA's criticisms were invalid. He noted that even if some of his assumptions are high estimates, the study also made many conservative assumptions. Brazil In February 2010, Lapola estimated that the planned expansion of Brazilian sugarcane and soybean biofuel plantations through 2020 would replace rangeland with a small direct land-use impact on carbon emissions. However, the expansion of the rangeland frontier into Amazonian forests, driven by cattle ranching, would indirectly offset the savings. "Sugarcane ethanol and soybean biodiesel each contributes to nearly half of the projected indirect deforestation of 121,970 km2 by 2020, creating a carbon debt that would take about 250 years to be repaid..." The research also found that oil palm would cause the least land-use changes and associated carbon debt. 
The analysis also modeled livestock density increases and found that "a higher increase of 0.13 head per hectare in the average livestock density throughout the country could avoid the indirect land-use changes caused by biofuels (even with soybean as the biodiesel feedstock), while still fulfilling all food and bioenergy demands." The authors conclude that intensification of cattle ranching and concentration on oil palm are required to achieve effective carbon savings, recommending closer collaboration between the biofuel and cattle-ranching sectors. The main Brazilian ethanol industry organization (UNICA) commented that such studies missed the continuing intensification of cattle production already underway. A study by Arima et al. published in May 2011, used spatial regression modeling to provide the first statistical assessment of ILUC for the Brazilian Amazon due to soy production. Previously, the indirect impacts of soy crops were only anecdotal or analyzed through demand models at a global scale, while the study took a regional approach. The analysis showed a strong signal linking the expansion of soybean fields in settled agricultural areas at the southern and eastern rims of the Amazon basin to pasture encroachments for cattle production on the forest frontier. The results demonstrate the need to include ILUC in measuring the carbon footprint of soy crops, whether produced for biofuels or other end-uses. The Arima study is based on 761 municipalities located in the Legal Amazon of Brazil and found that between 2003 and 2008, soybean areas expanded by 39,100 km2 in the basin's agricultural areas, mainly in Mato Grosso. The model showed that a 10% (3,910 km2) reduction of soy in old pasture areas would have led to a reduction in deforestation of up to 40% (26,039 km2) in heavily forested municipalities of the Brazilian Amazon. The analysis showed that the displacement of cattle production due to agricultural expansion drives land use change in municipalities located hundreds of kilometers away. The Amazonian ILUC is not only measurable, but its impact is significant. Implementation United States California LCFS On April 23, 2009, California Air Resources Board (CARB) approved the specific rules and carbon intensity reference values for the California Low-Carbon Fuel Standard (LCFS) that take effect January 1, 2011. CARB's rulemaking included ILUC. For some biofuels, CARB identified land use changes as a significant source of additional GHG emissions. It established one standard for gasoline and alternative fuels, and a second for diesel fuel and its replacements. Controversy The public consultation process before the ruling, and the ruling itself were controversial, yielding 229 comments. ILUC was one of the most contentious issues. On June 24, 2008, 27 scientists and researchers submitted a letter saying, "As researchers and scientists in the field of biomass to biofuel conversion, we are convinced that there simply is not enough hard empirical data to base any sound policy regulation in regards to the indirect impacts of renewable biofuels production. The field is relative new, especially when compared to the vast knowledge base present in fossil fuel production, and the limited analyses are driven by assumptions that sometimes lack robust empirical validation." 
The New Fuels Alliance, representing more than two dozen biofuel companies, researchers and investors, questioned the Board's intention to take indirect land use change effects into account, writing "While it is likely true that zero is not the right number for the indirect effects of any product in the real world, enforcing indirect effects in a piecemeal way could have very serious consequences for the LCFS.... The argument that zero is not the right number does not justify enforcing a different wrong number, or penalizing one fuel for one category of indirect effects while giving another fuel pathway a free pass." On the other hand, more than 170 scientists and economists urged that CARB "include indirect land use change in the lifecycle analyses of heat-trapping emissions from biofuels and other transportation fuels. This policy will encourage development of sustainable, low-carbon fuels that avoid conflict with food and minimize harmful environmental impacts.... There are uncertainties inherent in estimating the magnitude of indirect land use emissions from biofuels, but assigning a value of zero is clearly not supported by the science." Industry representatives complained that the final rule overstated the environmental effects of corn ethanol and criticized the inclusion of ILUC as an unfair penalty on domestic corn ethanol because it tied deforestation in the developing world to U.S. ethanol production. The 2011 LCFS limit meant that Midwest corn ethanol would fail to qualify unless its carbon intensity was reduced. Oil industry representatives complained that the standard left oil refiners with few options, such as Brazilian sugarcane ethanol, with its accompanying tariff. CARB officials and environmentalists countered that time and economic incentives would allow producers to adapt. UNICA welcomed the ruling, while urging CARB to reflect Brazilian practices better, lowering their estimates of Brazilian emissions. The only Board member who voted against the ruling explained that he had a "hard time accepting the fact that we're going to ignore the comments of 125 scientists", referring to the letter submitted by a group of scientists questioning the ILUC penalty. "They said the model was not good enough ... to use at this time as a component part of such an historic new standard." CARB advanced the expected date for an expert working group to report on ILUC with refined estimates from January 2012 to January 2011. In December 2009, the Renewable Fuels Association (RFA) and Growth Energy, two U.S. ethanol lobbying groups, filed a lawsuit challenging LCFS' constitutionality. The two organizations argued that LCFS violated both the Supremacy Clause and the Commerce Clause, jeopardizing the nationwide ethanol market. EPA Renewable Fuel Standard The Energy Independence and Security Act of 2007 (EISA) established new renewable fuel categories and eligibility requirements, setting mandatory lifecycle emissions limits. EISA explicitly mandated EPA to include "direct emissions and significant indirect emissions such as significant emissions from land use changes." EISA required a 20% reduction in lifecycle GHG emissions for any fuel produced at facilities that commenced construction after December 19, 2007, to be classified as a "renewable fuel"; a 50% reduction for fuels to be classified as "biomass-based diesel" or "advanced biofuel", and a 60% reduction to be classified as "cellulosic biofuel". 
EISA provided limited flexibility to adjust these thresholds downward by up to 10 percent, and EPA proposed this adjustment for the advanced biofuels category. Existing plants were grandfathered in. On May 5, 2009, EPA released a notice of proposed rulemaking for implementation of the 2007 modification of the Renewable Fuel Standard, known as RFS2. The draft of the regulations was released for public comment during a 60-day period, a public hearing was held on June 9, 2009, and a workshop was conducted on June 10–11, 2009. EPA's draft analysis stated that ILUC could produce significant near-term GHG emissions due to land conversion but that biofuels can pay these back over subsequent years. EPA highlighted two scenarios, varying the time horizon and the discount rate for valuing emissions. The first assumed a 30-year period and a 0 percent discount rate (valuing emissions equally regardless of timing). The second scenario used a 100-year period and a 2% discount rate. On the same day that EPA published its notice of proposed rulemaking, President Obama signed a Presidential Directive seeking to advance biofuels research and commercialization. The Directive established the Biofuels Interagency Working Group to develop policy ideas for increasing investment in next-generation fuels and for reducing their environmental footprint. The inclusion of ILUC in the proposed ruling provoked complaints from ethanol and biodiesel producers. Several environmental organizations welcomed the inclusion of ILUC but criticized the consideration of a 100-year payback scenario, arguing that it underestimated land conversion effects. American corn growers, biodiesel producers, ethanol producers and Brazilian sugarcane ethanol producers complained about EPA's methodology, while the oil industry requested an implementation delay. On June 26, 2009, the House of Representatives approved the American Clean Energy and Security Act by 219 to 212, mandating EPA to exclude ILUC from RFS2 calculations for a 5-year period. During this period, more research was to be conducted to develop more reliable models and methodologies for estimating ILUC, and Congress would review the issue before allowing EPA to rule on the matter. The bill failed in the U.S. Senate. On February 3, 2010, EPA issued its final RFS2 rule for 2010 and beyond. The rule incorporated direct and significant indirect emissions including ILUC. EPA incorporated comments and data from new studies. Using a 30-year time horizon and a 0% discount rate, EPA concluded that multiple biofuels would meet this standard. EPA's analysis accepted both ethanol produced from corn starch and biobutanol from corn starch as "renewable fuels". Ethanol produced from sugarcane became an "advanced fuel". Diesel produced from algal oils, biodiesel from soy oil, and diesel from waste oils, fats, and greases all fell in the "biomass-based diesel" category. Cellulosic ethanol and cellulosic diesel met the "cellulosic biofuel" standard. The table summarizes the mean GHG emissions estimated by EPA modelling and the range of variations, considering that the main source of uncertainty in the life cycle analysis is the GHG emissions related to international land use change. Reactions UNICA welcomed the ruling, in particular for the more precise lifecycle emissions estimate, and hoped that the advanced biofuel classification would help eliminate the tariff. The U.S. 
Renewable Fuels Association (RFA) also welcomed the ruling, as ethanol producers "require stable federal policy that provides them the market assurances they need to commercialize new technologies", while restating its objection to ILUC. RFA also complained that corn-based ethanol scored only a 21% reduction, noting that without ILUC, corn ethanol achieves a 52% GHG reduction. RFA also objected that Brazilian sugarcane ethanol "benefited disproportionally" because EPA's revisions lowered the initially equal ILUC estimates by half for corn and by 93% for sugarcane. Several Midwestern lawmakers commented that they continued to oppose EPA's consideration of the "dicey science" of indirect land use that "punishes domestic fuels". House Agriculture Chairman Collin Peterson said, "... to think that we can credibly measure the impact of international indirect land use is completely unrealistic, and I will continue to push for legislation that prevents unreliable methods and unfair standards from burdening the biofuels industry." EPA Administrator Lisa P. Jackson commented that the agency "did not back down from considering land use in its final rules, but the agency took new information into account that led to a more favorable calculation for ethanol". She cited new science and better data on crop yield and productivity, more information on co-products that could be produced from advanced biofuels, and expanded land-use data for 160 countries, instead of the 40 considered in the proposed rule. Europe As of 2010, European Union and United Kingdom regulators had recognized the need to take ILUC into account, but had not determined the most appropriate methodology. UK Renewable Transport Fuel Obligation The UK Renewable Transport Fuel Obligation (RTFO) program required fuel suppliers to report direct impacts, and asked the Renewable Fuels Agency (RFA) to report potential indirect impacts, including ILUC and commodity price changes. The RFA's July 2008 "Gallagher Review" mentioned several risks regarding biofuels and required feedstock production to avoid agricultural land that would otherwise be used for food production, despite concluding that "quantification of GHG emissions from indirect land-use change requires subjective assumptions and contains considerable uncertainty". Some environmental groups argued that emissions from ILUC were not being taken into account and that biofuel use could therefore be creating more emissions overall. European Union On December 17, 2008, the European Parliament approved the Renewable Energy Sources Directive (COM(2008)19) and amendments to the Fuel Quality Directive (Directive 2009/30), which included sustainability criteria for biofuels and mandated consideration of ILUC. The Directive established a 10% biofuel target. A separate Fuel Quality Directive set the EU's Low Carbon Fuel Standard, requiring a 6% reduction in GHG intensity of EU transport fuels by 2020. The legislation ordered the European Commission to develop a methodology to factor in GHG emissions from ILUC by December 31, 2010, based on the best available scientific evidence. In the meantime, the European Parliament defined lands ineligible for producing biofuel feedstocks for the Directives. This category included wetlands and continuously forested areas with canopy cover of more than 30 percent, as well as areas with cover between 10 and 30 percent unless evidence was provided that their existing carbon stock was low enough to justify conversion. 
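A minimal sketch of the land-eligibility test just described, assuming a simplified reading of the criteria above (the function and argument names are invented for illustration and are not taken from the Directive or any official tool):

def feedstock_land_eligible(is_wetland, canopy_cover_percent, carbon_stock_evidence=False):
    # Wetlands are ineligible for biofuel feedstock production.
    if is_wetland:
        return False
    # Continuously forested areas with canopy cover above 30 percent are ineligible.
    if canopy_cover_percent > 30:
        return False
    # Cover between 10 and 30 percent is ineligible unless evidence shows the
    # existing carbon stock is low enough to justify conversion.
    if 10 <= canopy_cover_percent <= 30 and not carbon_stock_evidence:
        return False
    return True

print(feedstock_land_eligible(is_wetland=False, canopy_cover_percent=5))  # True under this simplified test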
The Commission subsequently published terms of reference for three ILUC modeling exercises: one using a General Equilibrium model, one using a Partial Equilibrium model, and one comparing other global modeling exercises. It also consulted on a limited range of high-level options for addressing ILUC, to which 17 countries and 59 organizations responded. The United Nations Special Rapporteur on the Right to Food and several environmental organizations complained that the 2008 safeguards were inadequate. UNICA called for regulators to establish an empirical and "globally accepted methodology" to consider ILUC, with the participation of researchers and scientists from biofuel crop-producing countries. In 2010 some NGOs accused the European Commission of lacking transparency given its reluctance to release documents relating to the ILUC work. In March 2010 the Partial and General Equilibrium Modelling results were made available, with the disclaimer that the EC had not adopted the views contained in the materials. These indicated that a 1.25% increase in EU biofuel consumption would require around of land globally. The scenarios varied from 5.6 to 8.6% of road transport fuels. The study found that ILUC effects offset part of the emission benefits and that above the 5.6% threshold, ILUC emissions increase rapidly. For the expected scenario of 5.6% by 2020, the study estimated that biodiesel production increases would be primarily domestic, while bioethanol production would take place mainly in Brazil, regardless of EU duties. The analysis concluded that eliminating trade barriers would further reduce emissions, because the EU would import more from Brazil. Under this scenario, "direct emission savings from biofuels are estimated at 18 Mt, additional emissions from ILUC at 5.3 Mt (mostly in Brazil), resulting in a global net balance of nearly 13 Mt savings in a 20 years horizon". The study also found that ILUC emissions were much greater for biodiesel from vegetable oil, and estimated that in 2020, even at the 5.6% level, these emissions would amount to over half the greenhouse gas emissions of fossil diesel. As part of the announcement, the Commission said it would publish a report on ILUC by the end of 2010. Certification system On June 10, 2010, the EC announced its decision to set up certification schemes for biofuels, including imports, as part of the Renewable Energy Directive. The Commission encouraged E.U. nations, industry, and NGOs to set up voluntary certification schemes. EC figures for 2007 showed that 26% of biodiesel and 31% of bioethanol used in the E.U. was imported, mainly from Brazil and the United States. Reactions UNICA welcomed the EU efforts to "engage independent experts in its assessments" but requested improvements because "... the report currently contains a certain number of inaccuracies, so once these are corrected, we anticipate even higher benefits resulting from the use of Brazilian sugarcane ethanol." UNICA highlighted the fact that the report assumed land expansion that "does not take into consideration the agro-ecological zoning for sugarcane in Brazil, which prevents cane from expanding into any type of native vegetation." Critics said the 10% figure was reduced to 5.6% of transport fuels partly by exaggerating the contribution of electric vehicles (EVs) in 2020, as the study assumed EVs would represent 20% of new car sales, two to six times the car industry's own estimates. 
They also claimed the study "exaggerates to around 45 percent the contribution of bioethanol—the greenest of all biofuels—and consequently downplays the worst impacts of biodiesel." Environmental groups found that the measures "are too weak to halt a dramatic increase in deforestation". According to Greenpeace, "indirect land-use change impacts of biofuel production still are not properly addressed", which for them was the most dangerous problem of biofuels. Industry representatives welcomed the certification system, but some dismissed concerns regarding the lack of land use criteria. UNICA and other industry groups wanted the gaps in the rules filled to provide a clear operating framework. The negotiations between the European Parliament and the Council of European Ministers continue. A deal is not foreseen before 2014. See also References External links Amendments to the European Renewable Energy Sources Directive (approved December 17, 2008) CARB: Detailed California-modified GREET pathway for U.S. corn ethanol (February 27, 2009, version 2.1) CARB: Detailed California-modified GREET pathway for Brazilian sugarcane ethanol (February 27, 2009, version 2.1) CARB: Proposed Regulation to Implement the Low Carbon Fuel Standard (approved April 23, 2009) Biofuels technology Energy policy Environmental impact of the energy industry Sustainable transport Sustainable energy
Indirect land use change impacts of biofuels
[ "Physics", "Biology", "Environmental_science" ]
5,546
[ "Biofuels technology", "Energy policy", "Physical systems", "Transport", "Sustainable transport", "Environmental social science" ]
23,139,945
https://en.wikipedia.org/wiki/Hot%20spot%20effect%20in%20subatomic%20physics
Hot spots in subatomic physics are regions of high energy density or temperature in hadronic or nuclear matter. Finite size effects Hot spots are a manifestation of the finite size of the system: in subatomic physics this refers both to atomic nuclei, which consist of nucleons, and to nucleons themselves, which are made of quarks and gluons. Other manifestations of finite sizes of these systems are seen in scattering of electrons on nuclei and nucleons. For nuclei in particular, finite size effects manifest themselves also in the isomeric shift and isotopic shift. Statistical methods in subatomic physics The formation of hot spots assumes the establishment of local equilibrium, which in its turn occurs if the thermal conductivity in the medium is sufficiently small. The notions of equilibrium and heat are statistical. The use of statistical methods assumes a large number of degrees of freedom. In macroscopic physics this number usually refers to the number of atoms or molecules, while in nuclear and particle physics it refers to the energy level density. Hot spots in nucleons Local equilibrium is the precursor of global equilibrium, and the hot spot effect can be used to determine how fast, if at all, the transition from local to global equilibrium takes place. That this transition does not always happen follows from the fact that the duration of a strong interaction reaction is quite short (of the order of 10⁻²²–10⁻²³ seconds) and the propagation of "heat", i.e. of the excitation, through the finite sized body of the system takes a finite time, which is determined by the thermal conductivity of the matter the system is made of. Indications of the transition between local and global equilibrium in strong interaction particle physics started to emerge in the 1960s and early 1970s. In high-energy strong interactions equilibrium is usually not complete. In these reactions, with the increase of laboratory energy one observes that the transverse momenta of produced particles have a tail, which deviates from the single exponential Boltzmann spectrum characteristic of global equilibrium. The slope, or the effective temperature, of this transverse momentum tail increases with increasing energy. These large transverse momenta were interpreted as being due to particles which "leak" out before equilibrium is reached. Similar observations had been made in nuclear reactions and were also attributed to pre-equilibrium effects. This interpretation suggested that the equilibrium is neither instantaneous nor global, but rather local in space and time. By predicting a specific asymmetry in peripheral high-energy hadron reactions based on the hot spot effect, Richard M. Weiner proposed a direct test of this hypothesis as well as of the assumption that the heat conductivity in hadronic matter is relatively small. The theoretical analysis of the hot spot effect in terms of propagation of heat was performed in Ref. In high-energy hadron reactions one distinguishes peripheral reactions with low multiplicity and central collisions with high multiplicity. Peripheral reactions are also characterized by the existence of a leading particle which retains a large proportion of the incoming energy. 
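As a purely schematic numerical illustration of the spectral behaviour described above (the temperatures and relative weight below are invented and are not fitted to any data or to the models cited here), a transverse-momentum spectrum built from an equilibrium component plus a small, hotter pre-equilibrium component is roughly exponential at low transverse momentum and develops a flatter tail whose local inverse slope approaches the higher effective temperature:

import numpy as np

T_eq, T_hot = 0.15, 0.30   # GeV, illustrative effective temperatures
weight_hot = 0.05          # small admixture of the hotter component

pT = np.linspace(0.1, 3.0, 300)                      # GeV/c
spectrum = np.exp(-pT / T_eq) + weight_hot * np.exp(-pT / T_hot)

# Local inverse slope ("effective temperature") of the spectrum
T_eff = -1.0 / np.gradient(np.log(spectrum), pT)
print(T_eff[0], T_eff[-1])   # low-pT inverse slope is close to T_eq; the tail approaches T_hot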
By taking the notion of peripheral literally, Ref.2 suggested that in this kind of reaction the surface of the colliding hadrons is locally excited, giving rise to a hot spot, which is de-excited by two processes: 1) emission of particles into the vacuum, and 2) propagation of "heat" into the body of the target (or projectile), from which it is eventually also emitted through particle production. Particles produced in process 1) will have higher energies than those due to process 2), because in the latter process the excitation energy is in part degraded. This gives rise to an asymmetry with respect to the leading particle, which should be detectable in an experimental event-by-event analysis. This effect was confirmed by Jacques Goldberg in K−p → K−p π+ π− reactions at 14 GeV/c. This experiment represents the first observation of local equilibrium in hadronic interactions, allowing in principle a quantitative determination of heat conductivity in hadronic matter along the lines of Ref.3. This observation came as a surprise because, although the electron–proton scattering experiments had shown beyond any doubt that the nucleon had a finite size, it was a priori not clear whether this size was sufficiently big for the hot spot effect to be observable, i.e. whether heat conductivity in hadronic matter was sufficiently small. Experiment4 suggests that this is the case. Hot spots in nuclei In atomic nuclei, because of their larger dimensions as compared with nucleons, statistical and thermodynamical concepts were used already in the 1930s. Hans Bethe had suggested that propagation of heat in nuclear matter could be studied in central collisions, and Sin-Itiro Tomonaga had calculated the corresponding heat conductivity. The interest in this phenomenon was resurrected in the 1970s by the work of Weiner and Weström, who established the link between the hot spot model and the pre-equilibrium approach used in low-energy heavy-ion reactions. Experimentally, the hot spot model in nuclear reactions was confirmed in a series of investigations, some of rather sophisticated nature, including polarization measurements of protons and gamma rays. Subsequently, on the theoretical side, the link between hot spots and limiting fragmentation and transparency in high-energy heavy ion reactions was analyzed, and "drifting hot spots" for central collisions were studied. With the advent of heavy ion accelerators, experimental studies of hot spots in nuclear matter became a subject of current interest, and a series of special meetings was dedicated to the topic of local equilibrium in strong interactions. The phenomena of hot spots, heat conduction and preequilibrium also play an important part in high-energy heavy ion reactions and in the search for the phase transition to quark matter. Hot spots and solitons Solitary waves (solitons) are a possible physical mechanism for the creation of hot spots in nuclear interactions. Solitons are a solution of the hydrodynamic equations characterized by a stable localized high density region and small spatial volume. They were predicted to appear in low-energy heavy ion collisions at velocities of the projectile slightly exceeding the velocity of sound (E/A ~ 10–20 MeV; here E is the incoming energy and A the atomic number). Possible evidence for this phenomenon is provided by the experimental observation that the linear momentum transfer in 12C-induced heavy-ion reactions is limited. References Particle physics
Hot spot effect in subatomic physics
[ "Physics" ]
1,298
[ "Particle physics" ]
37,302,995
https://en.wikipedia.org/wiki/Lehrbuch%20der%20Topologie
In mathematics, Lehrbuch der Topologie (German for "textbook of topology") is a book by Herbert Seifert and William Threlfall, first published in 1934 and published in an English translation in 1980. It was one of the earliest textbooks on algebraic topology, and was the standard reference on this topic for many years. Albert W. Tucker wrote a review. Notes References Reprinted by Chelsea Publishing Company 1947 and AMS 2004. History of mathematics Mathematics textbooks Algebraic topology 1934 non-fiction books German books
Lehrbuch der Topologie
[ "Mathematics" ]
106
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
37,305,544
https://en.wikipedia.org/wiki/C34H46O18
The molecular formula C34H46O18 (molar mass: 742.72 g/mol, exact mass: 742.2684 u) may refer to: Eleutheroside D Liriodendrin Molecular formulas
C34H46O18
[ "Physics", "Chemistry" ]
66
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
37,307,181
https://en.wikipedia.org/wiki/Brookhart%27s%20acid
Brookhart's acid is the salt of the diethyl ether oxonium ion and tetrakis[3,5-bis(trifluoromethyl)phenyl]borate (BAr′4). It is a colorless solid, used as a strong acid. The compound was first reported by Volpe, Grant, and Brookhart in 1992. Preparation This compound is prepared by treatment of NaBAr′4 in diethyl ether (Et2O) with hydrogen chloride: NaBAr′4 + HCl + 2 Et2O → [H(OEt2)2][BAr′4] + NaCl NaBAr′4 is soluble in diethyl ether, whereas sodium chloride is not. Precipitation of sodium chloride thus drives the formation of the oxonium acid compound, which is isolable as a solid. Structure and properties The acid crystallizes as a white, hygroscopic crystalline solid. NMR and elemental analysis showed that the crystal contains two equivalents of diethyl ether. In solution, the compound slowly degrades to m-C6H3(CF3)2 and BAr′3. [H(OEt2)2][B(C6F5)4] is a related compound with a slightly different weakly coordinating anion; it was first reported in 2000. An X-ray crystal structure of that compound was obtained, showing the acidic proton coordinated by both ethereal oxygen centers, although the crystal was not good enough to determine whether the proton is located symmetrically or unsymmetrically between the two. Uses Traditional weakly coordinating anions, such as perchlorate, tetrafluoroborate, and hexafluorophosphate, will nonetheless coordinate to very electrophilic cations, making these counterions unsuitable for some complexes. The highly reactive species [Cp2Zr(CH3)]+, for example, has been reported to abstract F− from PF6−. Starting in the 1980s, new types of weakly coordinating anions began to be developed. BAr′4 anions are used as counterions for highly electrophilic, cationic transition metal species, as they are very weakly coordinating and unreactive towards electrophilic attack. One common method of generating these cationic species is via protonolysis of a dialkyl complex or an olefin complex. For example, an electrophilic palladium catalyst, [(2,2′-bipyridine)Pd(CH3)(CH3CN)][BAr′4], is prepared by protonating the dimethyl complex with Brookhart's acid. This electrophilic, cationic palladium species is used for the polymerization of olefins with carbon monoxide to polyketones in aprotic solvents. Potential application Polyketones, thermoplastic polymers, are formed by the copolymerisation of carbon monoxide and one or more alkenes (typically ethylene with propylene). The process utilises a palladium(II) catalyst with a bidentate ligand like 2,2′-bipyridine or 1,10-phenanthroline (phen) with a non-coordinating BARF counterion, such as [(phen)Pd(CH3)(CO)]BArF4. The preparation of the catalyst involves the reaction of a dimethyl palladium complex with Brookhart's acid in acetonitrile with loss of methane, and the catalytic species is formed by uptake of carbon monoxide to displace acetonitrile. [(Et2O)2H]BArF4   +   [(phen)Pd(CH3)2]   +   MeCN   →   [(phen)Pd(CH3)(MeCN)]BArF4   +   2 Et2O   +   CH4 [(phen)Pd(CH3)(MeCN)]BArF4   +   CO   → [(phen)Pd(CH3)(CO)]BArF4   +   MeCN The mechanism involves migratory insertion whereby the polymer chain is bound to the catalytic centre and grows by the sequential insertion of carbon monoxide and the alkene between the palladium atom and the existing chain. Defects occur when insertions do not alternate – that is, a carbon monoxide insertion follows a carbon monoxide insertion or an alkene insertion follows an alkene insertion – these are highlighted in red in the figure below. 
This catalyst produces a very low rate of defects due to the difference in Gibbs energy of activation of each insertion – the energy barrier to inserting an alkene immediately following an alkene insertion is ~12 kJ mol−1 higher than the barrier to carbon monoxide insertion. Use of monodentate phosphine ligands also leads to undesirable side-products, but bidentate phosphine ligands like 1,3-bis(diphenylphosphino)propane have been used industrially. References Acids Non-coordinating anions Trifluoromethyl compounds Oxonium compounds
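A back-of-the-envelope check of what the quoted ~12 kJ mol−1 barrier difference implies for selectivity, assuming simple transition-state-theory scaling at an assumed temperature of 298 K (the temperature is chosen only for illustration):

import math

R = 8.314                 # J mol^-1 K^-1
T = 298.0                 # K, assumed for illustration
delta_delta_G = 12_000.0  # J mol^-1, barrier difference quoted above

# Relative rate of the disfavoured alkene-after-alkene insertion versus CO insertion
ratio = math.exp(-delta_delta_G / (R * T))
print(f"the defect-forming insertion is roughly {1 / ratio:.0f} times slower")  # on the order of 10^2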
Brookhart's acid
[ "Chemistry" ]
1,070
[ "Coordination chemistry", "Acids", "Non-coordinating anions" ]
37,308,162
https://en.wikipedia.org/wiki/Station%20P%20%28ocean%20measurement%20site%29
Station P is an ocean measurement site, located at 50 degrees north latitude, 145 degrees west longitude (water depth, 4220 meters). The site was established by the US Navy in 1943. In 1951, US funding to maintain continual presence ran out and observational responsibility was passed to Canada. The site was staffed continuously until 1981. Starting in 2007, automated observations have been made by the National Oceanic and Atmospheric Administration. References Oceanography Meteorological data and networks
Station P (ocean measurement site)
[ "Physics", "Environmental_science" ]
93
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
38,743,125
https://en.wikipedia.org/wiki/Michigan%20Spin%20Physics%20Center
The University of Michigan Spin Physics Center focuses on studies of spin effects in high-energy polarized proton-proton elastic and inelastic scattering. These polarized scattering experiments use the world-class solid and jet polarized proton targets, which are developed, upgraded and tested at the center. The Center obtained a record density of about 10¹² spin-polarized hydrogen atoms per cm³. The center also led the development of the world's first accelerated polarized beams at the 12 GeV Argonne ZGS (in 1973) and then at the 28 GeV Brookhaven AGS. The Center led pioneering experiments at the IUCF Cooler Ring from 1988 until its 2003 shutdown, which developed and tested Siberian snakes and spin flippers, which are now used to accelerate, store and use high energy polarized proton beams. The center also leads the International SPIN Collaboration, and its proton polarization know-how is used in many experiments worldwide. Main discoveries In 1978 the Center found that protons with parallel spins interact much more strongly than protons with anti-parallel spins. According to quantum chromodynamics, the interaction between parallel and anti-parallel spinning proton beams should be the same. Sheldon Glashow called this effect "the thorn in the side of QCD". This effect remains unexplained to this day. In 2005 Stanley Brodsky called it "one of the unsolved mysteries in hadronic physics". References External links Michigan Spin Physics Center Quantum chromodynamics Particle experiments Unsolved problems in physics University of Michigan
Michigan Spin Physics Center
[ "Physics" ]
317
[ "Unsolved problems in physics" ]
38,749,092
https://en.wikipedia.org/wiki/Graphene%20antenna
A graphene antenna is a high-frequency antenna based on graphene, a one atom thick two dimensional carbon crystal, designed to enhance radio communications. The unique structure of graphene would enable these enhancements. Ultimately, the choice of graphene as the basis of this nano antenna was due to the behavior of its electrons. Antenna It would be unfeasible to simply reduce traditional metallic antennas to nano sizes, because they would require tremendously high frequencies to operate. Consequently, it would require a lot of power to operate them. Furthermore, electrons in these traditional metals are not very mobile at nano sizes and the necessary electromagnetic waves would not form. However, these limitations would not be an issue with graphene's unique capabilities. A flake of graphene has the potential to hold a series of metal electrodes. Consequently, it would be possible to develop an antenna from this material. Electron behavior Graphene has a unique structure, wherein electrons are able to move with minimal resistance. This enables electricity to move at a much faster speed than in metal, which is used for current antennas. Furthermore, as the electrons oscillate, they create an electromagnetic wave atop the graphene layer, referred to as the surface plasmon polariton wave. This would enable the antenna to operate at the lower end of the terahertz frequency range, which would be more efficient than current copper-based antennas. Ultimately, researchers envision that graphene will be able to break through the limitations of current antennas. Properties It has been estimated that speeds of up to terabits per second can be achieved using such a device. Traditional antennas would require very high frequencies to operate at nano scales, making them an unfeasible option. However, the unique slower movement of electrons in graphene would enable it to operate at lower frequencies, making it a feasible option for a nano sized antenna. Projects Oak Ridge National Laboratory Researchers from the Department of Energy's Oak Ridge National Laboratory (ORNL) have discovered a unique way to create an atomic antenna. Two sheets of graphene can be connected by a silicon wire that is approximately 0.1 nanometer in diameter. This is approximately 100 times smaller than current metal-based wires, which can only be reduced to 50 nanometers. This silicon wire, however, is a plasmonic device, which would enable the formation of the surface plasmon polariton waves required to operate this nano antenna. Samsung Samsung has provided $120,000 in funding for research into the graphene antenna to a team of researchers from the Georgia Institute of Technology and the Polytechnic University of Catalonia. Their research has shown that graphene is a feasible material for making nano antennas. They have simulated how the electrons would behave, and have confirmed that surface plasmon polariton waves should form. This wave is essential for the graphene antenna to operate at the low end of the terahertz range, making it more efficient than traditional antenna designs. Researchers are currently working on implementing their research, and finding a way to propagate the electromagnetic waves necessary to operate the antenna. Their findings were published in the IEEE Journal on Selected Areas in Communications. University of Manchester A collaboration between the University of Manchester and an industrial partner developed a new way to manufacture graphene antennas for radio-frequency identification. 
The antennas are paper-based, flexible and environmentally friendly. Their findings were published in Applied Physics Letters and are being commercialised by Graphene Security. See also Ian F. Akyildiz Metal-insulator-graphene (MIG) Nanoelectronics Nanowire Optical rectenna References External links Graphene Antennas Nanoelectronics
Graphene antenna
[ "Materials_science", "Engineering" ]
739
[ "Nanotechnology", "Antennas", "Telecommunications engineering", "Nanoelectronics" ]
29,214,482
https://en.wikipedia.org/wiki/Colorimetric%20analysis
Colorimetric analysis is a method of determining the concentration of a chemical element or chemical compound in a solution with the aid of a color reagent. It is applicable to both organic compounds and inorganic compounds and may be used with or without an enzymatic stage. The method is widely used in medical laboratories and for industrial purposes, e.g. the analysis of water samples in connection with industrial water treatment. Equipment The equipment required is a colorimeter, some cuvettes and a suitable color reagent. The process may be automated, e.g. by the use of an AutoAnalyzer or by flow injection analysis. Recently, colorimetric analyses developed for colorimeters have been adapted for use with plate readers to speed up analysis and reduce the waste stream. Non-enzymatic methods Examples Calcium Calcium + o-cresolphthalein complexone → colored complex Copper Copper + bathocuproin disulfonate → colored complex Creatinine Creatinine + picrate → colored complex Iron Iron + bathophenanthroline disulfonate → colored complex Phosphate (inorganic) Phosphate + ammonium molybdate + ascorbic acid → blue colored complex Enzymatic methods In enzymatic analysis (which is widely used in medical laboratories) the color reaction is preceded by a reaction catalyzed by an enzyme. As the enzyme is specific to a particular substrate, more accurate results can be obtained. Enzymatic analysis is always carried out in a buffer solution at a specified temperature (usually 37°C) to provide the optimum conditions for the enzymes to act. Examples follow. Examples Cholesterol (CHOD-PAP method) Cholesterol + oxygen --(enzyme cholesterol oxidase)--> cholestenone + hydrogen peroxide Hydrogen peroxide + 4-aminophenazone + phenol --(enzyme peroxidase)--> colored complex + water Glucose (GOD-Perid method) Glucose + oxygen + water --(enzyme glucose oxidase)--> gluconate + hydrogen peroxide Hydrogen peroxide + ABTS --(enzyme peroxidase)--> colored complex In this case, both stages of the reaction are catalyzed by enzymes. Triglycerides (GPO-PAP method) Triglycerides + water --(enzyme esterase)--> glycerol + carboxylic acid Glycerol + ATP --(enzyme glycerol kinase)--> glycerol-3-phosphate + ADP Glycerol-3-phosphate + oxygen --(enzyme glycerol-3-phosphate oxidase) --> dihydroxyacetone phosphate + hydrogen peroxide Hydrogen peroxide + 4-aminophenazone + 4-chlorophenol --(enzyme peroxidase)--> colored complex Urea Urea + water --(enzyme urease)--> ammonium carbonate Ammonium carbonate + phenol + hypochlorite ----> colored complex In this case, only the first stage of the reaction is catalyzed by an enzyme. The second stage is non-enzymatic. Abbreviations CHOD = cholesterol oxidase GOD = glucose oxidase GPO = glycerol-3-phosphate oxidase PAP = phenol + aminophenazone (in some methods the phenol is replaced by 4-chlorophenol, which is less toxic) POD = peroxidase Ultraviolet methods In ultraviolet (UV) methods there is no visible color change but the principle is exactly the same, i.e. the measurement of a change in the absorbance of the solution. UV methods usually measure the difference in absorbance at 340 nm wavelength between nicotinamide adenine dinucleotide (NAD) and its reduced form (NADH). Examples Pyruvate Pyruvate + NADH --(enzyme lactate dehydrogenase)--> L-lactate + NAD See also Blood sugar MBAS assay, an assay that indicates anionic surfactants in water with a bluing reaction. Nessler cylinder References Analytical chemistry Chemical reactions Absorption spectroscopy
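In routine practice, the colorimeter's absorbance readings are converted to concentrations by way of a calibration curve prepared from standards of known concentration. The following sketch assumes a linear Beer–Lambert response over the working range; the numbers are invented for illustration and do not correspond to any particular assay above:

import numpy as np

# Absorbance of calibration standards of known concentration (e.g. mmol/L)
standard_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
standard_abs = np.array([0.02, 0.11, 0.20, 0.39, 0.78])

# Fit A = slope * c + intercept (linear response assumed)
slope, intercept = np.polyfit(standard_conc, standard_abs, 1)

# Concentration of an unknown sample from its measured absorbance
sample_abs = 0.30
sample_conc = (sample_abs - intercept) / slope
print(f"estimated concentration: {sample_conc:.2f} mmol/L")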
Colorimetric analysis
[ "Physics", "Chemistry" ]
898
[ "nan", "Spectroscopy", "Spectrum (physical sciences)", "Absorption spectroscopy" ]
29,217,860
https://en.wikipedia.org/wiki/Operational%20Street%20Pollution%20Model
The Operational Street Pollution Model (OSPM) is an atmospheric dispersion model for simulating the dispersion of air pollutants in so-called street canyons. It was developed by the National Environmental Research Institute of Denmark, Department of Atmospheric Environment, Aarhus University. As a result of reorganisation at Aarhus University, the model has been maintained by the Department of Environmental Science at Aarhus University since 2011. For about 20 years, OSPM has been used in many countries for studying traffic pollution, performing analyses of field campaign measurements, studying the efficiency of pollution abatement strategies, carrying out exposure assessments, and as a reference in comparisons with other models. OSPM is generally considered state-of-the-art in practical street pollution modelling. Description In OSPM, concentrations of traffic-emitted pollutants are calculated using a combination of a plume model for the direct contribution and a box model for the recirculating part of the pollutants in the street. The NO2 concentrations are calculated taking into account NO-NO2-O3 chemistry and the residence time of pollutants in the street. The model is designed to work with input and output in the form of one-hour averages. The main principles in the model are depicted in Figure 1 for the case of a wind direction nearly perpendicular to the street canyon. A receptor point in a leeward position is affected by the direct plume and shows considerably higher concentrations than a receptor in a windward position, which is exposed only to the less concentrated recirculating air. The turbulence produced by the moving traffic (TPT) acts in addition to the turbulence created by the roof-level wind. This leads to a faster dispersion of the direct plume but also to an improved air exchange at roof level between the street canyon and the background air. See also List of atmospheric dispersion models Further reading For those who are unfamiliar with air pollution dispersion modelling and would like to learn more about the subject, it is suggested that either one of the following books be read: www.crcpress.com www.air-dispersion.com References External links OSPM home page Atmospheric dispersion modeling
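A deliberately simplified sketch of the two-contribution idea described above; this is not the OSPM parameterisation, and the dilution expressions, argument names and coefficients are placeholders chosen only to show how a direct-plume term and a recirculation term might be combined with the urban background concentration:

def street_concentration(emission_rate, wind_speed, street_width, street_height,
                         background, direct_fraction=0.7):
    # Placeholder dilution terms; OSPM itself uses plume and box formulations that
    # depend on street geometry, wind direction and traffic-produced turbulence.
    direct = direct_fraction * emission_rate / (wind_speed * street_width)
    recirculating = (1.0 - direct_fraction) * emission_rate / (wind_speed * street_width * street_height)
    return background + direct + recirculating

# Example with arbitrary units
print(street_concentration(emission_rate=10.0, wind_speed=2.0,
                           street_width=20.0, street_height=15.0, background=5.0))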
Operational Street Pollution Model
[ "Chemistry", "Engineering", "Environmental_science" ]
431
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
29,218,464
https://en.wikipedia.org/wiki/Frobenius%20manifold
In the mathematical field of differential geometry, a Frobenius manifold, introduced by Dubrovin, is a flat Riemannian manifold with a certain compatible multiplicative structure on the tangent space. The concept generalizes the notion of Frobenius algebra to tangent bundles. Frobenius manifolds occur naturally in the subject of symplectic topology, more specifically quantum cohomology. The broadest definition is in the category of Riemannian supermanifolds. We will limit the discussion here to smooth (real) manifolds. A restriction to complex manifolds is also possible. Definition Let M be a smooth manifold. An affine flat structure on M is a sheaf Tf of vector spaces that pointwise span the tangent bundle TM and whose sections have pairwise vanishing Lie bracket. As a local example consider the coordinate vector fields over a chart of M. A manifold admits an affine flat structure if one can glue together such vector fields for a covering family of charts. Let further a Riemannian metric g on M be given. It is compatible with the flat structure if g(X, Y) is locally constant for all flat vector fields X and Y. A Riemannian manifold admits a compatible affine flat structure if and only if its curvature tensor vanishes everywhere. A family of commutative products * on TM is equivalent to a section A of S2(T*M) ⊗ TM via X * Y := A(X, Y). We require in addition the property g(X * Y, Z) = g(X, Y * Z). Therefore, the composition g#∘A is a symmetric 3-tensor. This implies in particular that a linear Frobenius manifold (M, g, *) with constant product is a Frobenius algebra M. Given (g, Tf, A), a local potential Φ is a local smooth function such that g(X * Y, Z) = X(Y(Z(Φ))) for all flat vector fields X, Y, and Z. A Frobenius manifold (M, g, *) is now a flat Riemannian manifold (M, g) with a symmetric 3-tensor A that admits a local potential everywhere and is associative. Elementary properties The associativity of the product * is equivalent to the following quadratic PDE in the local potential Φ: Φ,abe gef Φ,fcd = Φ,bce gef Φ,fad, where Einstein's sum convention is implied, Φ,a denotes the partial derivative of the function Φ by the coordinate vector field ∂/∂xa (which are all assumed to be flat) and gef are the coefficients of the inverse of the metric. The equation is therefore called the associativity equation or Witten–Dijkgraaf–Verlinde–Verlinde (WDVV) equation. Examples Beside Frobenius algebras, examples arise from quantum cohomology. Namely, given a semipositive symplectic manifold (M, ω), there exists an open neighborhood U of 0 in its even quantum cohomology QHeven(M, ω) with Novikov ring over C such that the big quantum product *a for a in U is analytic. Now U together with the intersection form g = <·,·> is a (complex) Frobenius manifold. The second large class of examples of Frobenius manifolds comes from singularity theory. Namely, the space of miniversal deformations of an isolated singularity has a Frobenius manifold structure. This Frobenius manifold structure also relates to Kyoji Saito's primitive forms. References 2. Yu.I. Manin, S.A. Merkulov: Semisimple Frobenius (super)manifolds and quantum cohomology of Pr, Topol. Methods in Nonlinear Analysis 9 (1997), pp. 107–161 Symplectic topology Riemannian manifolds Integrable systems Algebraic geometry
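For reference, a short derivation sketch in index notation (standard material consistent with the conventions above, not specific to any one source) of how associativity of the product leads to the WDVV equation:

\[
X_a * X_b = c_{ab}{}^{e}\, X_e, \qquad c_{ab}{}^{e} = g^{ef}\,\Phi_{,fab},
\]
\[
(X_a * X_b) * X_c = X_a * (X_b * X_c)
\;\Longleftrightarrow\;
\Phi_{,abe}\, g^{ef}\, \Phi_{,fcd} = \Phi_{,bce}\, g^{ef}\, \Phi_{,fad}.
\]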
Frobenius manifold
[ "Physics", "Mathematics" ]
765
[ "Integrable systems", "Theoretical physics", "Space (mathematics)", "Metric spaces", "Riemannian manifolds", "Fields of abstract algebra", "Algebraic geometry" ]
29,226,569
https://en.wikipedia.org/wiki/Ensto
Ensto is an international technology company and a family business, which designs and provides electrical solutions for electricity distribution networks, buildings, marine and electric traffic. Ensto manufactures, for example, solutions for overhead line and underground cable networks, luminaires, electric vehicle charging systems, electric heaters, control systems and enclosing systems. Ensto was established by Ensio Miettinen in 1958. Since 2001, it has been owned by four of his descendants through the parent company EM Group. In 2018, the majority of the ownership was transferred to the third generation of the Miettinen family through the parent company Ensto Invest Oy. The headquarters of Ensto is in Porvoo, Finland, where the company was founded. References External links Manufacturing companies of Finland Electrical engineering companies Engineering companies of Finland Family-owned companies Porvoo
Ensto
[ "Engineering" ]
172
[ "Electrical engineering companies", "Electrical engineering organizations", "Engineering companies" ]
41,527,719
https://en.wikipedia.org/wiki/Stationary-wave%20integrated%20Fourier-transform%20spectrometry
Stationary-wave integrated Fourier-transform spectrometry (SWIFTS), or standing-wave integrated Fourier-transform spectrometry, is an analytical technique used for measuring the distribution of light across an optical spectrum. SWIFTS technology is based on a near-field Lippmann architecture. An optical signal is injected into a waveguide and ended by a mirror (true Lippmann configuration). The input signal interferes with the reflected signal, creating a standing, or stationary, wave. In a counter-propagative architecture, the two optical signals are injected at the opposite ends of the waveguide. The evanescent waves propagating within the waveguide are then sampled by optical probes. This results in an interferogram. A mathematical function known as a Lippmann transform, similar to a Fourier transform, is later used to give the spectrum of the light. History In 1891, at the Académie des Sciences in Paris, Gabriel Lippmann presented a colour photograph of the Sun's spectrum obtained with his new photographic plate. Later, in 1894, he published an article on how his plate was able to record colour information in the depth of photographic grainless gelatin and how the same plate after processing could restore the original colour image merely through light reflection. He was thus the inventor of true interferential colour photography. He received the Nobel Prize in Physics in 1908 for this breakthrough. Unfortunately, this principle was too complex to use. The method was abandoned a few years after its discovery. One aspect of the Lippmann concept that was ignored at that time relates to spectroscopic applications. Early in 1933, Herbert E. Ives proposed to use a photoelectric device to probe stationary waves to make spectrometric measurements. In 1995, P. Connes proposed to use the emerging new technology of detectors for three-dimensional Lippmann-based spectrometry. Following this, a first realization of a very compact spectrometer based on a microoptoelectromechanical system (MOEMS) was reported by Knipp et al. in 2005, but it had a very limited spectral resolution. In 2004, two French researchers, Etienne Le Coarer from Joseph Fourier University and Pierre Benech from INP Grenoble, coupled sensing elements to the evanescent part of standing waves within a single-mode waveguide. In 2007, those two researchers reported a near-field method to probe the interferogram within a waveguide. The first SWIFTS-based spectrometers appeared in 2011, based on a SWIFTS linear configuration. Technology principle The technology works by probing an optical standing wave, or the sum of the standing waves in the case of polychromatic light, created by the light to be analyzed. In a SWIFTS linear configuration (true Lippmann configuration), the stationary wave is created by a single-mode waveguide ended by a fixed mirror. The stationary wave is regularly sampled on one side of the waveguide using nano-scattering dots. These dots are located in the evanescent field. These nanodots are characterized by an optical index difference with the medium in which the evanescent field is located. The light is then scattered around an axis perpendicular to the waveguide. For each dot, this scattered light is detected by a pixel aligned with this axis. The intensity detected is therefore proportional to the intensity inside the waveguide at the exact location of the dot. This results in a linear image of the interferogram. No moving parts are used. 
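A numerical sketch of the sampling principle just described; all parameters (wavelength, effective index, dot pitch, number of dots) are invented for illustration and real devices differ. A monochromatic standing wave in the waveguide is sampled at regularly spaced nanodot positions, and the spatial frequency of the sampled fringes encodes the optical wavelength; the recovery step, discussed next, then amounts to a Fourier-type (Lippmann) transform of this sampled interferogram.

import numpy as np

wavelength = 1.55e-6   # m, assumed input wavelength
n_eff = 1.5            # assumed effective index of the guided mode
dot_pitch = 0.1e-6     # m, assumed spacing of the sampling nanodots
n_dots = 256

x = np.arange(n_dots) * dot_pitch                 # dot positions measured from the mirror
fringe_period = wavelength / (2 * n_eff)          # period of the standing-wave intensity pattern
interferogram = 1 + np.cos(2 * np.pi * x / fringe_period)

# Fourier-type recovery: the dominant spatial frequency returns the fringe period,
# and hence the wavelength, up to the resolution set by the sampled length
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
freqs = np.fft.rfftfreq(n_dots, d=dot_pitch)
recovered_period = 1.0 / freqs[np.argmax(spectrum)]
print(recovered_period, fringe_period)            # agree to within the sampling resolution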
A mathematical function known as a Lippmann transform, similar to a Fourier transform, is then applied to this linear image and gives the spectrum of the light. The interferogram is truncated: only the portion between the zero optical path difference, located at the mirror, and the farthest dots is sampled. Higher frequencies are rejected. This truncation of the interferogram determines the spectral resolution. The interferogram is undersampled. A consequence of this under-sampling is a limitation of the wavelength bandwidth to which the mathematical function is applied. SWIFTS technology displays Fellgett's advantage, which is derived from the fact that an interferometer measures wavelengths simultaneously with the same elements of the detector, whereas a dispersive spectrometer measures them successively. Fellgett's advantage also states that when collecting a spectrum whose measurement noise is dominated by detector noise, a multiplex spectrometer such as a Fourier-transform spectrometer will produce a relative improvement in the signal-to-noise ratio, with respect to an equivalent scanning monochromator, that is approximately equal to the square root of the number of sample points comprising the spectrum. The Connes advantage states that the wavenumber scale of an interferometer, derived from a helium–neon laser, is more accurate and boasts better long-term stability than the calibration of dispersive instruments. References Spectroscopy Fourier analysis
Stationary-wave integrated Fourier-transform spectrometry
[ "Physics", "Chemistry" ]
1,014
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
41,528,945
https://en.wikipedia.org/wiki/Metachirality
Metachirality is a stronger form of chirality. It applies to objects or systems that are chiral (not identical to their mirror image) and where, in addition, their mirror image has a symmetry group that differs from the symmetry group of the original object or system. Many familiar chiral objects, like the capital letter 'Z' embedded in the plane, are not metachiral. The symmetry group of the capital letter 'Z' embedded in the plane consists of the identity transformation and a rotation over 180˚ (a half turn). In this case, the mirror image has the same symmetry group. In particular, asymmetric objects (that only have the identity transformation as symmetry, like a human hand) are not metachiral, since the mirror image is also asymmetric. In general, two-dimensional objects and bounded three-dimensional objects are not metachiral. An example of a metachiral object is an infinite helical staircase. A helix in 3D has a handedness (either left or right, like screw thread), whereby it differs from its mirror image. An infinite helical staircase, however, does have symmetries: screw operations, that is, a combination of a translation and a rotation. The symmetry group of the mirror image of an infinite helical staircase also contains screw operations. But they are of the opposite handedness and, hence, the symmetry groups differ. Note, however, that these symmetry groups are isomorphic. Of the 219 space groups, 11 are metachiral. A nice example of a metachiral spatial structure is the K4 crystal, also known as Triamond, and featured in the Bamboozle mathematical artwork. See also Orientation (mathematics) Stereochemistry Right-hand rule Handedness Asymmetry References Chirality
Metachirality
[ "Physics", "Chemistry", "Biology" ]
370
[ "Pharmacology", "Origin of life", "Stereochemistry", "Chirality", "Stereochemistry stubs", "Asymmetry", "Biochemistry", "Symmetry", "Biological hypotheses" ]
2,118,467
https://en.wikipedia.org/wiki/BS%207671
British Standard BS 7671 "Requirements for Electrical Installations. IET Wiring Regulations", informally called in the UK electrical community "The Regs", is the national standard in the United Kingdom for electrical installation and the safety of electrical wiring systems. It did not become a recognized British Standard until after the publication of the 16th edition in 1992. The standard takes account of the technical substance of agreements reached in CENELEC. BS 7671 is also used as a national standard by Mauritius, St Lucia, Saint Vincent and the Grenadines, Sierra Leone, Singapore, Sri Lanka, Trinidad and Tobago, Uganda, Cyprus, and several other countries, which base their wiring regulations on it. The latest version is BS 7671:2018+A3:2024 (18th Edition, amendment 3) issued in 2024. Scope Locations The regulations in BS 7671 apply to the design, selection, erection and verification of electrical installations such as those of: residential premises commercial premises public premises industrial premises prefabricated buildings low voltage generating sets highway equipment and street furniture locations containing a bath or shower swimming pools and other basins rooms and cabins containing sauna heaters construction and demolition sites agricultural and horticultural premises conducting locations with restricted movement caravan / camping parks and similar locations marinas and similar locations medical locations exhibitions, shows and stands solar photovoltaic (PV) power supply systems outdoor lighting extra-low voltage lighting mobile or transportable units caravans and motor caravans electric vehicle charging operating and maintenance gangways temporary installations for structures, amusement devices and booths at fairgrounds, amusement parks and circuses including professional stage and broadcast applications floor and ceiling heating systems onshore units of electrical shore connections for inland navigation vessels. 'Premises' covers the land and all facilities including buildings belonging to it. Exclusions: systems for the distribution of electricity to the public other than prosumer's installations covered by Chapter 82 railway traction equipment, rolling stock and signalling equipment equipment of motor vehicles, except those to which the requirements of the Regulations concerning caravans or mobile units are applicable equipment on board ships covered by BS 8450, BS EN 60092-507, BS EN ISO 13297 or BS EN ISO 10133 equipment of mobile and fixed offshore installations equipment of aircraft those aspects of mines specifically covered by Statutory Regulations radio interference suppression equipment, except so far as it affects safety of the electrical installation lightning protection systems for buildings and structures covered by BS EN 62305 those aspects of lift installations covered by relevant parts of BS 5655 and BS EN 81 and those aspects of escalator or moving walk installations covered by relevant parts of BS 5656 and BS EN 115 electrical equipment of machines covered by BS EN 60204 electric fences covered by BS EN 60335-2-76 the DC side of cathodic protection systems complying with the relevant part(s) of BS EN ISO 12696, BS EN 12954, BS EN ISO 13174, BS EN 13636 and BS EN 14505. 
Supply characteristics BS 7671 only covers electrical systems with the following characteristics: having a nominal voltage up to but not exceeding 1000V AC or 1500V DC for AC having a supply frequency of 50, 60 or 400Hz, though the use of other frequencies for special purposes is not excluded. This includes low-voltage installations, as found in most domestic and commercial properties, and extra-low-voltage systems, but excludes high voltage, as found in generation, transmission and distribution networks. Compilation and publication The standard is maintained by the Joint IET/BSI Technical Committee JPEL/64, the UK National Committee for Wiring Regulations, and published jointly by the IET (formerly IEE) and BSI. Although the IET and BSI are non-governmental organisations and the Wiring Regulations are non-statutory, they are referenced in several UK statutory instruments, and in most cases, for practical purposes, have legal force as the appropriate method of electric wiring. The BSI (British Standards Institute) publishes numerous titles concerning acceptable standards of design/safety/quality across different fields. History of BS 7671 and predecessor standards The first edition was published in 1882 as the "Rules and Regulations for the Prevention of Fire Risks arising from Electric Lighting." The title became "General Rules recommended for Wiring for the Supply of Electrical Energy" with the third edition in 1897, "Wiring Rules" with the fifth edition of 1907, and settled at "Regulations for the Electrical Equipment of Buildings" with the eighth edition in 1924. Since the 15th edition (1981), these regulations have closely followed the corresponding international standard IEC 60364. In 1992, the IEE Wiring Regulations became British Standard BS 7671 so that the legal enforcement of their requirements was easier both with regard to the Electricity at Work regulations and from an international point of view. They are now treated similar to other British Standards. BS 7671 has converged towards (and is largely based on) the European Committee for Electrotechnical Standardization (CENELEC) harmonisation documents, and therefore is technically very similar to the current wiring regulations of other European countries. Timeline The historical timeline of publication can be found within documents published by the IET, such as within the PDF detailing amendment 3 to the 18th edition (), and is summarised below, along with some notable other events. Only major changes between editions/amendments are noted. See also British Standards Electrical code Electrical wiring Electrical wiring (UK) IEC 60364 Earthing system References External links Wiring Regulations 07671 Electrical safety in the United Kingdom Electrical standards Electrical wiring
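As a trivial illustration of the supply limits quoted above, the sketch below checks whether a supply's nominal voltage and frequency fall within the ranges the standard covers; the function and argument names are made up, this is not an official scoping tool, and it ignores the listed exclusions and the note that other frequencies are not excluded for special purposes:

def within_bs7671_supply_limits(nominal_voltage, is_ac, frequency_hz=None):
    # Voltage limits described above: up to 1000 V AC or 1500 V DC
    if is_ac:
        if nominal_voltage > 1000:
            return False
        # Supply frequencies named above; deliberately loose when unspecified
        return frequency_hz is None or frequency_hz in (50, 60, 400)
    return nominal_voltage <= 1500

print(within_bs7671_supply_limits(230, is_ac=True, frequency_hz=50))    # True
print(within_bs7671_supply_limits(11000, is_ac=True, frequency_hz=50))  # False (high voltage)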
BS 7671
[ "Physics", "Engineering" ]
1,134
[ "Electrical standards", "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
2,119,193
https://en.wikipedia.org/wiki/Topological%20manifold
In topology, a topological manifold is a topological space that locally resembles real n-dimensional Euclidean space. Topological manifolds are an important class of topological spaces, with applications throughout mathematics. All manifolds are topological manifolds by definition. Other types of manifolds are formed by adding structure to a topological manifold (e.g. differentiable manifolds are topological manifolds equipped with a differential structure). Every manifold has an "underlying" topological manifold, obtained by simply "forgetting" the added structure. However, not every topological manifold can be endowed with a particular additional structure. For example, the E8 manifold is a topological manifold which cannot be endowed with a differentiable structure. Formal definition A topological space X is called locally Euclidean if there is a non-negative integer n such that every point in X has a neighborhood which is homeomorphic to real n-space Rn. A topological manifold is a locally Euclidean Hausdorff space. It is common to place additional requirements on topological manifolds. In particular, many authors define them to be paracompact or second-countable. In the remainder of this article a manifold will mean a topological manifold. An n-manifold will mean a topological manifold such that every point has a neighborhood homeomorphic to Rn. Examples n-manifolds The real coordinate space Rn is an n-manifold. Any discrete space is a 0-dimensional manifold. A circle is a compact 1-manifold. A torus and a Klein bottle are compact 2-manifolds (or surfaces). The n-dimensional sphere Sn is a compact n-manifold. The n-dimensional torus Tn (the product of n circles) is a compact n-manifold. Projective manifolds Projective spaces over the reals, complexes, or quaternions are compact manifolds. Real projective space RPn is an n-dimensional manifold. Complex projective space CPn is a 2n-dimensional manifold. Quaternionic projective space HPn is a 4n-dimensional manifold. Manifolds related to projective space include Grassmannians, flag manifolds, and Stiefel manifolds. Other manifolds Differentiable manifolds are a class of topological manifolds equipped with a differential structure. Lens spaces are a class of differentiable manifolds that are quotients of odd-dimensional spheres. Lie groups are a class of differentiable manifolds equipped with a compatible group structure. The E8 manifold is a topological manifold which cannot be given a differentiable structure. Properties The property of being locally Euclidean is preserved by local homeomorphisms. That is, if X is locally Euclidean of dimension n and f : Y → X is a local homeomorphism, then Y is locally Euclidean of dimension n. In particular, being locally Euclidean is a topological property. Manifolds inherit many of the local properties of Euclidean space. In particular, they are locally compact, locally connected, first countable, locally contractible, and locally metrizable. Being locally compact Hausdorff spaces, manifolds are necessarily Tychonoff spaces. Adding the Hausdorff condition can make several properties become equivalent for a manifold. As an example, we can show that for a Hausdorff manifold, the notions of σ-compactness and second-countability are the same. Indeed, a Hausdorff manifold is a locally compact Hausdorff space, hence it is (completely) regular. Assume such a space X is σ-compact. Then it is Lindelöf, and because Lindelöf + regular implies paracompact, X is metrizable. 
But in a metrizable space, second-countability coincides with being Lindelöf, so X is second-countable. Conversely, if X is a Hausdorff second-countable manifold, it must be σ-compact. A manifold need not be connected, but every manifold M is a disjoint union of connected manifolds. These are just the connected components of M, which are open sets since manifolds are locally connected. Being locally path connected, a manifold is path-connected if and only if it is connected. It follows that the path-components are the same as the components. The Hausdorff axiom The Hausdorff property is not a local one; so even though Euclidean space is Hausdorff, a locally Euclidean space need not be. It is true, however, that every locally Euclidean space is T1. An example of a non-Hausdorff locally Euclidean space is the line with two origins. This space is created by replacing the origin of the real line with two points, an open neighborhood of either of which includes all nonzero numbers in some open interval centered at zero. This space is not Hausdorff because the two origins cannot be separated. Compactness and countability axioms A manifold is metrizable if and only if it is paracompact. The long line is an example of a normal Hausdorff 1-dimensional topological manifold that is neither metrizable nor paracompact. Since metrizability is such a desirable property for a topological space, it is common to add paracompactness to the definition of a manifold. In any case, non-paracompact manifolds such as the long line are generally regarded as pathological. Paracompact manifolds have all the topological properties of metric spaces. In particular, they are perfectly normal Hausdorff spaces. Manifolds are also commonly required to be second-countable. This is precisely the condition required to ensure that the manifold embeds in some finite-dimensional Euclidean space. For any manifold the properties of being second-countable, Lindelöf, and σ-compact are all equivalent. Every second-countable manifold is paracompact, but not vice versa; however, the converse is nearly true: a paracompact manifold is second-countable if and only if it has a countable number of connected components. In particular, a connected manifold is paracompact if and only if it is second-countable. Every second-countable manifold is separable and paracompact. Moreover, if a manifold is separable and paracompact then it is also second-countable. Every compact manifold is second-countable and paracompact. Dimensionality By invariance of domain, a non-empty n-manifold cannot be an m-manifold for n ≠ m. The dimension of a non-empty n-manifold is n. Being an n-manifold is a topological property, meaning that any topological space homeomorphic to an n-manifold is also an n-manifold. Coordinate charts By definition, every point of a locally Euclidean space has a neighborhood homeomorphic to an open subset of Rn. Such neighborhoods are called Euclidean neighborhoods. It follows from invariance of domain that Euclidean neighborhoods are always open sets. One can always find Euclidean neighborhoods that are homeomorphic to "nice" open sets in Rn. Indeed, a space M is locally Euclidean if and only if either of the following equivalent conditions holds: every point of M has a neighborhood homeomorphic to an open ball in Rn; every point of M has a neighborhood homeomorphic to Rn itself. A Euclidean neighborhood homeomorphic to an open ball in Rn is called a Euclidean ball. 
Euclidean balls form a basis for the topology of a locally Euclidean space. For any Euclidean neighborhood U, a homeomorphism φ : U → φ(U) ⊆ Rn is called a coordinate chart on U (although the word chart is frequently used to refer to the domain or range of such a map). A space M is locally Euclidean if and only if it can be covered by Euclidean neighborhoods. A set of Euclidean neighborhoods that cover M, together with their coordinate charts, is called an atlas on M. (The terminology comes from an analogy with cartography whereby a spherical globe can be described by an atlas of flat maps or charts). Given two charts φ : U → Rn and ψ : V → Rn with overlapping domains U and V, there is a transition function ψ ∘ φ−1 : φ(U ∩ V) → ψ(U ∩ V). Such a map is a homeomorphism between open subsets of Rn. That is, coordinate charts agree on overlaps up to homeomorphism. Different types of manifolds can be defined by placing restrictions on the types of transition maps allowed. For example, for differentiable manifolds the transition maps are required to be smooth. Classification of manifolds Discrete spaces (0-Manifold) A 0-manifold is just a discrete space. A discrete space is second-countable if and only if it is countable. Curves (1-Manifold) Every nonempty, paracompact, connected 1-manifold is homeomorphic either to R or the circle. Surfaces (2-Manifold) Every nonempty, compact, connected 2-manifold (or surface) is homeomorphic to the sphere, a connected sum of tori, or a connected sum of projective planes. Volumes (3-Manifold) A classification of 3-manifolds results from Thurston's geometrization conjecture, proven by Grigori Perelman in 2003. More specifically, Perelman's results provide an algorithm for deciding if two three-manifolds are homeomorphic to each other. General n-manifold The full classification of n-manifolds for n greater than three is known to be impossible; it is at least as hard as the word problem in group theory, which is known to be algorithmically undecidable. In fact, there is no algorithm for deciding whether a given manifold is simply connected. There is, however, a classification of simply connected manifolds of dimension ≥ 5. Manifolds with boundary A slightly more general concept is sometimes useful. A topological manifold with boundary is a Hausdorff space in which every point has a neighborhood homeomorphic to an open subset of the Euclidean half-space Rn+ = {(x1, …, xn) ∈ Rn : xn ≥ 0} (for a fixed n). Every topological manifold is a topological manifold with boundary, but not vice versa. Constructions There are several methods of creating manifolds from other manifolds. Product manifolds If M is an m-manifold and N is an n-manifold, the Cartesian product M×N is an (m+n)-manifold when given the product topology. Disjoint union The disjoint union of a countable family of n-manifolds is an n-manifold (the pieces must all have the same dimension). Connected sum The connected sum of two n-manifolds is defined by removing an open ball from each manifold and taking the quotient of the disjoint union of the resulting manifolds with boundary, with the quotient taken with respect to a homeomorphism between the boundary spheres of the removed balls. This results in another n-manifold. Submanifold Any open subset of an n-manifold is an n-manifold with the subspace topology. Footnotes References External links Manifolds Properties of topological spaces
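As a concrete illustration of charts and transition maps, the following LaTeX fragment works out the standard two-chart atlas on the circle S1 given by stereographic projection; it is an illustrative example added here for clarity, not part of the original article.

```latex
% Illustrative atlas on the circle S^1 with two stereographic charts.
\[
  U_N = S^1 \setminus \{(0,1)\}, \qquad U_S = S^1 \setminus \{(0,-1)\},
\]
\[
  \varphi_N(x,y) = \frac{x}{1-y}, \qquad \varphi_S(x,y) = \frac{x}{1+y},
\]
% Transition function on the overlap (the circle minus both poles):
\[
  \varphi_S \circ \varphi_N^{-1}(t) = \frac{1}{t}
  \quad \text{on } \varphi_N(U_N \cap U_S) = \mathbb{R}\setminus\{0\},
\]
% which is a homeomorphism between open subsets of R, as required of a transition map.
```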
Topological manifold
[ "Mathematics" ]
2,217
[ "Properties of topological spaces", "Space (mathematics)", "Topological spaces", "Topology", "Manifolds" ]
2,120,402
https://en.wikipedia.org/wiki/Coastal%20morphodynamics
Coastal morphodynamics refers to the study of the interaction and adjustment of seafloor topography and fluid hydrodynamic processes, of seafloor morphologies, and of the sequences of change involving the motion of sediment. Hydrodynamic processes include those of waves, tides and wind-induced currents. Anthropogenic climate change is causing changes in coastal processes that are interconnected with those caused by natural processes. While hydrodynamic processes respond instantaneously to morphological change, morphological change requires the redistribution of sediment. As sediment takes a finite time to move, there is a lag in the morphological response to hydrodynamic forcing. Sediment can therefore be considered to be a time-dependent coupling mechanism. Since the boundary conditions of hydrodynamic forcing change regularly, this may mean that the beach never attains equilibrium. Morphodynamic processes exhibit positive and negative feedbacks (such that beaches can, over different timescales, be considered to be both self-forcing and self-organised systems), nonlinearities and threshold behaviour. This systems approach to the coast was first developed by Wright and Thom in 1977 and finalized by Wright and Short in 1984. According to their dynamic and morphological characteristics, exposed sandy beaches can be classified into several morphodynamic types (Wright and Short, 1984; Short, 1996). These morphodynamic states form a continuum that ranges from the dissipative to the reflective extreme. Dissipative beaches are wide and flat in profile, composed of finer sediment, with a wide shoaling and surf zone in which waves break far from the intertidal zone as spilling breakers and dissipate their energy progressively. Reflective beaches are steep in profile, composed of coarse sediment, with a narrow shoaling zone and essentially no surf zone; the waves break abruptly on the intertidal zone as surging breakers. Coarser sediment allows percolation during the swash part of the wave cycle, thus reducing the strength of backwash and allowing material to be deposited in the swash zone. Depending on beach state, near-bottom currents show variations in the relative dominance of motions due to incident waves, subharmonic oscillations, infragravity oscillations, and mean longshore and rip currents. On reflective beaches, incident waves and subharmonic edge waves are dominant. In highly dissipative surf zones, shoreward decay of incident waves is accompanied by shoreward growth of infragravity energy; in the inner surf zone, currents associated with infragravity standing waves dominate. On intermediate states with pronounced bar-trough (straight or crescentic) topographies, incident wave orbital velocities are generally dominant but significant roles are also played by subharmonic and infragravity standing waves, longshore currents, and rips. The strongest rips and associated feeder currents occur in association with intermediate transverse bar and rip topographies. Transitions between beach states are often caused by changes in wave energy, with storms causing reflective beach profiles to flatten (offshore movement of sediment under steeper waves), thus adopting a more dissipative profile. 
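The dissipative–intermediate–reflective spectrum is often summarised with the dimensionless fall velocity (Dean) parameter. The Python sketch below illustrates the idea; the threshold values of roughly 1 and 6, and the example inputs, are commonly quoted indicative figures rather than values taken from the text above.

```python
def dean_parameter(breaker_height_m, fall_velocity_ms, wave_period_s):
    """Dimensionless fall velocity Omega = Hb / (ws * T)."""
    return breaker_height_m / (fall_velocity_ms * wave_period_s)

def beach_state(omega, reflective_limit=1.0, dissipative_limit=6.0):
    """Rough morphodynamic state from Omega (thresholds are indicative only)."""
    if omega < reflective_limit:
        return "reflective"      # steep, coarse sand, surging breakers
    if omega > dissipative_limit:
        return "dissipative"     # flat, fine sand, wide surf zone, spilling breakers
    return "intermediate"        # bar-trough / rip-dominated states

if __name__ == "__main__":
    # Example: 1.5 m breakers, 0.02 m/s fall velocity (fine sand), 8 s period
    omega = dean_parameter(1.5, 0.02, 8.0)
    print(f"Omega = {omega:.1f} -> {beach_state(omega)}")  # Omega ~ 9.4 -> dissipative
```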
Morphodynamic processes are also associated with other coastal landforms, for example spur and groove topography on coral reefs and tidal flats in infilling estuaries. See also Beach nourishment References Bibliography Wright, L.D., Short, A.D., 1984. "Morphodynamic variability of surf zones and beaches: a synthesis". Marine Geology, 56, 93–118. Short, A.D., 1999. Handbook of Beach and Shoreface Morphodynamics. West Sussex, UK: Wiley, 379pp. Beaches Coastal geography Physical oceanography
Coastal morphodynamics
[ "Physics" ]
831
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
2,121,149
https://en.wikipedia.org/wiki/Basis%20set%20%28chemistry%29
In theoretical and computational chemistry, a basis set is a set of functions (called basis functions) that is used to represent the electronic wave function in the Hartree–Fock method or density-functional theory in order to turn the partial differential equations of the model into algebraic equations suitable for efficient implementation on a computer. The use of basis sets is equivalent to the use of an approximate resolution of the identity: the orbitals are expanded within the basis set as a linear combination of the basis functions , where the expansion coefficients are given by . The basis set can either be composed of atomic orbitals (yielding the linear combination of atomic orbitals approach), which is the usual choice within the quantum chemistry community; plane waves which are typically used within the solid state community, or real-space approaches. Several types of atomic orbitals can be used: Gaussian-type orbitals, Slater-type orbitals, or numerical atomic orbitals. Out of the three, Gaussian-type orbitals are by far the most often used, as they allow efficient implementations of post-Hartree–Fock methods. Introduction In modern computational chemistry, quantum chemical calculations are performed using a finite set of basis functions. When the finite basis is expanded towards an (infinite) complete set of functions, calculations using such a basis set are said to approach the complete basis set (CBS) limit. In this context, basis function and atomic orbital are sometimes used interchangeably, although the basis functions are usually not true atomic orbitals. Within the basis set, the wavefunction is represented as a vector, the components of which correspond to coefficients of the basis functions in the linear expansion. In such a basis, one-electron operators correspond to matrices (a.k.a. rank two tensors), whereas two-electron operators are rank four tensors. When molecular calculations are performed, it is common to use a basis composed of atomic orbitals, centered at each nucleus within the molecule (linear combination of atomic orbitals ansatz). The physically best motivated basis set are Slater-type orbitals (STOs), which are solutions to the Schrödinger equation of hydrogen-like atoms, and decay exponentially far away from the nucleus. It can be shown that the molecular orbitals of Hartree–Fock and density-functional theory also exhibit exponential decay. Furthermore, S-type STOs also satisfy Kato's cusp condition at the nucleus, meaning that they are able to accurately describe electron density near the nucleus. However, hydrogen-like atoms lack many-electron interactions, thus the orbitals do not accurately describe electron state correlations. Unfortunately, calculating integrals with STOs is computationally difficult and it was later realized by Frank Boys that STOs could be approximated as linear combinations of Gaussian-type orbitals (GTOs) instead. Because the product of two GTOs can be written as a linear combination of GTOs, integrals with Gaussian basis functions can be written in closed form, which leads to huge computational savings (see John Pople). Dozens of Gaussian-type orbital basis sets have been published in the literature. Basis sets typically come in hierarchies of increasing size, giving a controlled way to obtain more accurate solutions, however at a higher cost. The smallest basis sets are called minimal basis sets. 
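For clarity, the orbital expansion described in the opening paragraph can be written out explicitly. The LaTeX fragment below gives the standard form of this expansion; it is a generic sketch, and the symbols ψ, φ and c are the conventional notation rather than notation taken from the text above.

```latex
% Expansion of a molecular orbital \psi_i in a finite basis \{\phi_\mu\}_{\mu=1}^{N}:
\[
  \psi_i(\mathbf{r}) \;=\; \sum_{\mu=1}^{N} c_{\mu i}\, \phi_\mu(\mathbf{r}),
\]
% where, for an orthonormal basis, the expansion coefficients are the overlaps
% c_{\mu i} = \langle \phi_\mu \,|\, \psi_i \rangle.
```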
A minimal basis set is one in which, on each atom in the molecule, a single basis function is used for each orbital in a Hartree–Fock calculation on the free atom. For atoms such as lithium, basis functions of p type are also added to the basis functions that correspond to the 1s and 2s orbitals of the free atom, because lithium also has a 1s2p bound state. For example, each atom in the second period of the periodic system (Li – Ne) would have a basis set of five functions (two s functions and three p functions). A minimal basis set may already be exact for the gas-phase atom at the self-consistent field level of theory. In the next level, additional functions are added to describe polarization of the electron density of the atom in molecules. These are called polarization functions. For example, while the minimal basis set for hydrogen is one function approximating the 1s atomic orbital, a simple polarized basis set typically has two s- and one p-function (which consists of three basis functions: px, py and pz). This adds flexibility to the basis set, effectively allowing molecular orbitals involving the hydrogen atom to be more asymmetric about the hydrogen nucleus. This is very important for modeling chemical bonding, because the bonds are often polarized. Similarly, d-type functions can be added to a basis set with valence p orbitals, and f-functions to a basis set with d-type orbitals, and so on. Another common addition to basis sets is the addition of diffuse functions. These are extended Gaussian basis functions with a small exponent, which give flexibility to the "tail" portion of the atomic orbitals, far away from the nucleus. Diffuse basis functions are important for describing anions or dipole moments, but they can also be important for accurate modeling of intra- and inter-molecular bonding. STO hierarchy The most common minimal basis set is STO-nG, where n is an integer. The STO-nG basis sets are derived from a minimal Slater-type orbital basis set, with n representing the number of Gaussian primitive functions used to represent each Slater-type orbital. Minimal basis sets typically give rough results that are insufficient for research-quality publication, but are much cheaper than their larger counterparts. Commonly used minimal basis sets of this type are: STO-3G STO-4G STO-6G STO-3G* – Polarized version of STO-3G There are several other minimum basis sets that have been used such as the MidiX basis sets. Split-valence basis sets During most molecular bonding, it is the valence electrons which principally take part in the bonding. In recognition of this fact, it is common to represent valence orbitals by more than one basis function (each of which can in turn be composed of a fixed linear combination of primitive Gaussian functions). Basis sets in which there are multiple basis functions corresponding to each valence atomic orbital are called valence double, triple, quadruple-zeta, and so on, basis sets (zeta, ζ, was commonly used to represent the exponent of an STO basis function). Since the different orbitals of the split have different spatial extents, the combination allows the electron density to adjust its spatial extent appropriate to the particular molecular environment. In contrast, minimal basis sets lack the flexibility to adjust to different molecular environments. Pople basis sets The notation for the split-valence basis sets arising from the group of John Pople is typically X-YZg. 
In this case, X represents the number of primitive Gaussians comprising each core atomic orbital basis function. The Y and Z indicate that the valence orbitals are composed of two basis functions each, the first one composed of a linear combination of Y primitive Gaussian functions, the other composed of a linear combination of Z primitive Gaussian functions. In this case, the presence of two numbers after the hyphens implies that this basis set is a split-valence double-zeta basis set. Split-valence triple- and quadruple-zeta basis sets are also used, denoted as X-YZWg, X-YZWVg, etc. Polarization functions are denoted by two different notations. The original Pople notation added "*" to indicate that all "heavy" atoms (everything but H and He) have a small set of polarization functions added to the basis (in the case of carbon, a set of 3d orbital functions). The "**" notation indicates that all "light" atoms also receive polarization functions (this adds a set of 2p orbitals to the basis for each hydrogen atom). Eventually it became desirable to add more polarization to the basis sets, and a new notation was developed in which the number and types of polarization functions are given explicitly in parentheses in the order (heavy,light) but with the principal quantum numbers of the orbitals implicit. For example, the * notation becomes (d) and the ** notation is now given as (d,p). If instead 3d and 4f functions were added to each heavy atom and 2p, 3p, 3d functions were added to each light atom, the notation would become (df,2pd). In all cases, diffuse functions are indicated by either adding a + before the letter G (diffuse functions on heavy atoms only) or ++ (diffuse functions are added to all atoms). Here is a list of commonly used split-valence basis sets of this type: 3-21G 3-21G* – Polarization functions on heavy atoms 3-21G** – Polarization functions on heavy atoms and hydrogen 3-21+G – Diffuse functions on heavy atoms 3-21++G – Diffuse functions on heavy atoms and hydrogen 3-21+G* – Polarization and diffuse functions on heavy atoms only 3-21+G** – Polarization functions on heavy atoms and hydrogen, as well as diffuse functions on heavy atoms 4-21G 4-31G 6-21G 6-31G 6-31G* 6-31+G* 6-31G(3df,3pd) – 3 sets of d functions and 1 set of f functions on heavy atoms and 3 sets of p functions and 1 set of d functions on hydrogen 6-311G 6-311G* 6-311+G* 6-311+G(2df,2p) In summary; the 6-31G* basis set (defined for the atoms H through Zn) is a split-valence double-zeta polarized basis set that adds to the 6-31G set five d-type Cartesian-Gaussian polarization functions on each of the atoms Li through Ca and ten f-type Cartesian Gaussian polarization functions on each of the atoms Sc through Zn. The Pople basis sets were originally developed for use in Hartree-Fock calculations. Since then, correlation-consistent or polarization-consistent basis sets (see below) have been developed which are usually more appropriate for correlated wave function calculations.  For Hartree–Fock or density functional theory, however, Pople basis sets are more efficient (per unit basis function) as compared to other alternatives, provided that the electronic structure program can take advantage of combined sp shells, and are still widely used for molecular structure determination of large molecules and as components of quantum chemistry composite methods. 
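To make the Pople notation concrete, here is a small illustrative Python sketch that unpacks a Pople-style label into the pieces described above. The parsing rules only cover the common patterns shown in the list; they are not an official or exhaustive grammar for these basis-set names.

```python
import re

def parse_pople(label):
    """Split a Pople basis-set label such as '6-31+G(d,p)' or '6-311++G**'
    into its components, following the conventions described above."""
    m = re.fullmatch(
        r"(?P<core>\d)-(?P<valence>\d+)(?P<diffuse>\+{0,2})G"
        r"(?P<stars>\*{0,2})(?:\((?P<polar>[^)]*)\))?",
        label,
    )
    if m is None:
        raise ValueError(f"not a recognised Pople-style label: {label}")
    valence = m.group("valence")
    return {
        "core_primitives": int(m.group("core")),          # Gaussians per core AO
        "valence_split": len(valence),                     # 2 = double-zeta, 3 = triple-zeta
        "valence_primitives": [int(d) for d in valence],   # primitives per valence function
        "diffuse_on": {0: "none", 1: "heavy atoms", 2: "all atoms"}[len(m.group("diffuse"))],
        "polarization": m.group("polar")
                        or {"": "none", "*": "d", "**": "d,p"}[m.group("stars")],
    }

if __name__ == "__main__":
    for name in ["6-31G*", "6-311++G**", "6-31+G(d,p)", "6-31G(3df,3pd)"]:
        print(name, "->", parse_pople(name))
```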
Correlation-consistent basis sets Some of the most widely used basis sets are those developed by Dunning and coworkers, since they are designed for converging post-Hartree–Fock calculations systematically to the complete basis set limit using empirical extrapolation techniques. For first- and second-row atoms, the basis sets are cc-pVNZ where N = D,T,Q,5,6,... (D = double, T = triple, etc.). The 'cc-p', stands for 'correlation-consistent polarized' and the 'V' indicates that only basis sets for the valence orbitals are of multiple-zeta quality. (Like the Pople basis sets, the core orbitals are of single-zeta quality.) They include successively larger shells of polarization (correlating) functions (d, f, g, etc.). More recently these 'correlation-consistent polarized' basis sets have become widely used and are the current state of the art for correlated or post-Hartree–Fock calculations. The aug- prefix is added if diffuse functions are included in the basis. Examples of these are: cc-pVDZ – Double-zeta cc-pVTZ – Triple-zeta cc-pVQZ – Quadruple-zeta cc-pV5Z – Quintuple-zeta, etc. aug-cc-pVDZ, etc. – Augmented versions of the preceding basis sets with added diffuse functions. cc-pCVDZ – Double-zeta with core correlation For period-3 atoms (Al–Ar), additional functions have turned out to be necessary; these are the cc-pV(N+d)Z basis sets. Even larger atoms may employ pseudopotential basis sets, cc-pVNZ-PP, or relativistic-contracted Douglas-Kroll basis sets, cc-pVNZ-DK. While the usual Dunning basis sets are for valence-only calculations, the sets can be augmented with further functions that describe core electron correlation. These core-valence sets (cc-pCVXZ) can be used to approach the exact solution to the all-electron problem, and they are necessary for accurate geometric and nuclear property calculations. Weighted core-valence sets (cc-pwCVXZ) have also been recently suggested. The weighted sets aim to capture core-valence correlation, while neglecting most of core-core correlation, in order to yield accurate geometries with smaller cost than the cc-pCVXZ sets. Diffuse functions can also be added for describing anions and long-range interactions such as Van der Waals forces, or to perform electronic excited-state calculations, electric field property calculations. A recipe for constructing additional augmented functions exists; as many as five augmented functions have been used in second hyperpolarizability calculations in the literature. Because of the rigorous construction of these basis sets, extrapolation can be done for almost any energetic property. However, care must be taken when extrapolating energy differences as the individual energy components converge at different rates: the Hartree–Fock energy converges exponentially, whereas the correlation energy converges only polynomially. To understand how to get the number of functions, consider the cc-pVDZ basis set for H: There are two s (L = 0) orbitals and one p (L = 1) orbital that has 3 components along the z-axis (mL = −1,0,1) corresponding to px, py and pz. Thus, there are five spatial orbitals in total. Note that each orbital can hold two electrons of opposite spin. As another example, Ar [1s, 2s, 2p, 3s, 3p] has 3 s orbitals (L = 0) and 2 sets of p orbitals (L = 1). Using cc-pVDZ, orbitals are [1s, 2s, 2p, 3s, 3s, 3p, 3p, 3d'] (where ' represents the added in polarisation orbitals), with 4 s orbitals (4 basis functions), 3 sets of p orbitals (3 × 3 = 9 basis functions), and 1 set of d orbitals (5 basis functions). 
Adding up the basis functions gives a total of 18 functions for Ar with the cc-pVDZ basis set. Polarization-consistent basis sets Density-functional theory has recently become widely used in computational chemistry. However, the correlation-consistent basis sets described above are suboptimal for density-functional theory, because the correlation-consistent sets have been designed for post-Hartree–Fock, while density-functional theory exhibits much more rapid basis set convergence than wave function methods. Adopting a similar methodology to the correlation-consistent series, Frank Jensen introduced polarization-consistent (pc-n) basis sets as a way to quickly converge density functional theory calculations to the complete basis set limit. Like the Dunning sets, the pc-n sets can be combined with basis set extrapolation techniques to obtain CBS values. The pc-n sets can be augmented with diffuse functions to obtain aug-pc-n sets. Karlsruhe basis sets Some of the various valence adaptations of Karlsruhe basis sets are briefly described below. def2-SV(P) – Split valence with polarization functions on heavy atoms (not hydrogen) def2-SVP – Split valence polarization def2-SVPD – Split valence polarization with diffuse functions def2-TZVP – Valence triple-zeta polarization def2-TZVPD – Valence triple-zeta polarization with diffuse functions def2-TZVPP – Valence triple-zeta with two sets of polarization functions def2-TZVPPD – Valence triple-zeta with two sets of polarization functions and a set of diffuse functions def2-QZVP – Valence quadruple-zeta polarization def2-QZVPD – Valence quadruple-zeta polarization with diffuse functions def2-QZVPP – Valence quadruple-zeta with two sets of polarization functions def2-QZVPPD – Valence quadruple-zeta with two sets of polarization functions and a set of diffuse functions Completeness-optimized basis sets Gaussian-type orbital basis sets are typically optimized to reproduce the lowest possible energy for the systems used to train the basis set. However, the convergence of the energy does not imply convergence of other properties, such as nuclear magnetic shieldings, the dipole moment, or the electron momentum density, which probe different aspects of the electronic wave function. Manninen and Vaara have proposed completeness-optimized basis sets, where the exponents are obtained by maximization of the one-electron completeness profile instead of minimization of the energy. Completeness-optimized basis sets are a way to easily approach the complete basis set limit of any property at any level of theory, and the procedure is simple to automatize. Completeness-optimized basis sets are tailored to a specific property. This way, the flexibility of the basis set can be focused on the computational demands of the chosen property, typically yielding much faster convergence to the complete basis set limit than is achievable with energy-optimized basis sets. Even-tempered basis sets In 1974 Bardo and Ruedenberg proposed a simple scheme to generate the exponents of a basis set that spans the Hilbert space evenly by following a geometric progression of the form αβ^i (i = 1, …, N) for each angular momentum, where N is the number of primitive functions. Here, only the two parameters α and β must be optimized, significantly reducing the dimension of the search space or even avoiding the exponent optimization problem. 
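A minimal Python sketch of the even-tempered recipe just described, generating a geometric progression of exponents from the two parameters α and β; the parameter values in the example are arbitrary placeholders, not optimized ones.

```python
def even_tempered_exponents(alpha, beta, n):
    """Return n even-tempered Gaussian exponents alpha * beta**i for i = 0..n-1
    (a geometric progression; typically beta > 1 so the exponents increase)."""
    if alpha <= 0 or beta <= 0 or n < 1:
        raise ValueError("need alpha > 0, beta > 0 and at least one primitive")
    return [alpha * beta**i for i in range(n)]

if __name__ == "__main__":
    # Hypothetical parameters for one angular momentum channel
    for zeta in even_tempered_exponents(alpha=0.02, beta=2.5, n=8):
        print(f"{zeta:10.4f}")
```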
In order to properly describe electronic delocalized states, a previously optimized standard basis set can be complemented with additional delocalized Gaussian functions with small exponent values, generated by the even-tempered scheme. This approach has also been employed to generate basis sets for other types of quantum particles rather than electrons, like quantum nuclei, negative muons or positrons. Plane-wave basis sets In addition to localized basis sets, plane-wave basis sets can also be used in quantum-chemical simulations. Typically, the choice of the plane wave basis set is based on a cutoff energy. The plane waves in the simulation cell that fit below the energy criterion are then included in the calculation. These basis sets are popular in calculations involving three-dimensional periodic boundary conditions. The main advantage of a plane-wave basis is that it is guaranteed to converge in a smooth, monotonic manner to the target wavefunction. In contrast, when localized basis sets are used, monotonic convergence to the basis set limit may be difficult due to problems with over-completeness: in a large basis set, functions on different atoms start to look alike, and many eigenvalues of the overlap matrix approach zero. In addition, certain integrals and operations are much easier to program and carry out with plane-wave basis functions than with their localized counterparts. For example, the kinetic energy operator is diagonal in the reciprocal space. Integrals over real-space operators can be efficiently carried out using fast Fourier transforms. The properties of the Fourier Transform allow a vector representing the gradient of the total energy with respect to the plane-wave coefficients to be calculated with a computational effort that scales as NPW*ln(NPW) where NPW is the number of plane-waves. When this property is combined with separable pseudopotentials of the Kleinman-Bylander type and pre-conditioned conjugate gradient solution techniques, the dynamic simulation of periodic problems containing hundreds of atoms becomes possible. In practice, plane-wave basis sets are often used in combination with an 'effective core potential' or pseudopotential, so that the plane waves are only used to describe the valence charge density. This is because core electrons tend to be concentrated very close to the atomic nuclei, resulting in large wavefunction and density gradients near the nuclei which are not easily described by a plane-wave basis set unless a very high energy cutoff, and therefore small wavelength, is used. This combined method of a plane-wave basis set with a core pseudopotential is often abbreviated as a PSPW calculation. Furthermore, as all functions in the basis are mutually orthogonal and are not associated with any particular atom, plane-wave basis sets do not exhibit basis-set superposition error. However, the plane-wave basis set is dependent on the size of the simulation cell, complicating cell size optimization. Due to the assumption of periodic boundary conditions, plane-wave basis sets are less well suited to gas-phase calculations than localized basis sets. Large regions of vacuum need to be added on all sides of the gas-phase molecule in order to avoid interactions with the molecule and its periodic copies. However, the plane waves use a similar accuracy to describe the vacuum region as the region where the molecule is, meaning that obtaining the truly noninteracting limit may be computationally costly. 
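To illustrate how a cutoff energy fixes the plane-wave basis, the Python sketch below counts the reciprocal-lattice vectors of a simple cubic cell whose kinetic energy falls below the cutoff. This is a toy construction in atomic units at the Gamma point only; production codes handle general cells, k-point sampling and pseudopotentials.

```python
import itertools
import math

def plane_wave_basis(cell_length_bohr, ecut_hartree):
    """Return the G-vectors (as integer triples n, with G = n * 2*pi/L) of a
    simple cubic cell whose kinetic energy |G|^2 / 2 lies below the cutoff."""
    b = 2.0 * math.pi / cell_length_bohr             # reciprocal lattice constant
    nmax = int(math.floor(math.sqrt(2.0 * ecut_hartree) / b))
    basis = []
    for n in itertools.product(range(-nmax, nmax + 1), repeat=3):
        g2 = (b * b) * sum(c * c for c in n)         # |G|^2 in atomic units
        if 0.5 * g2 <= ecut_hartree:
            basis.append(n)
    return basis

if __name__ == "__main__":
    pw = plane_wave_basis(cell_length_bohr=10.0, ecut_hartree=5.0)
    print(f"{len(pw)} plane waves below the cutoff")
```

Raising the cutoff enlarges the basis smoothly, which is the monotonic convergence property mentioned above.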
Linearized augmented-plane-wave basis sets A combination of some of the properties of localized basis sets and plane-wave approaches is achieved by linearized augmented-plane-wave (LAPW) basis sets. These are based on a partitioning of space into nonoverlapping spheres around each atom and an interstitial region in between the spheres. An LAPW basis function is a plane wave in the interstitial region, which is augmented by numerical atomic functions in each sphere. The numerical atomic functions hereby provide a linearized representation of wave functions for arbitrary energies around automatically determined energy parameters. Similarly to plane-wave basis sets an LAPW basis set is mainly determined by a cutoff parameter for the plane-wave representation in the interstitial region. In the spheres the variational degrees of freedom can be extended by adding local orbitals to the basis set. This allows representations of wavefunctions beyond the linearized description. The plane waves in the interstitial region imply three-dimensional periodic boundary conditions, though it is possible to introduce additional augmentation regions to reduce this to one or two dimensions, e.g., for the description of chain-like structures or thin films. The atomic-like representation in the spheres allows to treat each atom with its potential singularity at the nucleus and to not rely on a pseudopotential approximation. The disadvantage of LAPW basis sets is its complex definition, which comes with many parameters that have to be controlled either by the user or an automatic recipe. Another consequence of the form of the basis set are complex mathematical expressions, e.g., for the calculation of a Hamiltonian matrix or atomic forces. Real-space basis sets Real-space approaches offer powerful methods to solve electronic structure problems thanks to their controllable accuracy. Real-space basis sets can be thought to arise from the theory of interpolation, as the central idea is to represent the (unknown) orbitals in terms of some set of interpolation functions. Various methods have been proposed for constructing the solution in real space, including finite elements, basis splines, Lagrange sinc-functions, and wavelets. Finite difference algorithms are also often included in this category, even though precisely speaking, they do not form a proper basis set and are not variational unlike e.g. finite element methods. A common feature of all real-space methods is that the accuracy of the numerical basis set is improvable, so that the complete basis set limit can be reached in a systematical manner. Moreover, in the case of wavelets and finite elements, it is easy to use different levels of accuracy in different parts of the system, so that more points are used close to the nuclei where the wave function undergoes rapid changes and where most of the total energies lie, whereas a coarser representation is sufficient far away from nuclei; this feature is extremely important as it can be used to make all-electron calculations tractable. For example, in finite element methods (FEMs), the wave function is represented as a linear combination of a set of piecewise polynomials. Lagrange interpolating polynomials (LIPs) are a commonly-used basis for FEM calculations. The local interpolation error in LIP basis of order is of the form . The complete basis set can thereby be reached either by going to smaller and smaller elements (i.e. 
dividing space in smaller and smaller subdivisions; -adaptive FEM), by switching to the use of higher and higher order polynomials (-adaptive FEM), or by a combination of both strategies (-adaptive FEM). The use of high-order LIPs has been shown to be highly beneficial for accuracy. See also Basis set superposition error Angular momentum Atomic orbitals Molecular orbitals List of quantum chemistry and solid state physics software References All the many basis sets discussed here along with others are discussed in the references below which themselves give references to the original journal articles: https://web.archive.org/web/20070830043639/http://www.chem.swin.edu.au/modules/mod8/basis1.html External links EMSL Basis Set Exchange TURBOMOLE basis set library CRYSTAL – Basis Sets Library Dyall Basis Sets Library Peterson Group Correlation Consistent Basis Sets Sapporo Segmented Gaussian Basis Sets Library Stuttgart/Cologne energy-consistent (ab initio) pseudopotentials Library ChemViz – Basis Sets Lab Activity Quantum chemistry Computational chemistry Theoretical chemistry pl:Baza funkcyjna
Basis set (chemistry)
[ "Physics", "Chemistry" ]
5,461
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
2,122,657
https://en.wikipedia.org/wiki/Bhabha%20Atomic%20Research%20Centre
The Bhabha Atomic Research Centre (BARC) is India's premier nuclear research facility, headquartered in Trombay, Mumbai, Maharashtra, India. It was founded by Homi Jehangir Bhabha as the Atomic Energy Establishment, Trombay (AEET) in January 1954 as a multidisciplinary research program essential for India's nuclear program. It operates under the Department of Atomic Energy (DAE), which is directly overseen by the Prime Minister of India. BARC is a multi-disciplinary research centre with extensive infrastructure for advanced research and development covering the entire spectrum of nuclear science, chemical engineering, material sciences and metallurgy, electronic instrumentation, biology and medicine, supercomputing, high-energy physics and plasma physics and associated research for Indian nuclear programme and related areas. BARC's core mandate is to sustain peaceful applications of nuclear energy. It manages all facets of nuclear power generation, from the theoretical design of reactors to, computer modeling and simulation, risk analysis, development and testing of new reactor fuel, materials, etc. It also researches spent fuel processing and safe disposal of nuclear waste. Its other research focus areas are applications for isotopes in industries, radiation technologies and their application to health, food and medicine, agriculture and environment, accelerator and laser technology, electronics, instrumentation and reactor control and material science, environment and radiation monitoring etc. BARC operates a number of research reactors across the country. Its primary facilities are located in Trombay, with new facilities also located in Challakere in Chitradurga district of Karnataka. A new Special Mineral Enrichment Facility which focuses on enrichment of uranium fuel is under construction in Atchutapuram near Visakhapatnam in Andhra Pradesh, for supporting India's nuclear submarine program and produce high specific activity radioisotopes for extensive research. History When Homi Jehangir Bhabha was working at the Indian Institute of Science, there was no institute in India which had the necessary facilities for original work in nuclear physics, cosmic rays, high energy physics, and other frontiers of knowledge in physics. This prompted him to send a proposal in March 1944 to the Sir Dorabji Tata Trust for establishing "a vigorous school of research in fundamental physics". When Bhabha realised that technology development for the atomic energy programme could no longer be carried out within TIFR he proposed to the government to build a new laboratory entirely devoted to this purpose. For this purpose, 1200 acres of land was acquired at Trombay from the Bombay Government. Thus the Atomic Energy Establishment Trombay (AEET) started functioning in 1954. The same year the Department of Atomic Energy (DAE) was also established. Bhabha established the BARC Training School to cater to the manpower needs of the expanding atomic energy research and development program. Bhabha emphasized self-reliance in all fields of nuclear science and engineering. The Government of India created the Atomic Energy Establishment, Trombay (AEET) with Bhabha as the founding director on 3 January 1954. It was established to consolidate all the research and development activities for nuclear reactors and technology under the Atomic Energy Commission. 
All scientists and engineers engaged in the fields of reactor designing and development, instrumentation, metallurgy, and material science, etc., were transferred with their respective programs from the Tata Institute of Fundamental Research (TIFR) to AEET, with TIFR retaining its original focus for fundamental research in the sciences. After Bhabha's death in 1966, the centre was renamed as the Bhabha Atomic Research Centre on 22 January 1967. The first reactors at BARC and its affiliated power generation centres were imported from the west. India's first power reactors, installed at the Tarapur Atomic Power Station were from the United States. The primary importance of BARC is as a research centre. The BARC and the Indian government has consistently maintained that the reactors are used for this purpose only: Apsara (1956; named by the then Prime Minister of India, Jawaharlal Nehru when he likened the blue Cerenkov radiation to the beauty of the Apsaras), CIRUS (1960; the "Canada-India Reactor" with assistance from the US), the now-defunct ZERLINA (1961; Zero Energy Reactor for Lattice Investigations and Neutron Assay), Purnima I (1972), Purnima II (1984), Dhruva (1985), Purnima III (1990), and KAMINI. Apsara was India's first nuclear reactor built at BARC in 1956 to conduct basic research in nuclear physics. It is 1 MWTh light water cooled and moderated swimming pool type thermal reactor that went critical on August 4, 1956, and is suitable for production of isotopes, basic nuclear research, shielding experiments, neutron activation analysis, neutron radiography and testing of neutron detectors. It was shut down permanently in 2010 and replaced with Apsara-U. Purnima-I is a plutonium oxide fuelled 1 MWTh pulsed-fast reactor that was built starting in 1970 and went critical on 18 May 1972 to primarily support the validation of design parameters for development of plutonium-239 powered nuclear weapons. On the twentieth anniversary of the 1974 Pokhran nuclear test, Purnima's designer, P. K. Iyengar, reflected on the reactor's critical role: "Purnima was a novel device, built with about 20 kg of plutonium, a variable geometry of reflectors, and a unique control system. This gave considerable experience and helped to benchmark calculations regarding the behaviour of a chain-reacting system made out of plutonium. The kinetic behaviour of the system just above critical could be well studied. Very clever physicists could then calculate the time behaviour of the core of a bomb on isotropic compression. What the critical parameters would be, how to achieve optimum explosive power, and its dependence on the first self sustaining neutron trigger, were all investigated". It was decommissioned in 1973. Along with DRDO and other agencies and laboratories BARC also played an essential and important role in nuclear weapons technology and research. The plutonium used in India's 1974 Smiling Buddha nuclear test came from CIRUS. In 1974 the head of this entire nuclear bomb project was the director of the BARC, Raja Ramanna. The neutron initiator was of the polonium–beryllium type and code-named Flower was developed by BARC. The entire nuclear bomb was engineered and finally assembled by Indian engineers at Trombay before transportation to the test site. 
The 1974 test (and the 1998 tests that followed) gave Indian scientists the technological know-how and confidence not only to develop nuclear fuel for future reactors to be used in power generation and research but also the capacity to refine the same fuel into weapons-grade fuel to be used in the development of nuclear weapons. BARC was also involved in the Pokhran-II series of five nuclear tests conducted at the Pokhran Test Range in May 1998. It was the second instance of nuclear testing conducted by India after Smiling Buddha. The tests achieved their main objective of giving India the capability to build fission and thermonuclear weapons (hydrogen/fusion bombs) with yields up to 200 kilotons. The then Chairman of the Indian Atomic Energy Commission described each one of the explosions of Pokhran-II to be "equivalent to several tests carried out by other nuclear weapon states over decades". Subsequently, India established computer simulation capability to predict the yields of nuclear explosives whose designs are related to the designs of explosives used in this test. The scientists and engineers of the BARC, the Atomic Minerals Directorate for Exploration and Research (AMDER), and the Defence Research and Development Organisation (DRDO) were involved in the nuclear weapon assembly, layout, detonation and data collection. On 3 June 1998 BARC was hacked by the hacktivist group milw0rm, consisting of hackers from the United States, United Kingdom and New Zealand. They downloaded classified information, defaced the website and deleted data from servers. BARC also designed the Indian Pressurized Heavy Water Reactor (IPHWR) class of reactors; the baseline 220 MWe design was developed from the Canadian CANDU reactor. The design was later expanded into 540 MWe and 700 MWe designs. The IPHWR-220 (Indian Pressurized Heavy Water Reactor-220) was the first in the class series of Indian pressurized heavy-water reactors designed by the Bhabha Atomic Research Centre. It is a Generation II reactor developed from the earlier CANDU-based RAPS-1 and RAPS-2 reactors built at Rawatbhata, Rajasthan. Currently there are 14 units operational at various locations in India. Upon completion of the design of the IPHWR-220, a larger 540 MWe design was started around 1984 under the aegis of BARC in partnership with NPCIL. Two reactors of this design were built in Tarapur, Maharashtra starting in the year 2000 and the first was commissioned on 12 September 2005. The IPHWR-540 design was later upgraded to a 700 MWe design with the main objective of improving fuel efficiency and developing a standardized design to be installed at many locations across India as a fleet-mode effort. The design was also upgraded to incorporate Generation III+ features. Almost 100% of the parts of these indigenously designed reactors are manufactured by Indian industry. BARC designed and built India's first pressurised water reactor at Kalpakkam, an 80 MW land-based prototype of INS Arihant's nuclear power unit, as well as the Arihant's main propulsion reactor. Three other submarine vessels of the class (Arihant class), including the upcoming INS Arighat, S4 and S4*, will also get the same class of reactors as their primary propulsion. BARC also developed stabilization systems for seekers, antenna units for India's multirole fighter HAL Tejas, and contributed to the Chandrayaan-I and Mangalyaan missions. BARC has contributed to collaboration with various mega science projects of national and international repute viz. 
CERN (LHC), India-based Neutrino Observatory (INO), ITER, Low Energy High Intensity Proton Accelerator (LEHIPA), Facility for Antiproton and Ion Research (FAIR), Major Atmospheric Cerenkov Experiment Telescope (MACE), etc. In 2012 it was reported that new facilities and campuses of BARC were planned in Atchutapuram, near Visakhapatnam in Andhra Pradesh, and in Challakere in Chitradurga district in Karnataka. BARC would be setting 30 MW special research reactor using an enriched uranium fuel at Visakhapatnam to meet the demand for high specific activity radio isotopes and carry out extensive research and development in nuclear sector. The site would also support the nuclear submarine program. Description BARC is a multi-disciplinary research centre with extensive infrastructure for advanced research and development covering the entire spectrum of nuclear science, chemical engineering, material sciences and metallurgy, electronic instrumentation, biology and medicine, supercomputing, high-energy physics and plasma physics and associated research for Indian nuclear programme and related areas. BARC is a premier nuclear and multi-disciplinary research organisation though founded primarily to serve India's nuclear program and its peaceful applications of nuclear energy does an extensive and advanced research and development covering the entire spectrum of nuclear science, chemical engineering, Radiology and their application to health, food, medicine, agriculture and environment, accelerator and Laser Technology, electronics, High Performance Computing, instrumentation and reactor control, Materials Science and radiation monitoring, high-energy physics and plasma physics among others. Organisation and governance BARC is an agency of the Department of Atomic Energy. It is divided into a number of Groups, each under a director, and many more Divisions. Nuclear Recycle Board BARC's Nuclear Recycle Board (NRB) was formed in 2009. It is located in three cities – Mumbai, Tarapur, and Kalpakkam. Areas of research BARC conducts extensive and advanced research and development covering the entire spectrum of nuclear science, chemical engineering, material sciences and metallurgy, electronics instrumentation, biology and medicine, advance computing, high-energy plasma physics and associated research for Indian nuclear program and related areas. The few are: Thorium fuel cycle India has a unique position in the world, in terms of availability of nuclear fuel resource. It has a limited resource of uranium but a large resource of thorium. The beach sands of Kerala and Orissa have rich reserves of monazite, which contains about 8–10% thorium. Studies have been carried out on all aspects of thorium fuel cycle - mining and extraction, fuel fabrication, utilisation in different reactor systems, evaluation of its various properties and irradiation behaviour, reprocessing and recycling. Some of the important milestones achieved / technological progress made in these are as follows: The process of producing thoria from monazite is well established. IREL has produced several tonnes of nuclear grade thoria powder The fabrication of thoria based fuel by powder-pellet method is well established. Few tonnes of thoria fuel have been fabricated at BARC and NFC for various irradiations in research and power reactors. Studies have been carried out regarding use of thorium in different types of reactors with respect to fuel management, reactor control and fuel utilisation. 
A Critical Facility has been constructed and is being used for carrying out experiments with thoria based fuels. Thoria based fuel irradiations have been carried out in our research and power reactors. Thoria fuel rods in the reflector region of research reactor CIRUS. Thoria fuel assemblies as reactivity load in research reactor Dhruva. Thoria fuel bundles for flux flattening in the Initial Core of PHWRs. Thoria blanket assemblies in FBTR. (Th-Pu)MOX fuel pins of BWR, PHWR and AHWR design in research reactors CIRUS and Dhruva. Post-irradiation examinations have been carried out on the irradiated PHWR thoria fuel bundles and (Th-Pu) MOX fuel pins. Thermo-physical and thermodynamic properties have been evaluated for the thoria based fuels. Thoria fuel rods irradiated in CIRUS have been reprocessed at Uranium Thorium Separation Facility (UTSF) BARC. The recovered 233U has been fabricated as fuel for KAMINI reactor. Thoria blanket assemblies irradiated in FBTR have been reprocessed at IGCAR. The recovered 233U has been used for experimental irradiation of PFBR type fuel assembly in FBTR. Thoria fuel bundles irradiated in PHWRs will be reprocessed in Power Reactor Thorium Reprocessing Facility (PRTRF). The recovered 233U will be used for reactor physics experiments in AHWR-Critical Facility. Advanced reactors AHWR and AHWR300-LEU have been designed at BARC to provide impetus to the large-scale utilisation of thorium. Reprocessing and nuclear waste management After certain energy utilization, known as burn-up (a legacy of thermal power) is reached, nuclear fuel in a reactor is replaced by fresh fuel so that fission chain reactions can sustain and desired power output can be maintained. The spent fuel discharged from the reactor is known as spent nuclear fuel (SNF). BARC has come a long way since it first began reprocessing of spent fuel in the year 1964 at Trombay. India has more than five decades of experience for reprocessing of spent fuel of Uranium based first stage reactor resulting in development of well matured and highly evolved PUREX based reprocessing flow sheet involving recovery of SNM. Implementation of thorium fuel cycle requires extraction of 233U from irradiated thorium fuel and its re-insertion into the fuel cycle. Based on indigenous efforts, a flow sheet for reprocessing of spent thoria rods was developed and demonstrated at Uranium Thorium Separation Facility (UTSF), Trombay. After gaining successful experience at UTSF, Power Reactor Thoria Reprocessing Facility (PRTRF) has been set up employing advanced laser based technology for dismantling of thoria bundle and single pin mechanical chopper for cutting of fuel pins. Thoria irradiated fuel bundles from PHWR were reprocessed using TBP as extractant to recover 233U. High Level Liquid Waste (HLLW) generated during reprocessing of spent fuel contains most of the radioactivity generated in entire nuclear fuel cycle. The HLLW is immobilised into an inert Sodium Boro-Silicate glass matrix through a process, called vitrification. The vitrified waste is stored for an interim period in an air cooled vault to facilitate the dissipation of heat generated during radioactive decay. Prior to its eventual disposal in geological disposal facility. Vitrification of HLLW is a complex process and poses challenges in view of high temperature operations in presence of high amount of radioactivity. As a result, very few countries in world could master the technology of vitrification of HLLW and India is among them. 
Three melter technologies, Induction Heated Metallic Melter (IHMM), Joule Heated Ceramic Melter (JHCM) and Cold Crucible Induction Melter (CCIM), have been indigenously developed for vitrification of HLLW. HLLW vitrification plants, based on IHMM or JHCM technologies, have been constructed and successfully operated at the Trombay, Tarapur and Kalpakkam sites. R&D in the field of partitioning of minor actinides from HLLW is aimed at separating out the long-lived radioactive waste constituents before they are immobilised in the glass matrix. The long-lived radio-contaminants are planned to be burnt in fast reactors or accelerator-driven sub-critical systems, converting them into short-lived species. This will greatly reduce the period for which the radionuclides need to be isolated from the environment. R&D is also directed towards the management of hulls (contaminated leftover pieces of zirconium cladding tube remaining after dissolution of the fuel) and towards a geological disposal facility for safe disposal of vitrified HLLW and long-lived waste, with the objective of long-term isolation of radionuclides from the human environment. Advanced Fuel Fabrication Facility The Advanced Fuel Fabrication Facility (AFFF), a MOX fuel fabrication facility, is part of the Nuclear Recycle Board (NRB) and is located at Tarapur, Maharashtra. AFFF has fabricated MOX fuels on an experimental basis for BWR, PHWR, FBTR and research reactors. It makes plutonium-based MOX fuel for stage 2 of the Indian nuclear program. The unit has successfully fabricated more than one lakh (100,000) PFBR fuel elements for Bhavini's PFBR at Kalpakkam. AFFF is presently engaged in the fabrication of PFBR fuel elements for reloads of the PFBR. AFFF is also involved in AHWR (thorium MOX) fuel fabrication for the third stage of the Indian nuclear program and is experimenting with different fabrication techniques. MOX fuel fabrication at AFFF follows the Powder Oxide Pelletisation (POP) method. Major operations are mixing and milling, pre-compaction, granulation, final compaction, sintering, centreless grinding, degassing, end-plug welding, decontamination of fuel elements and wire wrapping. AFFF also recycles rejects using either thermal pulverisation or microwave-based oxidation and reduction. AFFF uses laser welding for encapsulation of fuel elements along with GTAW. Basic and applied physics The interdisciplinary research includes investigation of matter under different physicochemical environments, including temperature, magnetic field and pressure. Reactors, ion and electron accelerators and lasers are being employed as tools to investigate crucial phenomena in materials over wide length and time scales. Major facilities operated by BARC for research in the physical sciences include the Pelletron-Superconducting linear accelerator at TIFR, the National Facility for Neutron Beam Research (NFNBR) at Dhruva, a number of state-of-the-art beam lines at the INDUS synchrotron, RRCAT-Indore, the TeV Atmospheric Cherenkov Telescope with Imaging Camera (TACTIC) at Mt. Abu, the Folded Tandem Ion Accelerator (FOTIA) and PURNIMA fast neutron facilities at BARC, the 3 MV Tandetron accelerator at the National Centre for Compositional Characterization of Materials (NCCCM) at Hyderabad, and the 10 MeV electron accelerator at the Electron Beam Centre at Navi Mumbai. 
BARC also has sustained programs of indigenous development of detectors, sensors, mass spectrometers, imaging techniques and multilayer mirrors. Recent achievements include: commissioning of the Major Atmospheric Cherenkov Experiment Telescope (MACE) at Ladakh, a time-of-flight neutron spectrometer at Dhruva, the beam-lines at INDUS (small- and wide-angle X-ray scattering (SWAXS), protein crystallography, infrared spectroscopy, extended X-ray absorption fine structure (EXAFS), photoelectron spectroscopy (PES/PEEM), energy- and angle-dispersive XRD, and imaging), commissioning of beam-lines and associated detector facilities at the BARC-TIFR Pelletron facility, the Low Energy High Intensity Proton Accelerator (LEHIPA) at BARC, and digital holographic microscopy for biological cell imaging at Vizag. The Low Energy High Intensity Proton Accelerator (LEHIPA) project is under installation at the common facility building in the BARC premises. The 20 MeV, 30 mA, CW proton linac will consist of a 50 keV ion source, a 3 MeV, 4 m long radio-frequency quadrupole (RFQ), a 3–20 MeV, 12 m long drift-tube linac (DTL) and a beam dump. The Major Atmospheric Cherenkov Experiment Telescope (MACE) is an imaging atmospheric Cherenkov telescope (IACT) located near Hanle, Ladakh, India. It is the highest (in altitude) and second largest Cherenkov telescope in the world. It was built by the Electronics Corporation of India, Hyderabad, for the Bhabha Atomic Research Centre and was assembled at the campus of the Indian Astronomical Observatory at Hanle. The telescope is the second-largest gamma-ray telescope in the world and will help the scientific community enhance its understanding in the fields of astrophysics, fundamental physics, and particle acceleration mechanisms. The largest telescope of the same class is the 28-metre-diameter High Energy Stereoscopic System (HESS) telescope being operated in Namibia. Ongoing basic and applied research encompasses a broad spectrum covering condensed matter physics, nuclear physics, astrophysical sciences and atomic and molecular spectroscopy. Important research areas include advanced magnetism, soft and nano-structured materials, energy materials, thin films and multi-layers, accelerator/reactor based fusion-fission studies, nuclear astrophysics, nuclear data management, reactor based neutrino physics, very high-energy astrophysics and astro-particle physics. Some of the important ongoing developmental activities are: the Indian Scintillator Matrix for Reactor Anti-Neutrinos (ISMRAN), neutron guides, polarizers and neutron supermirrors, Nb-based superconducting RF cavities, high-purity germanium detectors, 2-D neutron detectors, cryogen-free superconducting magnets, an electromagnetic separator for radio-isotopes, nuclear batteries and radioisotope thermoelectric generator (RTG) power sources, and a liquid hydrogen cold neutron source. Other activities include research and development towards the India-based Neutrino Observatory (INO) and quantum computing. High-performance computing BARC designed and developed a series of supercomputers for its internal use. They were mainly used for molecular dynamics simulations, reactor physics, theoretical physics, computational chemistry, computational fluid dynamics, and finite element analysis. The latest in the series is Anupam-Aganya. BARC started development of supercomputers under the ANUPAM project in 1991 and, to date, has developed more than 20 different computer systems. 
All ANUPAM systems have employed parallel processing as the underlying philosophy and MIMD (Multiple Instruction Multiple Data) as the core architecture. BARC, being a multidisciplinary research organization, has a large pool of scientists and engineers working in various aspects of nuclear science and technology, who are therefore engaged in computations of a very diverse nature. To keep the gestation period short, the parallel computers were built with commercially available off-the-shelf components, with BARC's major contribution being in the areas of system integration, system engineering, system software development, application software development, fine tuning of the system and support to a diverse set of users. The series started with a small four-processor system in 1991 with a sustained performance of 34 MFlops. Keeping in mind the ever increasing demands from the users, new systems have been built regularly with increasing computational power. The latest systems in the series are Anupam-Aganya, with a processing power of 270 TFLOPS, and the parallel processing supercomputer Anupam-Atulya, which provides a sustained LINPACK performance of 1.35 PetaFlops for solving complex scientific problems. Electronics instrumentation and computers BARC's research and development programme in electrical, electronics, instrumentation and computers is in the fields of nuclear science and technology, and this has resulted in the development of various indigenous technologies. In the field of nuclear energy, many control and instrumentation (C&I) systems, including in-service inspection systems, were designed, developed and deployed for nuclear reactors ranging from PHWR, AHWR, LWR and PFBR to new-generation research reactors, along with C&I for reprocessing facilities. Simulators developed for nuclear power plants are of immense value, as they provide the best training facilities for reactor personnel and are also used for licensing of reactor operators. Core competencies cover a wide spectrum and include process sensors, radiation detectors, nuclear instruments, microelectronics, MEMS, embedded real-time systems, modelling and simulation, computer networks, high-integrity software engineering, high-performance DAQ systems, high-voltage supplies, digital signal processing, image processing, deep learning, motion control, security electronics, medical electronics, etc. Other developments include stabilization systems for seekers, the antenna platform unit for the LCA HAL Tejas multi-mode radar, the servo system for the Indian Deep Space Network IDSN32 32-metre antenna which tracked Chandrayaan-I and Mangalyaan, an instrumented PIG for oil pipeline inspection, servo control and camera electronics for the MACE telescope, and radiometry and radiation monitoring systems. Various technology spin-offs include products developed for industrial, medical, transportation, security, aero-space and defense applications. Generic electronic products like the qualified programmable logic controller platform (TPLC-32), suitable for deployment in safety-critical applications, reactivity meters, machinery protection systems, security gadgets for physical protection, access control systems, perimeter intrusion detection systems, CCTV and video surveillance systems, a scanning electron microscope, and VHF communication systems have been developed as part of the indigenization process. Material Sciences and Engineering Materials science and engineering plays an important role in all aspects of sustaining and providing support for the Indian nuclear programme and also in developing advanced technologies. 
The minerals containing elements of interest to the DAE, e.g. uranium and rare-earth elements, are taken up for developing beneficiation techniques/flow sheets to improve the metal value for extraction. The metallic uranium required for research reactors is produced. Improvement of process efficiency for operating uranium mills is carried out, and inputs are provided for implementation at the plants of the Uranium Corporation of India. Process flow sheets to separate individual rare-earth oxides from different resources (including secondary sources, e.g. scrap/used products) are developed and demonstrated, and the technology is transferred to Indian Rare Earths Limited (IREL) for production at its plants. All the requirements of refractory materials for DAE applications, including neutron absorber applications, are being met by research, development and production in the Materials Group. The Materials Group works on the development of flow sheets/processes for the materials required for DAE plants and applications, e.g. titanium sponge, advanced alloys, and coatings produced using various processes including pack cementation, chemical vapour deposition, physical vapour deposition and electroplating/electroless plating. Recovery of high-purity cobalt from various wastes/scrap materials has also been demonstrated and the technologies transferred for production. Research aimed at advanced materials technologies, using thermodynamics, mechanics, simulation and modelling, characterisation and performance evaluation, is carried out. Studies aimed at understanding radiation damage in materials are undertaken using advanced characterization techniques to help in alloy development and material degradation assessment activities. Generation of thermo-physical and defect property databases of nuclear materials, e.g. thoria-based mixed oxide and metallic fuels, and studies on Fe-Zr alloys and natural and synthetic minerals as hosts for metallic waste immobilization, are being pursued through modelling and simulation. Development of novel solvents to extract selected elements from nuclear waste for medical applications, and specific metallic values from e-waste, is also being carried out. Technologies such as large-scale synthesis of carbon nanotubes (CNT), low-carbon ferro-alloys (FeV, FeMo, FeNb, FeW, FeTi and FeC), production of tungsten metal powder and fabrication of tungsten (W) and tungsten heavy alloy (WHA), and production of zirconium diboride (ZrB2) powder and fabrication of high-density ZrB2 shapes have been realised. Chemical Engineering and Sciences The key features underlying the development effort are self-reliance, achieving products with very high purity specifications, working with separation processes characterized by low separation factors, aiming at high recoveries, optimal utilization of scarce resources, environmental benignity, high energy efficiency and stable continuous operation. Non-power application of nuclear energy has been demonstrated in the area of water desalination using technologies such as multi-stage flash distillation and multi-effect distillation with thermo-vapour compression (MED-TVC). Membrane technologies have been deployed not only for nuclear waste treatment but for society at large, in line with the Jal Jeevan Mission of the Government of India to provide safe drinking water at the household level. 
Ongoing work includes the development and demonstration of fluidized bed technology for applications in the nuclear fuel cycle; synthesis and evaluation of novel extractants; synthesis of TBM materials (synthesis of lithium titanate pebbles); molecular modeling of various phenomena (such as permeation of hydrogen and its isotopes through different metals, desalination using carbon nanotubes, the effect of the composition of glass on properties relevant for vitrification, and the design of solvents and metal organic frameworks); applications of microreactors for intensification of specific processes; development of a low-temperature freeze desalination process; environment-friendly integrated zero-liquid-discharge based desalination systems; treatment of industrial effluents; new generation membranes (such as high-performance graphene-based nanocomposite membranes, membranes for haemodialysis, forward osmosis and metallic membranes); hydrogen generation and storage by various processes (electrochemical water splitting, the iodine-sulphur thermochemical cycle, and the copper-chlorine hybrid thermochemical cycle); development of adsorptive gel materials for specific separations; heavy water upgradation; metal coatings for various applications (such as membrane permeators, neutron generators and special applications); fluidized bed chemical vapour deposition; and chemical process applications of Ultrasound Technology (UT). A pre-cooled modified Claude cycle based 50 L/hr capacity helium liquefier (LHP50) has been developed and commissioned by BARC at Trombay. Major component technologies involved in LHP50 include ultra-high speed gas-bearing supported miniature turboexpanders and compact plate-fin heat exchangers, along with cryogenic piping and long-stem valves, all housed inside the LHP50 cold box. Other major equipment includes a coaxial helium transfer line and a liquid helium receiver vessel. Environment, Radiology and Radiochemical Science BARC also carries out environmental impact and dose/risk assessment for radiological and chemical contaminants, environmental surveillance and radiation protection for the entire range of nuclear fuel cycle facilities, and meteorological and hydro-geological investigations for DAE sites. Other activities include modelling of contaminant transport and dispersion in the atmosphere and hydrosphere, radiological impact assessment of waste management and disposal practices, development of environmental radiation monitoring systems, establishment of a country-wide radiation monitoring network, and establishment of benchmarks for assessing the radiological impact of nuclear power activities on the marine environment. The highlights of these programmes are positron and positronium chemistry, actinide chemistry and spectroscopy, isotope hydrology for water resource management, radiotracers for industrial applications, separation and purification of new radionuclides for medical applications, advanced fuel development by the sol-gel method, chemical quality control of nuclear fuels, complexation and speciation of actinides, and separation method development for back-end fuel cycle processes. 
The other major research projects are thermo-physical property evaluation of molten salt breeder reactor (MSBR) systems, development of core-catcher materials, hydrogen mitigation, catalysts for hydrogen production, hydrogen storage materials, nanotherapeutics and bio-sensors, decontamination of reactor components, biofouling control and thermal ecology studies, supramolecular chemistry, environmental and interfacial chemistry, ultrafast reaction dynamics, single molecule spectroscopy, synthesis and applications of nanomaterials, cold plasma applications, luminescent materials for bio-imaging, materials for light emitting devices and security applications, etc. Health, food and agriculture Activities include the development of new elite crop varieties, including oilseeds and pulses. Using radiation-induced mutagenesis, hybridization, and tissue culture techniques, 49 crop varieties have been developed, released and Gazette-notified for commercial cultivation. Development of molecular markers, transgenics, biosensors, and fertilizer formulations with improved nutrient use efficiency. Understanding DNA damage repair, replication, redox biology and autophagy processes, and development of radio-sensitizers and chemo-sensitizers for cancer therapy. Design and synthesis of organo-fluorophores and organic electronic molecules, relevant to nuclear sciences and societal benefits (advanced technology and health). Synthesis and development of nuclear medicine ligands for diagnosis and therapy of cancer and other diseases. Asymmetric total synthesis and organocatalytic methods (green chemistry approach) for the synthesis of biologically active compounds. R&D activities in the frontier areas of radiation biology for understanding the effect of low- and high-LET radiations, chronic and acute radiation exposure, high background radiation, and radionuclide exposure on mammalian cells, cancer cells, experimental rodents and human health. Preclinical and translational research is aimed at the development of new drugs and therapeutics for prevention and mitigation of radiation injury, de-corporation of heavy metals and treatment of inflammatory disorders and cancers. Studying macromolecular structures and protein-ligand interactions using biophysical techniques like X-ray crystallography, neutron scattering, circular dichroism and synchrotron radiation, with an aim towards ab-initio design of therapeutic molecules. Understanding the cellular and molecular basis of stress response in bacteria, plants and animals. Understanding the extraordinary resistance to DNA damage and oxidative stress tolerance in bacteria, and epigenetic regulation of alternative splicing in plants and mammalian cells. Development of CRISPR-Cas mediated genome editing technologies for both basic and applied research, and development of gene technologies and products for bio-medical applications. Studies on uranium sequestration by Nostoc and bacteria isolated from uranium mines. Research and development of novel radiopharmaceuticals for diagnostic and therapeutic purposes. Synthesis of substrates from suitable precursors for use in radio-labeling with diagnostic (99mTc) and therapeutic (177Lu, 153Sm, 166Ho, 186/188Re, 109Pd, 90Y, 175Yb, 170Tm) radioisotopes in the preparation of agents intended for use as radiopharmaceuticals. 
Custom preparation of special sources to suit the requirements of the Defence Research and Development Organisation (DRDO) and national research laboratories such as the National Physics Research Laboratory, ISRO, etc. India's three-stage nuclear power programme India's three-stage nuclear power programme was formulated by Homi Bhabha in the 1950s to secure the country's long-term energy independence, through the use of uranium and thorium reserves found in the monazite sands of coastal regions of South India. The ultimate focus of the programme is on enabling the thorium reserves of India to be utilised in meeting the country's energy requirements. Thorium is particularly attractive for India, as it has only around 1–2% of the global uranium reserves, but one of the largest shares of global thorium reserves, at about 25% of the world's known thorium reserves. Stage I – Pressurised Heavy Water Reactor In the first stage of the programme, natural uranium fueled pressurised heavy water reactors (PHWR) produce electricity while generating plutonium-239 as a by-product. The PHWR was a natural choice for implementing the first stage because it was the most efficient reactor design in terms of uranium utilisation, and the existing Indian infrastructure in the 1960s allowed for quick adoption of the PHWR technology. Natural uranium contains only 0.7% of the fissile isotope uranium-235. Most of the remaining 99.3% is uranium-238, which is not fissile but can be converted in a reactor to the fissile isotope plutonium-239. Heavy water (deuterium oxide, D2O) is used as moderator and coolant. Stage II – Fast Breeder Reactor In the second stage, fast breeder reactors (FBRs) would use a mixed oxide (MOX) fuel made from plutonium-239, recovered by reprocessing spent fuel from the first stage, and natural uranium. In FBRs, plutonium-239 undergoes fission to produce energy, while the uranium-238 present in the mixed oxide fuel transmutes to additional plutonium-239. Thus, the Stage II FBRs are designed to "breed" more fuel than they consume. Once the inventory of plutonium-239 is built up, thorium can be introduced as a blanket material in the reactor and transmuted to uranium-233 for use in the third stage. The surplus plutonium bred in each fast reactor can be used to set up more such reactors, and might thus grow the Indian civil nuclear power capacity to the point where the third stage reactors using thorium as fuel can be brought online. The design of the country's first fast breeder, called the Prototype Fast Breeder Reactor (PFBR), was done by the Indira Gandhi Centre for Atomic Research (IGCAR). Doubling time Doubling time refers to the time required to extract as output double the amount of fissile fuel that was fed as input into the breeder reactors. This metric is critical for understanding the time durations that are unavoidable while transitioning from the second stage to the third stage of Bhabha's plan, because building up a sufficiently large fissile stock is essential to the large-scale deployment of the third stage. Stage III – Thorium Based Reactors A Stage III reactor or an advanced nuclear power system involves a self-sustaining series of thorium-232–uranium-233 fuelled reactors. This would be a thermal breeder reactor, which in principle can be refueled – after its initial fuel charge – using only naturally occurring thorium. 
According to the three-stage programme, Indian nuclear energy could grow to about 10 GW through PHWRs fueled by domestic uranium, and the growth above that would have to come from FBRs, up to about 50 GW. The third stage is to be deployed only after this capacity has been achieved. Parallel approaches As there is a long delay before direct thorium utilisation in the three-stage programme, the country is looking at reactor designs that allow more direct use of thorium in parallel with the sequential three-stage programme. Three options under consideration are the Indian Accelerator Driven Systems (IADS), the Advanced Heavy Water Reactor (AHWR) and the Compact High Temperature Reactor. A molten salt reactor is also under development. India's Department of Atomic Energy and the US's Fermilab are designing unique, first-of-their-kind accelerator-driven systems. No country has yet built an accelerator-driven system for power generation. Anil Kakodkar, former chairman of the Atomic Energy Commission, called this a mega science project and a "necessity" for humankind. Reactor design BARC has developed a wide array of nuclear reactor designs for nuclear research, production of radioisotopes, naval propulsion and electricity generation. Research reactors and production of radioisotopes Commercial reactors and power generation Pressurized heavy-water reactors BARC has developed various sizes of the IPHWR class of pressurized heavy-water reactors, powered by natural uranium, for the first stage of the three-stage nuclear power programme; these produce electricity and plutonium-239 to power the fast-breeder reactors being developed by IGCAR for the second stage of the programme. The IPHWR class was developed from the CANDU reactors built at RAPS in Rawatbhata, Rajasthan. As of 2020, three successively larger designs, the IPHWR-220, IPHWR-540 and IPHWR-700, with electricity generation capacities of 220 MWe, 540 MWe and 700 MWe respectively, have been developed. Advanced heavy-water reactor BARC is developing a 300 MWe advanced heavy-water reactor design that is powered by thorium-232 and uranium-233 to power the third stage of India's three-stage nuclear power programme. The AHWR is designed to operate on a closed nuclear fuel cycle. The AHWR-300 is expected to have a design life close to 100 years and will utilise uranium-233 produced in the fast-breeder reactors being developed by IGCAR. Indian molten salt breeder reactor The Indian molten salt breeder reactor (IMSBR) is the platform to burn thorium as part of the third stage of the Indian nuclear power programme. The fuel in the IMSBR is in the form of a continuously circulating molten fluoride salt, which flows through heat exchangers that ultimately transfer heat for power production to a supercritical Brayton cycle (SCBC), so as to achieve a larger energy conversion ratio compared to existing power conversion cycles. Because of the fluid fuel, online reprocessing is possible, extracting the 233Pa (formed in the conversion chain of 232Th to 233U) and allowing it to decay to 233U outside the core, thus making it possible to breed even in the thermal neutron spectrum. Hence the IMSBR can operate in a self-sustaining 233U-Th fuel cycle. Additionally, being a thermal reactor, its 233U requirement is lower (as compared to the fast spectrum), thus allowing a higher deployment potential. Light-water reactors BARC, with experience gained from the development of the light-water reactor for the Arihant-class submarine, is developing a large 900 MWe pressurized water reactor design known as the IPWR-900. 
The design will include Generation III+ safety features such as a passive decay heat removal system, an emergency core cooling system (ECCS), and a corium retention and core catcher system. Marine propulsion for naval application BARC has developed multiple light-water reactor designs suitable for nuclear marine propulsion of Indian Navy submarines, beginning with the CLWR-B1 reactor design for the Arihant-class submarine. A total of four submarines will be built in this class. India and the NPT India is not a party to the Nuclear Non-Proliferation Treaty (NPT), citing concerns that it unfairly favours the established nuclear powers and makes no provision for complete nuclear disarmament. Indian officials argued that India's refusal to sign the treaty stemmed from its fundamentally discriminatory character; the treaty places restrictions on the non-nuclear weapons states but does little to curb the modernisation and expansion of the nuclear arsenals of the nuclear weapons states. More recently, India and the United States signed an agreement to enhance nuclear cooperation between the two countries, and for India to participate in an international consortium on fusion research, ITER (International Thermonuclear Experimental Reactor). Civilian research The BARC also researches biotechnology at the Gamma Gardens and has developed numerous disease-resistant and high-yielding crop varieties, particularly groundnuts. It also conducts research in liquid metal magnetohydrodynamics for power generation. On 4 June 2005, intending to encourage research in basic sciences, BARC started the Homi Bhabha National Institute. Research institutions affiliated to BARC (Bhabha Atomic Research Centre) include IGCAR (Indira Gandhi Centre for Atomic Research), RRCAT (Raja Ramanna Centre for Advanced Technology), and VECC (Variable Energy Cyclotron Centre). Power projects that have benefited from BARC expertise but which fall under the NPCIL (Nuclear Power Corporation of India Limited) are KAPP (Kakrapar Atomic Power Project), RAPP (Rajasthan Atomic Power Project), and TAPP (Tarapur Atomic Power Project). The Bhabha Atomic Research Centre, in addition to its nuclear research mandate, also conducts research in other high-technology areas such as accelerators, micro electron beams, materials design, supercomputers, and computer vision, among others. The BARC has dedicated departments for these specialized fields. BARC has designed and developed, for its own use, an infrastructure of supercomputers, Anupam, using state-of-the-art technology. See also IPHWR, class of PHWR electricity generation reactors designed by BARC AHWR, thorium fuelled reactor being designed by BARC Milw0rm#BARC attack Department of Atomic Energy, Government of India Indira Gandhi Centre for Atomic Research Raja Ramanna Centre for Advanced Technology Variable Energy Cyclotron Centre Homi Bhabha Cancer Hospital and Research Centre (disambiguation) References 1954 establishments in Bombay State Atomic Energy Commission of India Companies based in Mumbai Executive branch of the government of India Homi Bhabha National Institute Nuclear technology in India Research institutes in Mumbai Technology companies established in 1954 Research institutes established in 1954 Energy research Nuclear research institutes
Bhabha Atomic Research Centre
[ "Engineering" ]
9,684
[ "Nuclear research institutes", "Nuclear organizations" ]
2,123,049
https://en.wikipedia.org/wiki/OWASP
The Open Worldwide Application Security Project (formerly Open Web Application Security Project) (OWASP) is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the fields of IoT, system software and web application security. OWASP provides free and open resources. It is led by a non-profit called The OWASP Foundation. The OWASP Top 10 2021 is the published result of recent research based on comprehensive data compiled from over 40 partner organizations. History Mark Curphey started OWASP on September 9, 2001. Jeff Williams served as the volunteer Chair of OWASP from late 2003 until September 2011. Matt Konda later chaired the Board. The OWASP Foundation, a 501(c)(3) non-profit organization in the US established in 2004, supports the OWASP infrastructure and projects. Since 2011, OWASP has also been registered as a non-profit organization in Belgium under the name of OWASP Europe VZW. In February 2023, it was reported by Bil Corry, an OWASP Foundation Global Board of Directors officer, on Twitter that the board had voted to rename the organization from the Open Web Application Security Project to its current name, replacing "Web" with "Worldwide". Publications and resources OWASP Top Ten: The "Top Ten", first published in 2003, is regularly updated. It aims to raise awareness about application security by identifying some of the most critical risks facing organizations. Many standards, books, tools, and organizations reference the Top 10 project, including MITRE, PCI DSS, the Defense Information Systems Agency (DISA-STIG), and the United States Federal Trade Commission (FTC). OWASP Software Assurance Maturity Model: The Software Assurance Maturity Model (SAMM) project's mission is to provide an effective and measurable way for all types of organizations to analyze and improve their software security posture. A core objective is to raise awareness and educate organizations on how to design, develop, and deploy secure software through a flexible self-assessment model. SAMM supports the complete software lifecycle and is technology and process agnostic. The SAMM model is designed to be evolutive and risk-driven in nature, acknowledging there is no single recipe that works for all organizations. OWASP Development Guide: The Development Guide provides practical guidance and includes J2EE, ASP.NET, and PHP code samples. The Development Guide covers an extensive array of application-level security issues, from SQL injection through modern concerns such as phishing, credit card handling, session fixation, cross-site request forgeries, compliance, and privacy issues. OWASP Testing Guide: The OWASP Testing Guide includes a "best practice" penetration testing framework that users can implement in their own organizations and a "low level" penetration testing guide that describes techniques for testing most common web application and web service security issues. Version 4 was published in September 2014, with input from 60 individuals. OWASP Code Review Guide: The code review guide is currently at release version 2.0, released in July 2017. OWASP Application Security Verification Standard (ASVS): A standard for performing application-level security verifications. OWASP XML Security Gateway (XSG) Evaluation Criteria Project. OWASP Top 10 Incident Response Guidance. This project provides a proactive approach to Incident Response planning. 
The intended audience of this document ranges from business owners to security engineers, developers, auditors, program managers, law enforcement, and legal counsel. OWASP ZAP Project: The Zed Attack Proxy (ZAP) is an easy-to-use integrated penetration testing tool for finding vulnerabilities in web applications. It is designed to be used by people with a wide range of security experience, including developers and functional testers who are new to penetration testing. Webgoat: a deliberately insecure web application created by OWASP as a guide for secure programming practices. Once downloaded, the application comes with a tutorial and a set of different lessons that instruct students how to exploit vulnerabilities, with the intention of teaching them how to write code securely. OWASP AppSec Pipeline: The Application Security (AppSec) Rugged DevOps Pipeline Project is a place to find information needed to increase the speed and automation of an application security program. AppSec Pipelines take the principles of DevOps and Lean and apply them to an application security program. OWASP Automated Threats to Web Applications: Published in July 2015, the OWASP Automated Threats to Web Applications Project aims to provide definitive information and other resources for architects, developers, testers and others to help defend against automated threats such as credential stuffing. The project outlines the top 20 automated threats as defined by OWASP. OWASP API Security Project: focuses on strategies and solutions to understand and mitigate the unique vulnerabilities and security risks of Application Programming Interfaces (APIs). Includes the most recent list, the API Security Top 10 2023. Certifications OWASP has several certification schemes to certify the knowledge of students in particular areas of security. Security Fundamentals: a baseline set of security standards applicable across technology stacks, teaching learners about the OWASP top ten vulnerabilities. A01:2021 Broken Access Control A02:2021 Cryptographic Failures A03:2021 Injection A04:2021 Insecure Design A05:2021 Security Misconfiguration: improper configuration of security settings, permissions, and controls that can lead to vulnerabilities A06:2021 Vulnerable and Outdated Components A07:2021 Identification and Authentication Failures A08:2021 Software and Data Integrity Failures A09:2021 Security Logging and Monitoring Failures A10:2021 Server-Side Request Forgery (SSRF): caused by a web application fetching a remote resource without validating the user-supplied URL Awards The OWASP organization received the 2014 Haymarket Media Group SC Magazine Editor's Choice award. See also Open Source Security Foundation References External links Computer security organizations Computer standards 501(c)(3) organizations Non-profit organisations based in Belgium Organizations established in 2001 2001 establishments in Belgium
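Two of the Top 10 categories listed above, Injection (A03) and Server-Side Request Forgery (A10), lend themselves to a brief code illustration. The following Python sketch is an illustrative assumption of how these issues appear and are mitigated in application code, not an OWASP deliverable; the table schema, hostnames and allow-list are hypothetical.

```python
import sqlite3
from urllib.parse import urlparse

# --- A03: Injection ---------------------------------------------------------
# Building SQL by string concatenation lets attacker-controlled input change
# the query; a parameterized query keeps data and code separate.

def find_user_unsafe(conn, username):
    # Vulnerable: username = "' OR '1'='1" returns every row.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized: the driver treats username strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# --- A10: Server-side request forgery ---------------------------------------
# SSRF arises when the server fetches a user-supplied URL without validation.
# One common mitigation is an allow-list of schemes and hosts (hypothetical
# hosts shown here).

ALLOWED_HOSTS = {"api.example.com", "cdn.example.com"}

def is_fetch_allowed(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

    malicious = "' OR '1'='1"
    print("unsafe:", find_user_unsafe(conn, malicious))   # leaks both rows
    print("safe:  ", find_user_safe(conn, malicious))     # returns nothing

    print(is_fetch_allowed("https://api.example.com/v1/data"))      # True
    print(is_fetch_allowed("http://169.254.169.254/latest/meta"))   # False
```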
OWASP
[ "Technology" ]
1,256
[ "Computer standards" ]
2,123,123
https://en.wikipedia.org/wiki/Axiomatic%20foundations%20of%20topological%20spaces
In the mathematical field of topology, a topological space is usually defined by declaring its open sets. However, this is not necessary, as there are many equivalent axiomatic foundations, each leading to exactly the same concept. For instance, a topological space determines a class of closed sets, of closure and interior operators, and of convergence of various types of objects. Each of these can instead be taken as the primary class of objects, with all of the others (including the class of open sets) directly determined from that new starting point. For example, in Kazimierz Kuratowski's well-known textbook on point-set topology, a topological space is defined as a set together with a certain type of "closure operator," and all other concepts are derived therefrom. Likewise, the neighborhood-based axioms (in the context of Hausdorff spaces) can be retraced to Felix Hausdorff's original definition of a topological space in Grundzüge der Mengenlehre. Many different textbooks use many different inter-dependences of concepts to develop point-set topology. The result is always the same collection of objects: open sets, closed sets, and so on. For many practical purposes, the question of which foundation is chosen is irrelevant, as long as the meaning and interrelation between objects (many of which are given in this article), which are the same regardless of choice of development, are understood. However, there are cases where it can be useful to have flexibility. For instance, there are various natural notions of convergence of measures, and it is not immediately clear whether they arise from a topological structure or not. Such questions are greatly clarified by the topological axioms based on convergence. Standard definitions via open sets A topological space is a set $X$ together with a collection $\mathcal{T}$ of subsets of $X$ satisfying: The empty set and $X$ are in $\mathcal{T}$. The union of any collection of sets in $\mathcal{T}$ is also in $\mathcal{T}$. The intersection of any pair of sets in $\mathcal{T}$ is also in $\mathcal{T}$; equivalently, the intersection of any finite collection of sets in $\mathcal{T}$ is also in $\mathcal{T}$. Given a topological space $(X, \mathcal{T})$, one refers to the elements of $\mathcal{T}$ as the open sets of $X$, and it is common only to refer to $X$ in this way, or by the label topology. Then one makes the following secondary definitions: Given a second topological space $Y$, a function $f : X \to Y$ is said to be continuous if and only if for every open subset $U$ of $Y$ one has that $f^{-1}(U)$ is an open subset of $X$. A subset of $X$ is closed if and only if its complement is open. Given a subset $A$ of $X$, the closure is the set of all points such that any open set containing such a point must intersect $A$. Given a subset $A$ of $X$, the interior is the union of all open sets contained in $A$. Given an element $x$ of $X$, one says that a subset $A$ is a neighborhood of $x$ if and only if $x$ is contained in an open subset of $X$ which is also a subset of $A$. Some textbooks use "neighborhood of $x$" to instead refer to an open set containing $x$. One says that a net converges to a point $x$ of $X$ if, for any open set $U$ containing $x$, the net is eventually contained in $U$. Given a set $X$, a filter is a collection of nonempty subsets of $X$ that is closed under finite intersection and under supersets. Some textbooks allow a filter to contain the empty set, and reserve the name "proper filter" for the case in which it is excluded. A topology on $X$ defines a notion of a filter converging to a point $x$ of $X$, by requiring that any open set containing $x$ is an element of the filter. Given a set $X$, a filterbase is a collection of nonempty subsets of $X$ such that every two of them intersect nontrivially and contain a third member of the filterbase in their intersection. 
Given a topology on one says that a filterbase converges to a point if every neighborhood of contains some element of the filterbase. Definition via closed sets Let be a topological space. According to De Morgan's laws, the collection of closed sets satisfies the following properties: The empty set and are elements of The intersection of any collection of sets in is also in The union of any pair of sets in is also in Now suppose that is only a set. Given any collection of subsets of which satisfy the above axioms, the corresponding set is a topology on and it is the only topology on for which is the corresponding collection of closed sets. This is to say that a topology can be defined by declaring the closed sets. As such, one can rephrase all definitions to be in terms of closed sets: Given a second topological space a function is continuous if and only if for every closed subset of the set is closed as a subset of a subset of is open if and only if its complement is closed. given a subset of the closure is the intersection of all closed sets containing given a subset of the interior is the complement of the intersection of all closed sets containing Definition via closure operators Given a topological space the closure can be considered as a map where denotes the power set of One has the following Kuratowski closure axioms: If is a set equipped with a mapping satisfying the above properties, then the set of all possible outputs of cl satisfies the previous axioms for closed sets, and hence defines a topology; it is the unique topology whose associated closure operator coincides with the given cl. As before, it follows that on a topological space all definitions can be phrased in terms of the closure operator: Given a second topological space a function is continuous if and only if for every subset of one has that the set is a subset of A subset of is open if and only if A subset of is closed if and only if Given a subset of the interior is the complement of Definition via interior operators Given a topological space the interior can be considered as a map where denotes the power set of It satisfies the following conditions: If is a set equipped with a mapping satisfying the above properties, then the set of all possible outputs of int satisfies the previous axioms for open sets, and hence defines a topology; it is the unique topology whose associated interior operator coincides with the given int. It follows that on a topological space all definitions can be phrased in terms of the interior operator, for instance: Given topological spaces and a function is continuous if and only if for every subset of one has that the set is a subset of A set is open if and only if it equals its interior. The closure of a set is the complement of the interior of its complement. Definition via exterior operators Given a topological space the exterior can be considered as a map where denotes the power set of It satisfies the following conditions: If is a set equipped with a mapping satisfying the above properties, then we can define the interior operator and vice versa. More precisely, if we define , satisfies the interior operator axioms, and hence defines a topology. Conversely, if we define , satisfies the above axioms. Moreover, these correspondence is 1-1. It follows that on a topological space all definitions can be phrased in terms of the exterior operator, for instance: The closure of a set is the complement of its exterior, . 
Given a second topological space a function is continuous if and only if for every subset of one has that the set is a subset of Equivalently, is continuous if and only if for every subset of one has that the set is a subset of A set is open if and only if it equals the exterior of its complement. A set is closed if and only if it equals the complement of its exterior. Definition via boundary operators Given a topological space the boundary can be considered as a map where denotes the power set of It satisfies the following conditions: If is a set equipped with a mapping satisfying the above properties, then we can define closure operator and vice versa. More precisely, if we define , satisfies closure axioms, and hence boundary operation defines a topology. Conversely, if we define , satisfies above axioms. Moreover, these correspondence is 1-1. It follows that on a topological space all definitions can be phrased in terms of the boundary operator, for instance: A set is open if and only if . A set is closed if and only if . Definition via derived sets The derived set of a subset of a topological space is the set of all points that are limit points of that is, points such that every neighbourhood of contains a point of other than itself. The derived set of , denoted , satisfies the following conditions: For all Since a set is closed if and only if , the derived set uniquely defines a topology. It follows that on a topological space all definitions can be phrased in terms of derived sets, for instance: . Given topological spaces and a function is continuous if and only if for every subset of one has that the set is a subset of . Definition via neighbourhoods Recall that this article follows the convention that a neighborhood is not necessarily open. In a topological space, one has the following facts: If is a neighborhood of then is an element of The intersection of two neighborhoods of is a neighborhood of Equivalently, the intersection of finitely many neighborhoods of is a neighborhood of If contains a neighborhood of then is a neighborhood of If is a neighborhood of then there exists a neighborhood of such that is a neighborhood of each point of . If is a set and one declares a nonempty collection of neighborhoods for every point of satisfying the above conditions, then a topology is defined by declaring a set to be open if and only if it is a neighborhood of each of its points; it is the unique topology whose associated system of neighborhoods is as given. It follows that on a topological space all definitions can be phrased in terms of neighborhoods: Given another topological space a map is continuous if and only for every element of and every neighborhood of the preimage is a neighborhood of A subset of is open if and only if it is a neighborhood of each of its points. Given a subset of the interior is the collection of all elements of such that is a neighbourhood of . Given a subset of the closure is the collection of all elements of such that every neighborhood of intersects Definition via convergence of nets Convergence of nets satisfies the following properties: Every constant net converges to itself. Every subnet of a convergent net converges to the same limits. If a net does not converge to a point then there is a subnet such that no further subnet converges to Equivalently, if is a net such that every one of its subnets has a sub-subnet that converges to a point then converges to /Convergence of iterated limits. 
If in and for every index is a net that converges to in then there exists a diagonal (sub)net of that converges to A refers to any subnet of The : notation denotes the net defined by whose domain is the set ordered lexicographically first by and then by explicitly, given any two pairs declare that holds if and only if both (1) and also (2) if then If is a set, then given a notion of net convergence (telling what nets converge to what points) satisfying the above four axioms, a closure operator on is defined by sending any given set to the set of all limits of all nets valued in the corresponding topology is the unique topology inducing the given convergences of nets to points. Given a subset of a topological space is open in if and only if every net converging to an element of is eventually contained in the closure of in is the set of all limits of all convergent nets valued in is closed in if and only if there does not exist a net in that converges to an element of the complement A subset is closed in if and only if every limit point of every convergent net in necessarily belongs to A function between two topological spaces is continuous if and only if for every and every net in that converges to in the net converges to in Definition via convergence of filters A topology can also be defined on a set by declaring which filters converge to which points. One has the following characterizations of standard objects in terms of filters and prefilters (also known as filterbases): Given a second topological space a function is continuous if and only if it preserves convergence of prefilters. A subset of is open if and only if every filter converging to an element of contains A subset of is closed if and only if there does not exist a prefilter on which converges to a point in the complement Given a subset of the closure consists of all points for which there is a prefilter on converging to A subset of is a neighborhood of if and only if it is an element of every filter converging to See also Citations Notes References Categories in category theory General topology
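For reference, the closure and interior operator axioms invoked in the sections above admit the following standard formulation. This LaTeX fragment is a supplementary sketch of the well-known Kuratowski-style axioms (with cl and int denoting maps from the power set of X to itself), not a quotation of any particular source.

```latex
% A map cl : \wp(X) \to \wp(X) is a Kuratowski closure operator iff
\begin{align*}
  &\operatorname{cl}(\varnothing) = \varnothing,\\
  &A \subseteq \operatorname{cl}(A),\\
  &\operatorname{cl}(A \cup B) = \operatorname{cl}(A) \cup \operatorname{cl}(B),\\
  &\operatorname{cl}(\operatorname{cl}(A)) = \operatorname{cl}(A),
\end{align*}
% for all A, B \subseteq X; the closed sets are then exactly the A with
% cl(A) = A.  Dually, int : \wp(X) \to \wp(X) is an interior operator iff
\begin{align*}
  &\operatorname{int}(X) = X,\\
  &\operatorname{int}(A) \subseteq A,\\
  &\operatorname{int}(A \cap B) = \operatorname{int}(A) \cap \operatorname{int}(B),\\
  &\operatorname{int}(\operatorname{int}(A)) = \operatorname{int}(A),
\end{align*}
% for all A, B \subseteq X; the open sets are then exactly the A with
% int(A) = A.
```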
Axiomatic foundations of topological spaces
[ "Mathematics" ]
2,570
[ "General topology", "Mathematical structures", "Topology", "Category theory", "Categories in category theory" ]
24,618,690
https://en.wikipedia.org/wiki/Circle%20grid%20analysis
Circle grid analysis (CGA), also known as circle grid strain analysis, is a method of measuring the strain levels of sheet metal after a part is formed by stamping or drawing. The name itself is a fairly accurate description of the process. Literally, a grid of circles of known diameter is etched onto the surface of the sheet metal to be formed. After the part is formed, the circles have been stretched into ellipses. By measuring the longest axis of the ellipse (which gives the "major strain") and the shortest axis of the ellipse (which gives the "minor strain"), it is possible to determine how close any stamped part is to splitting or fracturing. Application The goal of circle grid strain analysis is to predict potential forming problems before they occur. Once a forming problem has already appeared, circle grid analysis is of limited help unless the problem is intermittent enough that a "good" part can still be formed from time to time. See also Forming limit diagram Crankshaft deep rolling References Investigation of Forming Limit Curves of Various Sheet Materials Using Hydraulic Bulge Testing With Analytical, Experimental and FEA Techniques Introduction to Circle Grid Strain Analysis and Thickness Strain Analysis External links Industrial Forging Metal forming Mechanical engineering
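A minimal sketch of the measurement arithmetic described above: given the diameter of the etched circle and the measured ellipse axes, the major and minor engineering strains follow directly. The example numbers and the safety margin in the flagging helper are illustrative assumptions, not standard values.

```python
def circle_grid_strains(d0_mm: float, major_mm: float, minor_mm: float):
    """Engineering strains of one deformed grid circle.

    d0_mm    -- diameter of the etched circle before forming
    major_mm -- longest axis of the resulting ellipse
    minor_mm -- shortest axis of the resulting ellipse
    """
    major_strain = (major_mm - d0_mm) / d0_mm
    minor_strain = (minor_mm - d0_mm) / d0_mm
    return major_strain, minor_strain

def too_close_to_limit(major_strain, limit_strain, margin=0.10):
    """Flag a grid point whose major strain approaches an assumed forming
    limit; in practice each (minor, major) pair is compared against the
    sheet's forming limit curve."""
    return major_strain > limit_strain - margin

# Example: a 2.5 mm circle stretched into a 3.1 mm x 2.4 mm ellipse
# (illustrative numbers only).
e1, e2 = circle_grid_strains(2.5, 3.1, 2.4)
print(f"major strain = {e1:+.1%}, minor strain = {e2:+.1%}")
print("near limit?", too_close_to_limit(e1, limit_strain=0.35))
```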
Circle grid analysis
[ "Physics", "Engineering" ]
251
[ "Mechanical engineering stubs", "Applied and interdisciplinary physics", "Mechanical engineering" ]
24,620,634
https://en.wikipedia.org/wiki/Drainage%20equation
A drainage equation is an equation describing the relation between depth and spacing of parallel subsurface drains, depth of the watertable, and depth and hydraulic conductivity of the soils. It is used in drainage design. A well known steady-state drainage equation is the Hooghoudt drain spacing equation. Its original publication is in Dutch. The equation was introduced in the USA by van Schilfgaarde. Hooghoudt's equation Hooghoudt's equation can be written as: Q L² = 8 Kb d (Dd − Dw) + 4 Ka (Dd − Dw)² where: Q = steady state drainage discharge rate (m/day) Ka = hydraulic conductivity of the soil above drain level (m/day) Kb = hydraulic conductivity of the soil below drain level (m/day) Di = depth of the impermeable layer below drain level (m) Dd = depth of the drains (m) Dw = steady state depth of the watertable midway between the drains (m) L = spacing between the drains (m) d = equivalent depth, a function of L, (Di − Dd), and r r = drain radius (m) Steady (equilibrium) state condition In steady state, the level of the water table remains constant and the discharge rate (Q) equals the rate of groundwater recharge (R), i.e. the amount of water entering the groundwater through the watertable per unit of time. By considering a long-term (e.g. seasonal) average depth of the water table (Dw) in combination with the long-term average recharge rate (R), the net storage of water in that period of time is negligibly small and the steady state condition is satisfied: one obtains a dynamic equilibrium. Derivation of the equation For the derivation of the equation Hooghoudt used the law of Darcy, the summation of circular potential functions and, for the determination of the influence of the impermeable layer, the method of mirror images and superposition. Hooghoudt published tables for the determination of the equivalent depth (d), because the function (F) in d = F(L, Di − Dd, r) consists of a long series of terms. Determining the discharge rate (Q) from the recharge rate (R) in a water balance (as detailed in the article hydrology (agriculture)), the permissible long-term average depth of the water table (Dw) on the basis of agricultural drainage criteria, the soil's hydraulic conductivity (Ka and Kb) by measurements, and the depth of the bottom of the aquifer (Di), the design drain spacing (L) can then be found from the equation as a function of the drain depth (Dd) and drain radius (r). Drainage criteria One would not want the water table to be too shallow (to avoid crop yield depression) nor too deep (to avoid drought conditions). This is a subject of drainage research. The figure shows that a seasonal average depth of the water table shallower than 70 cm causes a yield depression. The figure was made with the SegReg program for segmented regression. Equivalent depth In 1991 a closed-form expression was developed for the equivalent depth (d) that can replace the Hooghoudt tables: d = πL / [ 8 { ln(L/πr) + F(x) } ] where: x = 2π (Di − Dd) / L F(x) = Σ 4e^(−2nx) / [ n (1 − e^(−2nx)) ], with n = 1, 3, 5, . . . Extended use Theoretically, Hooghoudt's equation can also be used for sloping land. The theory on drainage of sloping land is corroborated by the results of sand tank experiments. In addition, the entrance resistance encountered by the water upon entering the drains can be accounted for. 
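A minimal numerical sketch of how these formulas are used together: the closed-form series gives the equivalent depth d for a trial spacing L, and the Hooghoudt equation is then iterated to a consistent design spacing, since d itself depends on L. The grouping of the equivalent-depth formula (the bracketed term in the denominator) is assumed as read above, the series is truncated, and the input values are illustrative only, not design recommendations.

```python
import math

def equivalent_depth(L, D_below, r, n_terms=200):
    """Hooghoudt equivalent depth d from the closed-form series quoted above.

    L       -- drain spacing (m)
    D_below -- thickness of soil between drain level and the impermeable
               layer (written as Di - Dd in the article text), must be > 0 (m)
    r       -- drain radius (m)
    """
    x = 2.0 * math.pi * D_below / L
    # F(x) = sum over odd n of 4 exp(-2nx) / [ n (1 - exp(-2nx)) ]
    F = sum(4.0 * math.exp(-2.0 * n * x) / (n * (1.0 - math.exp(-2.0 * n * x)))
            for n in range(1, 2 * n_terms, 2))
    return math.pi * L / (8.0 * (math.log(L / (math.pi * r)) + F))

def hooghoudt_spacing(Q, Ka, Kb, D_below, Dd, Dw, r, L0=50.0, tol=1e-4):
    """Solve Q L^2 = 8 Kb d (Dd - Dw) + 4 Ka (Dd - Dw)^2 for L by fixed-point
    iteration, recomputing the equivalent depth d at each step."""
    h = Dd - Dw                      # available head midway between drains (m)
    L = L0
    for _ in range(200):
        d = equivalent_depth(L, D_below, r)
        L_new = math.sqrt((8.0 * Kb * d * h + 4.0 * Ka * h * h) / Q)
        if abs(L_new - L) < tol:
            break
        L = L_new
    return L_new

# Illustrative input values only:
L = hooghoudt_spacing(Q=0.005, Ka=0.5, Kb=0.5, D_below=3.0,
                      Dd=1.5, Dw=0.5, r=0.05)
print(f"design drain spacing ≈ {L:.1f} m")
```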
Amplification The drainage formula can be amplified to account for (see figure on the right): the additional energy associated with the incoming percolation water (recharge), see groundwater energy balance; multiple soil layers; anisotropic hydraulic conductivity, the vertical conductivity (Kv) being different from the horizontal (Kh); and drains of different dimensions with any width (W). Computer program The amplified drainage equation uses a hydraulic equivalent of Joule's law in electricity. It is in the form of a differential equation that cannot be solved analytically (i.e. in closed form); the solution therefore requires a numerical method, for which a computer program is indispensable. The availability of a computer program also helps in quickly assessing various alternatives and performing a sensitivity analysis. The blue figure shows an example of results of a computer-aided calculation with the amplified drainage equation using the EnDrain program. It shows that incorporation of the incoming energy associated with the recharge leads to a somewhat deeper water table. References External links Drainage Hooghoudt's equation calculator Drainage Hydrology Hydraulic engineering Soil Soil science Soil physics Agricultural soil science
Drainage equation
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
1,049
[ "Hydrology", "Applied and interdisciplinary physics", "Soil physics", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
24,622,320
https://en.wikipedia.org/wiki/Cryo%20bio-crystallography
Cryo bio-crystallography is the application of crystallography to biological macromolecules at cryogenic temperatures. Basic principles Cryo crystallography enables X-ray data collection at cryogenic temperatures, typically 100 K. Crystals are transferred from the solution they have grown in (called the mother liquor) to a solution with a cryo-protectant to prevent ice formation. Crystals are mounted on a glass fiber (as opposed to in a capillary). Crystals are cooled by dipping directly into liquid nitrogen and then placed in a cryo cold stream. Cryo-cooled macromolecular crystals show more than 70 times less radiation damage than crystals at room temperature. Advantages Significant improvement of resolution in data collection Reduced or eliminated radiation damage in crystals Usefulness and applications Crystallography of large biological macromolecules can be achieved while maintaining their solution state. The best known example is the ribosome. Today, liquid nitrogen cryo cooling is used for protein crystallography at every synchrotron around the world. Radiation damage is reduced by more than 70-fold at cryo temperatures. A recent review paper explains the development of reduced radiation damage in macromolecular crystals at synchrotrons and describes how more than 90% of all structures deposited in the Protein Data Bank used cryo cooling in their determination. Haas, D.J. The early history of cryo-cooling for macromolecular crystallography. IUCrJ (2020), 7, 148–157. https://journals.iucr.org/m/issues/2020/02/00/be5283/be5283.pdf Haas, D.J., and Rossmann, M.G. Crystallographic studies on lactate dehydrogenase at −75 °C. Acta Crystallogr. (1970), B26, 998. See also X-ray crystallography Cryo-EM Ada Yonath References Crystallography
Cryo bio-crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
409
[ "Materials science stubs", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics" ]
24,629,991
https://en.wikipedia.org/wiki/Zener%E2%80%93Hollomon%20parameter
In materials science, the Zener–Hollomon parameter, typically denoted as Z, is used to relate changes in temperature or strain rate to the stress-strain behavior of a material. It has been most extensively applied to the forming of steels at increased temperature, when creep is active. It is given by Z = ε̇ exp(Q/(RT)), where ε̇ is the strain rate, Q is the activation energy, R is the gas constant, and T is the temperature. The Zener–Hollomon parameter is also known as the temperature-compensated strain rate, since the temperature enters the definition as its reciprocal, 1/T, inside the exponential, so that raising the strain rate or lowering the temperature both increase Z. It is named after Clarence Zener and John Herbert Hollomon, Jr., who established the formula based on the stress-strain behavior of steel. When plastically deforming a material, the flow stress depends heavily on both the strain rate and the temperature. During forming processes, Z may help determine appropriate changes in strain rate or temperature when the other variable is altered, in order to keep the material flowing properly. Z has also been applied to some metals over a large range of strain rates and temperatures, and comparable microstructures were obtained at the end of processing as long as Z remained similar. This is because the relative activity of the various deformation mechanisms depends on temperature and strain rate: decreasing the strain rate or increasing the temperature lowers Z and promotes thermally activated deformation mechanisms, easing plastic flow. See also Hollomon–Jaffe parameter References Metallurgy
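A minimal sketch of the arithmetic defined above: compute Z for given processing conditions, and invert the definition to find the strain rate that keeps Z constant at a different working temperature. The activation energy used in the example is an assumed, order-of-magnitude value for hot-worked steel, not a figure from the article.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def zener_hollomon(strain_rate, Q, T):
    """Temperature-compensated strain rate Z = strain_rate * exp(Q / (R*T)).

    strain_rate -- strain rate (1/s)
    Q           -- activation energy for deformation (J/mol)
    T           -- absolute temperature (K)
    """
    return strain_rate * math.exp(Q / (R * T))

def equivalent_strain_rate(Z, Q, T):
    """Strain rate that reproduces the same Z at a new working temperature T."""
    return Z * math.exp(-Q / (R * T))

# Illustrative values (assumed activation energy of 300 kJ/mol):
Q_assumed = 300e3
Z = zener_hollomon(strain_rate=1.0, Q=Q_assumed, T=1273.0)
print(f"Z at 1273 K, 1/s strain rate: {Z:.3e} 1/s")

# Keeping Z constant while cooling the workpiece to 1200 K requires a
# correspondingly lower strain rate:
print(f"required strain rate at 1200 K: "
      f"{equivalent_strain_rate(Z, Q_assumed, 1200.0):.3e} 1/s")
```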
Zener–Hollomon parameter
[ "Chemistry", "Materials_science", "Engineering" ]
294
[ "Metallurgy", "Materials science", "nan" ]
30,727,051
https://en.wikipedia.org/wiki/Stone%E2%80%93%C4%8Cech%20remainder
In mathematics, the Stone–Čech remainder of a topological space X, also called the corona or corona set, is the complement of the space in its Stone–Čech compactification βX. A topological space is said to be σ-compact if it is the union of countably many compact subspaces, and locally compact if every point has a neighbourhood with compact closure. The Stone–Čech remainder of a σ-compact and locally compact Hausdorff space is a sub-Stonean space, i.e., any two open σ-compact disjoint subsets have disjoint compact closures. See also Corona theorem Corona algebra, a non-commutative analogue of the corona set. References Topology
Stone–Čech remainder
[ "Physics", "Mathematics" ]
152
[ "Spacetime", "Topology", "Space", "Geometry" ]