id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
21,788,233 | https://en.wikipedia.org/wiki/GEOHAB | GEOHAB is an international research programme on the Global Ecology and Oceanography of Harmful Algal Blooms.
It was initiated in 1998 by the Scientific Committee on Oceanic Research (of ICSU) and the Intergovernmental Oceanographic Commission of UNESCO.
References
External links
GEOHAB Website
Algal blooms
UNESCO | GEOHAB | [
"Chemistry",
"Biology",
"Environmental_science"
] | 61 | [
"Algae",
"Water treatment",
"Water pollution",
"Water quality indicators",
"Algal blooms"
] |
21,789,178 | https://en.wikipedia.org/wiki/River%20mouth | A river mouth is where a river flows into a larger body of water, such as another river, a lake/reservoir, a bay/gulf, a sea, or an ocean. At the river mouth, sediments are often deposited due to the slowing of the current, reducing the carrying capacity of the water.
The water from a river can enter the receiving body in a variety of different ways. The motion of a river is influenced by the relative density of the river compared to the receiving water, the rotation of the Earth, and any ambient motion in the receiving water, such as tides or seiches.
If the river water has a higher density than the surface of the receiving water, the river water will plunge below the surface. The river water will then either form an underflow or an interflow within the lake. However, if the river water is lighter than the receiving water, as is typically the case when fresh river water flows into the sea, the river water will float along the surface of the receiving water as an overflow.
Alongside these advective transports, inflowing water will also diffuse.
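The regime just described, overflow, interflow, or underflow, is set by how the density of the inflowing water compares with the receiving water column. A minimal sketch of that decision logic (the function name, threshold handling, and density values are illustrative assumptions, not from the text above):

```python
def inflow_regime(river_density, surface_density, deep_density):
    """Classify a river inflow by comparing water densities (kg/m^3).

    Overflow:  river water is lighter than the receiving surface water,
               so it floats and spreads along the surface.
    Underflow: river water is denser than the whole receiving column,
               so it plunges and runs along the bottom.
    Interflow: river water is denser than the surface but lighter than
               the deep water, so it intrudes at an intermediate depth.
    """
    if river_density < surface_density:
        return "overflow"
    if river_density > deep_density:
        return "underflow"
    return "interflow"

# Fresh river water entering the sea: typically an overflow.
print(inflow_regime(river_density=999.0, surface_density=1025.0, deep_density=1028.0))
# Cold, sediment-laden river entering a warmer lake: plunges as an underflow.
print(inflow_regime(river_density=1001.5, surface_density=998.0, deep_density=1000.5))
```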
Landforms
At the mouth of a river, the change in flow conditions can cause the river to drop any sediment it is carrying. This sediment deposition can generate a variety of landforms, such as deltas, sand bars, spits, and tie channels. Landforms at the river mouth drastically alter the geomorphology and ecosystem. Along coasts, sand bars and similar landforms act as barriers, sheltering sensitive ecosystems that are enriched by nutrients deposited from the river. However, the damming of rivers can starve the river of sand and nutrients, creating a deficit at the river's mouth.
Cultural influence
As river mouths are the site of large-scale sediment deposition and allow for easy travel and ports, many towns and cities are founded there. Many places in the United Kingdom take their names from their positions at the mouths of rivers, such as Plymouth (i.e. mouth of the River Plym), Sidmouth (i.e. mouth of the River Sid), and Great Yarmouth (i.e. mouth of the River Yare); in Celtic place names, the corresponding element is Aber or Inver. Due to rising sea levels as a result of climate change, coastal cities are at heightened risk of flooding, and sediment starvation in the river compounds this concern.
See also
Confluence
Estuary
Liman
References
Fluid dynamics
Fluvial landforms
Mouth | River mouth | [
"Chemistry",
"Engineering"
] | 500 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
21,789,230 | https://en.wikipedia.org/wiki/Associative%20substitution | Associative substitution describes a pathway by which compounds interchange ligands. The terminology is typically applied to organometallic and coordination complexes, and the pathway resembles the SN2 mechanism in organic chemistry. The opposite pathway is dissociative substitution, which is analogous to the SN1 pathway. Intermediate pathways exist between the pure associative and pure dissociative extremes; these are called interchange mechanisms.
Associative pathways are characterized by binding of the attacking nucleophile to give a discrete, detectable intermediate followed by loss of another ligand. Complexes that undergo associative substitution are either coordinatively unsaturated or contain a ligand that can change its bonding to the metal, e.g. change in hapticity or bending of a nitrogen oxide ligand (NO). In homogeneous catalysis, the associative pathway is desirable because the binding event, and hence the selectivity of the reaction, depends not only on the nature of the metal catalyst but also on the substrate.
Examples of associative mechanisms are commonly found in the chemistry of 16e square planar metal complexes, e.g. Vaska's complex and tetrachloroplatinate. These compounds (MX4) bind the incoming (substituting) ligand Y to form pentacoordinate intermediates MX4Y that in a subsequent step dissociate one of their ligands. Dissociation of Y results in no detectable net reaction, but dissociation of X results in net substitution, giving the 16e complex MX3Y. The first step is typically rate-determining. Thus, the entropy of activation is negative, which indicates an increase in order in the system. These reactions follow second-order kinetics: the rate of appearance of product depends on the concentrations of MX4 and Y. The rate law is governed by the Eigen–Wilkins mechanism.
Associative interchange pathway
In many substitution reactions, well-defined intermediates are not observed. When the rate of such processes is influenced by the nature of the entering ligand, the pathway is called associative interchange, abbreviated Ia. Representative is the interchange of bulk and coordinated water in [V(H2O)6]2+. In contrast, the slightly more compact ion [Ni(H2O)6]2+ exchanges water via the Id (dissociative interchange) pathway.
Effects of ion pairing
Polycationic complexes tend to form ion pairs with anions and these ion pairs often undergo reactions via the Ia pathway. The electrostatically held nucleophile can exchange positions with a ligand in the first coordination sphere, resulting in net substitution. An illustrative process comes from the "anation" (reaction with an anion) of chromium(III) hexaaquo complex:
[Cr(H2O)6]3+ + SCN− ⇌ {[Cr(H2O)6], NCS}2+
{[Cr(H2O)6], NCS}2+ ⇌ [Cr(H2O)5NCS]2+ + H2O
Special ligand effects
In special situations, some ligands participate in substitution reactions leading to associative pathways. These ligands can adopt multiple motifs for binding to the metal, each of which involves a different number of donated electrons. A classic case is the indenyl effect, in which an indenyl ligand reversibly "slips" from pentahapto (η5) coordination to trihapto (η3). Other pi-ligands behave in this way, e.g. allyl (η3 to η1) and naphthalene (η6 to η4). Nitric oxide typically binds to metals to make a linear MNO arrangement, wherein the nitrogen oxide is said to donate 3e− to the metal. In the course of substitution reactions, the MNO unit can bend, converting the 3e− linear NO ligand to a 1e− bent NO ligand.
SN1cB mechanism
The rates of hydrolysis of cobalt(III) ammine halide complexes are deceptive, appearing to be associative but proceeding by an alternative pathway. The hydrolysis of [Co(NH3)5Cl]2+ follows second-order kinetics: the rate increases linearly with the concentration of hydroxide as well as of the starting complex. Based on this information, the reactions would appear to proceed via nucleophilic attack of hydroxide at cobalt. Studies show, however, that the hydroxide deprotonates one NH3 ligand to give the conjugate base of the starting complex, i.e., [Co(NH3)4(NH2)Cl]+. In this monovalent cation, the chloride spontaneously dissociates. This pathway is called the SN1cB mechanism.
Eigen-Wilkins mechanism
The Eigen–Wilkins mechanism, named after chemists Manfred Eigen and R. G. Wilkins, is a mechanism and rate law in coordination chemistry governing associative substitution reactions of octahedral complexes. It was discovered for substitution by ammonia of a chromium(III) hexaaqua complex. The key feature of the mechanism is an initial fast pre-equilibrium forming an encounter complex ML6-Y from the reactant ML6 and the incoming ligand Y. This equilibrium is represented by the constant KE:
ML6 + Y ⇌ ML6-Y
The subsequent rate-determining dissociation to form the product is governed by a rate constant k:
ML6-Y → ML5Y + L
A simple derivation of the Eigen-Wilkins rate law follows:
[ML6-Y] = KE[ML6][Y]
[ML6-Y] = [M]tot - [ML6]
rate = k[ML6-Y]
rate = kKE[Y][ML6]
Combining these expressions with the mass balance on the metal leads to the final form of the rate law,
rate = kKE[Y][M]tot / (1 + KE[Y])
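As a quick numerical illustration of this rate law (the values of k, KE, and the concentrations below are illustrative assumptions, not measured constants), the two limiting regimes discussed below appear directly:

```python
def eigen_wilkins_rate(k, K_E, M_tot, Y):
    """Eigen-Wilkins rate law: rate = k*K_E*[Y]*[M]_tot / (1 + K_E*[Y])."""
    return k * K_E * Y * M_tot / (1.0 + K_E * Y)

k = 5.0e3       # s^-1, rate constant of the product-forming step (assumed)
K_E = 0.02      # dm^3 mol^-1, encounter-complex formation constant (assumed)
M_tot = 1.0e-3  # mol dm^-3, total metal concentration (assumed)

for Y in (1e-4, 1e-2, 1.0, 1e2, 1e4):
    print(f"[Y] = {Y:8.1e}  rate = {eigen_wilkins_rate(k, K_E, M_tot, Y):.3e}")
# At low [Y] the rate approaches k*K_E*[M]_tot*[Y] (second order overall);
# at high [Y] it saturates at k*[M]_tot.
```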
Eigen-Fuoss equation
A further insight into the pre-equilibrium step and its equilibrium constant KE comes from the Fuoss-Eigen equation proposed independently by Eigen and R. M. Fuoss:
KE = (4πa³/3000) × NA exp(−V/RT)
where a represents the minimum distance of approach between the complex and the ligand in solution (in cm), NA is the Avogadro constant, R is the gas constant, and T is the reaction temperature. V is the electrostatic potential energy of the ions at that distance:
V = z1z2e²/(4πεa)
where z1 and z2 are the charge numbers of the two species and ε is the vacuum permittivity.
A typical value for KE is 0.0202 dm³ mol⁻¹ for neutral particles at a distance of 200 pm. The result of the rate law is that at high concentrations of Y the rate approximates k[M]tot, while at low concentrations the result is kKE[M]tot[Y]. The Eigen–Fuoss equation shows that higher values of KE (and thus a more favourable pre-equilibrium) are obtained for large, oppositely-charged ions in solution.
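The quoted typical value can be checked directly against the Eigen–Fuoss equation; a minimal sketch for neutral particles (V = 0, so the exponential factor is 1), with a = 200 pm converted to centimetres as the formula requires:

```python
import math

N_A = 6.02214e23        # Avogadro constant, mol^-1
a_cm = 200e-12 * 100.0  # 200 pm expressed in cm

# Eigen-Fuoss equation for neutral particles (exp(-V/RT) = 1). The 1/3000
# factor is the unit-conversion constant in the equation as quoted above
# (a in cm, K_E in dm^3 mol^-1).
K_E = (4.0 * math.pi * a_cm**3 / 3000.0) * N_A
print(f"K_E = {K_E:.4f} dm^3 mol^-1")  # ~0.0202, matching the quoted value
```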
References
Substitution reactions
Organometallic chemistry
Coordination chemistry
Chemical reactions
Reaction mechanisms | Associative substitution | [
"Chemistry"
] | 1,531 | [
"Reaction mechanisms",
"Coordination chemistry",
"nan",
"Physical organic chemistry",
"Chemical kinetics",
"Organometallic chemistry"
] |
21,789,530 | https://en.wikipedia.org/wiki/Pht01 | pHT01 is a plasmid used as a cloning vector for expressing proteins in Bacillus subtilis. It is 7,956 base pairs in length. pHT01 carries Pgrac, an artificial, strong, IPTG-inducible promoter consisting of the Bacillus subtilis groE promoter, a lac operator, and the gsiB ribosome binding site; this promoter was first carried on plasmid pNDH33. pHT01 also carries replication regions from plasmid pMTLBs72, as well as genes conferring resistance to ampicillin and chloramphenicol.
Plasmid pHT01 is generally stable in both B. subtilis and Escherichia coli, and can be used for protein expression in these host strains. pNDH33/pHT01 have been used to produce up to 16% of total protein output in B. subtilis. Pgrac100 is an improved version of Pgrac, which can produce up to 30% of total cellular proteins in B. subtilis.
References
External links
pHT01 on AddGene
pHT01 at MoBiTec
Biochemistry | Pht01 | [
"Chemistry",
"Biology"
] | 251 | [
"Biotechnology stubs",
"Molecular biology stubs",
"Biochemistry stubs",
"nan",
"Molecular biology",
"Biochemistry"
] |
46,291,555 | https://en.wikipedia.org/wiki/Elias%20C.%20Aifantis | Elias C. Aifantis (; born October 10, 1950) is a professor at Aristotle University of Thessaloniki and has been professor of Mechanical Engineering and Engineering Mechanics at Michigan Technological University since 1982.
He has held academic positions with the University of Illinois (1976–1980) and the University of Minnesota (1980–1982).
E. C. Aifantis received his diploma from the National Technical University of Athens and his Ph.D. from the University of Minnesota (1975).
He has more than 300 published papers in the areas of mechanics and materials science.
E. C. Aifantis is Editor-in-Chief of the Journal of Mechanical Behavior of Materials, and is on the Advisory Board of Mechanics of Cohesive Friction Materials and Structure. He was on the Board of Acta Mechanica.
In 2015, he was awarded with the Fray International Sustainability Award in Antalya, Turkey, for his significant achievements in sustainable research and academia.
See also
GRADELA
References
E. C. Aifantis in ResearchGate
E. C. Aifantis in MTU
Elias Aifantis in LinkedIn
CV of E. C. Aifantis
Aifantis International Symposium (4 - 9 October 2015, Antalya, Turkey)
Notes
Selected papers
E.C. Aifantis, "On the role of gradients in the localization of deformation and fracture", International Journal of Engineering Science. Vol.30. No.10. (1992) 1279–1299.
B.S. Altan, E.C. Aifantis, "On the structure of the mode-Ill crack-tip in gradient elasticity", Scripta Metallurgica et Materialia. Vol.26. No.2. (1992) 319–324.
B.S. Altan, E.C. Aifantis, "On some aspects in the special theory of gradient elasticity", Journal of the Mechanical Behaviour of Materials. Vol.8. No.3. (1997) 231-282.
C.Q. Ru, E.C. Aifantis, "A simple approach to solve boundary-value problems in gradient elasticity", Acta Mechanica. Vol.101. No.1. (1993) 59-68.
M.Yu. Gutkin, E.C. Aifantis, "Dislocations and disclinations in gradient elasticity", Physica Status Solidi B. Vol.214. No.2. (1999) 245-286.
H. Askes, I. Morata, E. Aifantis, "Finite element analysis with staggered gradient elasticity", Computers and Structures. Vol.86. No.11-12. (2008) 1266–1279.
H. Askes, E.C. Aifantis, "Gradient elasticity in statics and dynamics: An overview of formulations, length scale identification procedures, finite element implementations and new results", International Journal of Solids and Structures. Vol.48. No.13. (2011) 1962-1990.
Academic staff of the Aristotle University of Thessaloniki
Michigan Technological University faculty
1950 births
Living people
Mechanical engineers
Materials scientists and engineers
Engineers from Thessaloniki
Aristotle University of Thessaloniki alumni | Elias C. Aifantis | [
"Materials_science",
"Engineering"
] | 657 | [
"Mechanical engineers",
"Materials scientists and engineers",
"Materials science",
"Mechanical engineering"
] |
46,291,906 | https://en.wikipedia.org/wiki/GRADELA | GRADELA is a simple gradient elasticity model involving one internal length in addition to the two Lamé parameters. It makes it possible to eliminate elastic singularities and discontinuities and to interpret elastic size effects. The model was suggested by Elias C. Aifantis. Its main advantage over Mindlin's elasticity models (which contain five extra constants) is that solutions of boundary value problems can be found in terms of corresponding solutions of classical elasticity by an operator-splitting method.
In 1992–1993, Elias C. Aifantis suggested a generalization of the linear elastic constitutive relations by a gradient modification containing the Laplacian, in the form

σ = λ tr(ε) I + 2μ ε − ℓ² ∇²[λ tr(ε) I + 2μ ε],

where σ is the stress tensor, ε the strain tensor, λ and μ the Lamé parameters, and ℓ is the scale parameter.
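The operator-splitting property mentioned above is usually stated via the Ru–Aifantis reduction; the following is a sketch as commonly written in the literature (the notation u⁰ for the classical displacement solution is an assumption of this sketch):

```latex
% Ru-Aifantis operator split (sketch): if u^0 solves a boundary-value
% problem of classical linear elasticity, the corresponding GRADELA
% displacement field u is obtained from the Helmholtz-type equation below,
% so gradient-elasticity solutions follow from classical ones.
\sigma = \bigl(1 - \ell^{2}\nabla^{2}\bigr)\bigl[\lambda\,\operatorname{tr}(\varepsilon)\,\mathbf{I} + 2\mu\,\varepsilon\bigr],
\qquad
\bigl(1 - \ell^{2}\nabla^{2}\bigr)\,u = u^{0}.
```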
See also
Linear elasticity
Mindlin–Reissner plate theory
References
E. C. Aifantis, "On the role of gradients in the localization of deformation and fracture" International Journal of Engineering Science Volume 30, Issue 10, October 1992, Pages 1279–1299
E. C. Aifantis, "On non-singular GRADELA crack fields" Theor. Appl. Mech. Lett. 2014, Vol. 4 Issue (5): 5-051005 DOI: 10.1063/2.1405105
E. C. Aifantis, "On the gradient approach – Relation to Eringen’s nonlocal theory" International Journal of Engineering Science Volume 49, Issue 12, December 2011, Pages 1367–1377
C. Q. Ru, E. C. Aifantis, "A simple approach to solve boundary value problems in gradient elasticity", Acta Mechanica, 1993, Volume 101, Issue 1-4, pp 59-68.
Elasticity (physics)
Solid mechanics | GRADELA | [
"Physics",
"Materials_science"
] | 364 | [
"Solid mechanics",
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Mechanics",
"Physical properties"
] |
46,294,069 | https://en.wikipedia.org/wiki/3%20Piscis%20Austrini | 3 Piscis Austrini, also known as HD 201901 or simply 3 PsA, is an astrometric binary (classified as such with essentially 100% probability) located in the southern constellation Microscopium. It was once part of Piscis Austrinus, the southern fish. The system has a combined apparent magnitude of 5.39, making it faintly visible to the naked eye under ideal conditions. Gaia DR3 parallax measurements imply a distance of 404 light years and it is currently approaching the Solar System with a heliocentric radial velocity of . At its current distance, 3 PsA's brightness is diminished by 0.12 magnitudes due to extinction from interstellar dust and it has an absolute magnitude of +0.19.
The visible component is an evolved red giant with a stellar classification of K3 III. The interferometry-measured angular diameter of the star, after correcting for limb darkening, is , which, at its estimated distance, equates to a physical radius of about 20 times the radius of the Sun. However, its actual empirical radius is . It has 1.58 times the mass of the Sun and is radiating 184 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . 3 PsA is metal deficient with an iron abundance 68% that of the Sun ([Fe/H] = −0.17) and it spins too slowly for its projected rotational velocity to be measured accurately. It is estimated to be 2.59 billion years old based on Gaia DR3 models.
References
K-type giants
Astrometric binaries
Microscopium
CD-28 17178
Piscis Austrini, 03
201901
104750
8110
Microscopium, 58 | 3 Piscis Austrini | [
"Astronomy"
] | 360 | [
"Microscopium",
"Constellations"
] |
46,298,670 | https://en.wikipedia.org/wiki/Hafnium%20tetrabromide | Hafnium tetrabromide is the inorganic compound with the formula HfBr4. It is the most common bromide of hafnium. It is a colorless, diamagnetic moisture sensitive solid that sublimes in vacuum. It adopts a structure very similar to that of zirconium tetrabromide, featuring tetrahedral Hf centers, in contrast to the polymeric nature of hafnium tetrachloride.
References
Bromides
Hafnium compounds
Metal halides | Hafnium tetrabromide | [
"Chemistry"
] | 107 | [
"Bromides",
"Inorganic compounds",
"Metal halides",
"Salts"
] |
40,201,057 | https://en.wikipedia.org/wiki/Arrott%20plot | In condensed matter physics, an Arrott plot is a plot of the square of the magnetization of a substance against the ratio of the applied magnetic field to the magnetization, at one (or several) fixed temperature(s). Arrott plots are an easy way of determining the presence of ferromagnetic order in a material. They are named after the American physicist Anthony Arrott, who introduced them as a technique for studying magnetism in 1957.
Details
According to the Landau theory applied to the mean field picture for magnetism, the free energy of a ferromagnetic material close to a phase transition can be written as:

F(M) = −HM + a(T − Tc)M² + bM⁴,

where M, the magnetization, is the order parameter, H is the applied magnetic field, Tc is the critical temperature, and a and b are material constants.
Close to the phase transition, minimizing this free energy (∂F/∂M = 0) gives a relation for the magnetization order parameter:

M² = (1/4b)(H/M) − (aTc/2b)ε,

where ε = (T − Tc)/Tc is a dimensionless measure of the temperature.
Thus, in a graph plotting M² vs. H/M for various temperatures, the line without an intercept corresponds to the dependence at the critical temperature (ε = 0). Along with providing evidence for the existence of a ferromagnetic phase, the Arrott plot can also be used to determine the critical temperature of the phase transition.
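A minimal numerical sketch of this construction (the Landau coefficients and the temperature grid are illustrative assumptions): isotherms are generated from the mean-field equation of state H/M = 2a(T − Tc) + 4bM², plotted as M² against H/M, and the critical isotherm is identified as the one whose fitted line passes through the origin.

```python
import numpy as np

a, b, Tc = 1.0, 1.0, 100.0  # Landau coefficients and Tc (assumed units)

def h_over_m(M, T):
    """Mean-field equation of state rearranged as H/M = 2a(T - Tc) + 4b*M^2."""
    return 2 * a * (T - Tc) + 4 * b * M**2

M = np.linspace(0.05, 1.0, 20)
for T in (95.0, 100.0, 105.0):
    x, y = h_over_m(M, T), M**2  # Arrott plot coordinates: M^2 vs H/M
    slope, intercept = np.polyfit(x, y, 1)
    print(f"T = {T:5.1f}  intercept of M^2 vs H/M line = {intercept:+.4f}")
# The isotherm with (near-)zero intercept is the critical one, T = Tc.
```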
Generalization
Giving the critical exponents explicitly in the equation of state, Arrott and Noakes proposed:

(H/M)^(1/γ) = (T − Tc)/T1 + (M/M1)^(1/β),

where T1, M1, β, and γ are free parameters. In these modified Arrott plots, data is plotted as M^(1/β) versus (H/M)^(1/γ). In the case of classical Landau theory, β = 1/2 and γ = 1, and this equation reduces to the linear M² versus H/M plot. However, the equation also allows for other values of β and γ, since real ferromagnets often do not have critical exponents exactly consistent with a simple mean field theory of ferromagnetism.
The use of the correct critical exponents for a given system can help give straight lines on the Arrott plot, but not in cases such as low magnetic fields and amorphous materials. While mean field theory is a more reasonable model for ferromagnets at higher magnetic fields, the presence of more than one magnetic domain in real magnets means that, especially at low magnetic fields, the experimentally measured macroscopic magnetization (an average over the whole sample) is not a reasonable measure of the local magnetization felt by a single atom. Therefore, magnetization data taken at low magnetic fields should be ignored for the purposes of Arrott plots.
Transition order
Magnetic phase transitions can be either first order or second order. The nature of the transition can be inferred from the Arrott plot based on the slope of the magnetic isotherms. If the lines are all positive slope, the phase transition is second order, whereas if there are negative slope lines, the phase transition is first order. This condition is known as the Banerjee criterion.
The Banerjee criterion is not always accurate for evaluating inhomogeneous ferromagnets, since the slopes can all be positive even when the transition is first-order.
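As a simple data check, the criterion can be applied by fitting each Arrott-plot isotherm and inspecting the sign of its slope; a minimal sketch (the toy isotherm arrays are assumed, and real analyses inspect local slopes rather than one global fit):

```python
import numpy as np

def banerjee_order(isotherms):
    """Classify a magnetic transition from Arrott-plot isotherms.

    isotherms: list of (M_squared, H_over_M) array pairs, one per temperature.
    Returns 'second order' if every H/M vs M^2 isotherm has positive slope,
    'first order' if any slope is negative (the Banerjee criterion).
    """
    slopes = [np.polyfit(m2, hm, 1)[0] for m2, hm in isotherms]
    return "second order" if all(s > 0 for s in slopes) else "first order"

# Toy isotherms (assumed data): both slopes positive -> second order.
m2 = np.linspace(0.1, 1.0, 10)
print(banerjee_order([(m2, 2.5 + 4.0 * m2), (m2, -2.5 + 4.0 * m2)]))
```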
See also
Curie–Weiss law
References
Electric and magnetic fields in matter
Magnetic ordering | Arrott plot | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 626 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
40,207,657 | https://en.wikipedia.org/wiki/Brezis%E2%80%93Gallou%C3%ABt%20inequality | In mathematical analysis, the Brezis–Gallouët inequality, named after Haïm Brezis and Thierry Gallouët, is an inequality valid in 2 spatial dimensions. It shows that a function of two variables which is sufficiently smooth is (essentially) bounded, and provides an explicit bound, which depends only logarithmically on the second derivatives. It is useful in the study of partial differential equations.
Let Ω ⊂ R² be the exterior or the interior of a bounded domain with regular boundary, or R² itself. Then the Brezis–Gallouët inequality states that there exists a real constant C depending only on Ω such that, for all u ∈ H²(Ω) which is not a.e. equal to 0,

‖u‖L∞(Ω) ≤ C ‖u‖H¹(Ω) (1 + log(‖u‖H²(Ω)/‖u‖H¹(Ω)))^(1/2).
Noticing that the interpolation inequality ‖v‖²H¹ ≤ ‖v‖L² ‖v‖H² holds for any v ∈ H²(R²), one deduces from the Brezis–Gallouët inequality that there exists a constant C depending only on Ω such that, for all u ∈ H²(Ω) which is not a.e. equal to 0,

‖u‖L∞(Ω) ≤ C ‖u‖L²(Ω)^(1/2) ‖u‖H²(Ω)^(1/2) (1 + log(‖u‖H²(Ω)/‖u‖L²(Ω)))^(1/2).

The latter inequality is close to the form in which the Brezis–Gallouët inequality is usually cited in the literature.
See also
Ladyzhenskaya inequality
Agmon's inequality
References
Theorems in analysis
Inequalities | Brezis–Gallouët inequality | [
"Mathematics"
] | 221 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
36,027,149 | https://en.wikipedia.org/wiki/Weizenbaum%20Award | The Weizenbaum Award was established in 2008 by the International Society for Ethics and Information Technology (INSEIT). It is given every two years by INSEIT's adjudication committee to an individual who has “made a significant contribution to the field of information and computer ethics, through his or her research, service, and vision.”
It is officially named the 'INSEIT/Joseph Weizenbaum Award in Information and Computer Ethics', "in recognition of Joseph Weizenbaum's groundbreaking and highly influential work in computer ethics in the 1970s, which helped to shape the field as we know it today".
Winners
The Award has been won by:
2022: Philip Brey, to be awarded in CEPE 2023 in Chicago
2020: Rafael Capurro, awarded in CEPE/IACAP 2021, Hamburg
2019: Herman Tavani, awarded in CEPE 2019, in Norfolk Virginia
2017: James Moor, in CEPE ETHICOMP 2017
2015: Deborah G. Johnson
2013: Luciano Floridi
2011: Keith W. Miller
2010: Donald Gotterbarn
2009: Terrell Ward Bynum (abstract of Bynum's Weizenbaum address, given at CEPE 2009)
See also
List of computer science awards
References
External links
See the website of INSEIT, https://inseit.net, with all details of the award.
Documentary film about Joseph Weizenbaum ( "WEIZENBAUM. Rebel at Work." )
Philosophy awards
Computer science awards
Information science awards | Weizenbaum Award | [
"Technology"
] | 303 | [
"Science and technology awards",
"Computer science",
"Information science awards",
"Computer science awards"
] |
36,028,317 | https://en.wikipedia.org/wiki/Silicate%20perovskite | Silicate perovskite is either MgSiO3 (the magnesium end-member is called bridgmanite) or CaSiO3 (calcium silicate known as davemaoite) when arranged in a perovskite structure. Silicate perovskites are not stable at Earth's surface, and mainly exist in the lower part of Earth's mantle, between about 660 and 2,700 km depth. They are thought to form the main mineral phases of the lower mantle, together with ferropericlase.
Discovery
The existence of silicate perovskite in the mantle was first suggested in 1962, and both MgSiO3 and CaSiO3 perovskites had been synthesized experimentally before 1975. By the late 1970s, it had been proposed that the seismic discontinuity at about 660 km in the mantle represented a change from spinel structure minerals with an olivine composition to silicate perovskite with ferropericlase.
Natural silicate perovskite was discovered in the heavily shocked Tenham meteorite. In 2014, the Commission on New Minerals, Nomenclature and Classification (CNMNC) of the International Mineralogical Association (IMA) approved the name bridgmanite for perovskite-structured MgSiO3, in honor of physicist Percy Bridgman, who was awarded the Nobel Prize in Physics in 1946 for his high-pressure research.
In 2021, perovskite-structured CaSiO3 was found as an inclusion in a natural diamond. The name davemaoite has been adopted for this mineral.
Structure
The perovskite structure (first identified in the mineral perovskite) occurs in substances with the general formula ABX3, where A is a metal that forms large cations, typically magnesium, ferrous iron, or calcium. B is another metal that forms smaller cations, typically silicon, although minor amounts of ferric iron and aluminum can occur. X is typically oxygen. The structure may be cubic, but only if the relative sizes of the ions meet strict criteria. Typically, substances with the perovskite structure show lower symmetry, owing to the distortion of the crystal lattice, and silicate perovskites are in the orthorhombic crystal system.
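The "strict criteria" on relative ionic sizes are commonly quantified by the Goldschmidt tolerance factor, a standard crystal-chemistry measure not named in the text above; a minimal sketch with assumed illustrative ionic radii:

```python
import math

def tolerance_factor(r_A, r_B, r_X):
    """Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2)*(r_B + r_X)).

    t near 1 favors the ideal cubic perovskite; lower values favor
    distorted (e.g. orthorhombic) variants such as bridgmanite.
    """
    return (r_A + r_X) / (math.sqrt(2) * (r_B + r_X))

# Illustrative Shannon-style ionic radii in angstroms (assumed values):
r_Mg, r_Si, r_O = 0.89, 0.40, 1.40
print(f"t(MgSiO3) ~ {tolerance_factor(r_Mg, r_Si, r_O):.2f}")  # < 1: distorted
```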
Occurrence
Stability range
Bridgmanite is a high-pressure polymorph of enstatite, but in the Earth predominantly forms, along with ferropericlase, from the decomposition of ringwoodite (a high-pressure form of olivine) at approximately 660 km depth, or a pressure of about 24 GPa. The depth of this transition depends on the mantle temperature; it occurs slightly deeper in colder regions of the mantle and shallower in warmer regions. The transition from ringwoodite to bridgmanite and ferropericlase marks the bottom of the mantle transition zone and the top of the lower mantle. Bridgmanite becomes unstable at a depth of approximately 2700 km, transforming isochemically to post-perovskite.
Calcium silicate perovskite is stable at slightly shallower depths than bridgmanite, becoming stable at approximately 500 km, and remains stable throughout the lower mantle.
Abundance
Bridgmanite is the most abundant mineral in the mantle. The proportions of bridgmanite and calcium perovskite depend on the overall lithology and bulk composition. In pyrolitic and harzburgitic lithologies, bridgmanite constitutes around 80% of the mineral assemblage, and calcium perovskite less than 10%. In an eclogitic lithology, bridgmanite and calcium perovskite comprise about 30% each. Magnesium silicate perovskite is probably the most abundant mineral phase in the Earth.
Presence in diamonds
Calcium silicate perovskite has been identified at Earth's surface as inclusions in diamonds. The diamonds are formed under high pressure deep in the mantle. With the great mechanical strength of the diamonds a large part of this pressure is retained inside the lattice, enabling inclusions such as the calcium silicate to be preserved in high-pressure form.
Deformation
Experimental deformation of polycrystalline MgSiO3 perovskite under the conditions of the uppermost part of the lower mantle suggests that silicate perovskite deforms by a dislocation creep mechanism. This may help explain the observed seismic anisotropy in the mantle.
See also
Ringwoodite
References
External links
Petrology
Silicate minerals
Perovskites
High pressure science
Earth's mantle | Silicate perovskite | [
"Physics"
] | 913 | [
"High pressure science",
"Applied and interdisciplinary physics"
] |
36,030,443 | https://en.wikipedia.org/wiki/Outline%20of%20nuclear%20power | The following outline is provided as an overview of and topical guide to nuclear power:
Nuclear power – the use of sustained nuclear fission to generate heat and electricity. Nuclear power plants provide about 6% of the world's energy and 13–14% of the world's electricity, with the U.S., France, and Japan together accounting for about 50% of nuclear generated electricity.
What type of thing is nuclear power?
Nuclear power can be described as all of the following:
Nuclear technology (outline) – technology that involves the reactions of atomic nuclei. Among the notable nuclear technologies are nuclear power, nuclear medicine, and nuclear weapons. It has found applications from smoke detectors to nuclear reactors, and from gun sights to nuclear weapons.
Electricity generation – the process of generating electric energy from other forms of energy. The fundamental principles of electricity generation were discovered during the 1820s and early 1830s by the British scientist Michael Faraday. His basic method is still used today: electricity is generated by the movement of a loop of wire, or disc of copper between the poles of a magnet.
Science of nuclear power
Nuclear engineering
Nuclear chemistry
Nuclear fission
Nuclear physics
Atomic nucleus
Ionizing radiation
Nuclear fission
Radiation
Radioactivity
Radioisotope thermoelectric generator
Steam generator (nuclear power)
Nuclear material
Nuclear material
Nuclear fuel
Fertile material
Thorium
Uranium
Enriched uranium
Depleted uranium
Plutonium
Deuterium
Tritium
Nuclear reactor technology
Nuclear reactor technology
Types of nuclear reactors
Advanced gas-cooled reactor
Boiling water reactor
Fast breeder reactor
Fast neutron reactor
Gas-cooled fast reactor
Generation IV reactor
Integral Fast Reactor
Lead-cooled fast reactor
Liquid-metal-cooled reactor
Magnox reactor
Molten salt reactor
Pebble bed reactor
Pressurized water reactor
Sodium-cooled fast reactor
Supercritical water reactor
Very high temperature reactor
Dangers of nuclear power
Lists of nuclear disasters and radioactive incidents
Nuclear reactor accidents in the United States
Radioactive waste
Nuclear proliferation
Nuclear terrorism
Radioactive contamination
Notable accidents
2011 Japanese nuclear accidents
1986 List of Chernobyl-related articles
1985 Soviet submarine K-431
1979 Three Mile Island accident
1968 Soviet submarine K-27
1961 Soviet submarine K-19
History of nuclear power
History of nuclear power
Atomic Energy Commission (disambiguation)
History of uranium
Lists of nuclear disasters and radioactive incidents
United Nations Atomic Energy Commission (1946-1948)
United States Atomic Energy Commission (1946-1974)
Nuclear renaissance
Nuclear power industry
Environmental impact of nuclear power
Nuclear renaissance
Relative cost of electricity generated by different sources
Uranium mining
Uranium mining debate
Nuclear power plant
Uranium processing
Isotope separation
Enriched uranium
Nuclear reprocessing
Reprocessed uranium
Nuclear power plants
Economics of new nuclear power plants
Nuclear power plant emergency response team
List of nuclear reactors
Reactor building
Specific nuclear power plants
List of nuclear power stations
List of cancelled nuclear plants in the United States
Baltic nuclear power plant (disambiguation)
Belarusian nuclear power plant project
Berkeley nuclear power station
Bradwell nuclear power station
Chapelcross nuclear power station
Dodewaard nuclear power plant
Heysham nuclear power station
Hinkley Point A nuclear power station
Hinkley Point C nuclear power station
Hunterston A nuclear power station
Hunterston B nuclear power station
Russian floating nuclear power station
Sizewell nuclear power stations
Trawsfynydd nuclear power station
Nuclear waste
High-level radioactive waste management
List of nuclear waste treatment technologies
Nuclear power by region
Nuclear power by country
List of nuclear power accidents by country
Nuclear power in Asia
Nuclear power in India
India's three stage nuclear power programme
Nuclear power in Indonesia
Nuclear power in Japan
Nuclear power in North Korea
Nuclear power in Pakistan
Nuclear power in South Korea
Nuclear power in Taiwan
Nuclear power in Thailand
Nuclear power in the People's Republic of China
Nuclear power in the Philippines
Nuclear power in the United Arab Emirates
Nuclear power in Australia
Nuclear power in Europe
Nuclear power in the European Union
Nuclear power in Albania
Nuclear power in Belarus
Nuclear power in Bulgaria
Nuclear power in the Czech Republic
Nuclear power in Finland
Nuclear power in France
Nuclear power in Germany
Nuclear power in Italy
Nuclear power in Romania
Nuclear power in Russia
Nuclear power in Scotland
Nuclear power in Spain
Nuclear power in Sweden
Nuclear power in Switzerland
Nuclear power in Ukraine
Nuclear power in the United Kingdom
Nuclear power in North America
Nuclear power in Canada
Nuclear power in the United States
Nuclear power plants in New Jersey
Nuclear power companies
Companies in the nuclear sector – list of all large companies which are active along the nuclear chain, from uranium mining, processing and enrichment, to the actual operation of nuclear power plants and waste processing.
BKW FMB Energie AG
ČEZ Group
China Guangdong Nuclear Power Group
China National Nuclear Corporation
China Nuclear International Uranium Corporation
E.ON
E.ON Kernkraft GmbH
E.ON Sverige
Electrabel
Électricité de France
Eletronuclear
Endesa (Spain)
Energoatom
Fennovoima
Fortum
Iberdrola
Korea Hydro & Nuclear Power
Bhavini
Nuclear Power Corporation of India
Nuclearelectrica
OKB Gidropress
Resun
Rosenergoatom
RWE
Unión Fenosa
Teollisuuden Voima
Vattenfall
Vattenfall Europe Nuclear Energy GmbH
Nuclear safety
Nuclear safety
Event tree
Event tree analysis
Exclusion area
International Nuclear Safety Center
Nuclear power plant emergency response team
Reactor protection system
Nuclear safety in the United States
Nuclear power in space
Nuclear power in space
Advanced Stirling Radioisotope Generator
Politics of nuclear power
Alsos Digital Library for Nuclear Issues
Anti-nuclear movement
Anti-nuclear movement in Germany
Anti-nuclear movement in the United States
Anti-nuclear power movement in Japan
Anti-nuclear protests
Anti-nuclear protests in the United States
Nuclear energy policy
Nuclear power debate
Nuclear power phase-out
Nuclear power proposed as renewable energy
Nuclear whistleblowers
Nuclear renaissance
Uranium mining debate
Politics of nuclear power by region
1978 Austrian nuclear power referendum
2008 Lithuanian nuclear power referendum
1980 Swedish nuclear power referendum
Nuclear regulatory agencies
Association Nationale des Comités et Commissions Locales d'Information (France)
Atomic Energy Regulatory Board (India)
Autorité de sûreté nucléaire (France)
Bangladesh Atomic Energy Commission
Brazilian–Argentine Agency for Accounting and Control of Nuclear Materials
Canadian Nuclear Safety Commission
International Nuclear Regulators' Association
Japanese Atomic Energy Commission
Japanese Nuclear Safety Commission
Nuclear and Industrial Safety Agency (Japan, retired)
Nuclear Regulation Authority (Japan)
Kernfysische dienst (The Netherlands)
Nuclear Regulatory Commission (USA)
Pakistan Nuclear Regulatory Authority
Säteilyturvakeskus (Finland)
Nuclear power organizations
See also Nuclear regulatory agencies, above
Alsos Digital Library for Nuclear Issues
International Nuclear Safety Center
Against
Friends of the Earth International, a network of environmental organizations in 77 countries.
Greenpeace International, a non-governmental environmental organization with offices in 41 countries.
Nuclear Information and Resource Service (International)
World Information Service on Energy (International)
Sortir du nucléaire (France)
Pembina Institute (Canada)
Institute for Energy and Environmental Research (United States)
Sayonara Nuclear Power Plants (Japan)
Supportive
Nuclear power groups
World Nuclear Association, a confederation of companies connected with nuclear power production. (International)
International Atomic Energy Agency (IAEA)
Nuclear Energy Institute (United States)
American Nuclear Society (United States)
United Kingdom Atomic Energy Authority (United Kingdom)
EURATOM (Europe)
Atomic Energy of Canada Limited (Canada)
Environmentalists for Nuclear Energy (International)
Nuclear power publications
Nuclear Power and the Environment
Reaction Time: Climate Change and the Nuclear Option
World Nuclear Industry Status Report
In Mortal Hands
Persons influential in nuclear power
Scientists
Enrico Fermi – an American physicist
James Chadwick
Politicians
Harry Truman
Ed Markey
Naoto Kan
Nobuto Hosaka
Angela Merkel
Engineers
David Lochbaum
Arnold Gundersen
George Galatis
See also
Fusion power
Future energy development
German nuclear energy project
Inertial fusion power plant
Linear no-threshold model
Polywell
World energy resources and consumption
References
External links
Nuclear Energy Institute – Beneficial Uses of Radiation
Nuclear Technology
Reactor Power Plant Technology Education – Includes the PC-based BWR reactor simulation.
Alsos Digital Library for Nuclear Issues – Annotated Bibliography on Nuclear Power
An entry to nuclear power through an educational discussion of reactors
Argonne National Laboratory – Maps of Nuclear Power Reactors
Briefing Papers from the Australian EnergyScience Coalition
British Energy – Understanding Nuclear Energy / Nuclear Power
Coal Combustion: Nuclear Resource or Danger?
Energy Information Administration provides lots of statistics and information
How Nuclear Power Works
IAEA Website The International Atomic Energy Agency
IAEA's Power Reactor Information System (PRIS)
Nuclear Power: Climate Fix or Folly? (2009)
Nuclear Power Education
Nuclear Tourist.com, nuclear power information
Nuclear Waste Disposal Resources
The World Nuclear Industry Status Reports website
Wilson Quarterly – Nuclear Power: Both Sides
TED Talk – Bill Gates on energy: Innovating to zero!
LFTR in 5 Minutes – Creative Commons Film Compares PWR to Th-MSR/LFTR Nuclear Power.
Nuclear power
Nuclear power | Outline of nuclear power | [
"Physics"
] | 1,750 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
36,035,103 | https://en.wikipedia.org/wiki/Zinc%20finger%20transcription%20factor | Zinc finger transcription factors or ZF-TFs, are transcription factors composed of a zinc finger-binding domain and any of a variety of transcription-factor effector-domains that exert their modulatory effect in the vicinity of any sequence to which the protein domain binds.
Zinc finger protein transcription factors can be encoded by genes small enough that a number of such genes fit into a single vector, allowing medical intervention to control the expression of multiple genes and initiate an elaborate cascade of events. It is also possible to target a sequence that is common to multiple (usually functionally related) genes, so that the transcription of all of them is controlled by a single transcription factor, or to target a family of related genes by modulating the expression of the endogenous transcription factor(s) that control them. ZF-TFs also have the advantage that the targeted sequence need not be symmetrical, unlike most other DNA-binding motifs based on natural transcription factors, which bind as dimers.
Applications
By targeting a ZF-TF toward a specific DNA sequence and attaching the necessary effector domain, it is possible to downregulate or upregulate the expression of the gene(s) in question while using the same DNA-binding domain. The expression of a gene can also be downregulated by blocking elongation by RNA polymerase in the coding region (without the need for an effector domain), or the RNA itself can be targeted. Besides their obvious use as tools for research into gene function, engineered ZF-TFs have therapeutic potential, including the correction of abnormal gene expression profiles (e.g., erbB-2 overexpression in human adenocarcinomas) and antiretroviral applications (e.g., against HIV-1).
See also
Artificial transcription factor, of which the ZF-TF is a type
Gene therapy
Zinc finger proteins
Zinc finger chimera
Zinc finger nuclease
References
Transcription factors
Zinc proteins | Zinc finger transcription factor | [
"Chemistry",
"Biology"
] | 410 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
44,596,487 | https://en.wikipedia.org/wiki/Approach%20and%20departure%20angles | Approach angle is the maximum angle of a ramp onto which a vehicle can climb from a horizontal plane without interference. It is defined as the angle between the ground and the line drawn between the front tire and the lowest-hanging part of the vehicle at the front overhang. Departure angle is its counterpart at the rear of the vehicle – the maximum ramp angle from which the car can descend without damage. Approach and departure angles are also referred to as ramp angles.
Approach and departure angles are indicators of off-road ability of the vehicle: they indicate how steep of obstacles, such as rocks or logs, the vehicle can negotiate according to its body shape alone.
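In a simplified two-dimensional picture, the approach or departure angle can be estimated from the height of the lowest-hanging point of the overhang and its horizontal distance from the nearest tire's contact patch; the sketch below uses assumed illustrative dimensions and treats the contact patch as a point (a fuller model would use the line tangent to the tire circle):

```python
import math

def ramp_angle_deg(clearance_h, horizontal_d):
    """Simplified approach/departure angle in degrees.

    clearance_h:  height of the lowest-hanging point of the overhang (m)
    horizontal_d: horizontal distance from that point to the nearest
                  tire's contact patch (m)
    """
    return math.degrees(math.atan2(clearance_h, horizontal_d))

# Illustrative SUV-like numbers (assumed): 0.35 m clearance under the front
# bumper lip, which sits 0.55 m ahead of the front-tire contact patch.
print(f"approach angle ~ {ramp_angle_deg(0.35, 0.55):.1f} degrees")
```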
See also
Breakover angle
Overhang (automotive)
Ride height
References
External links
Approach and Departure Angles at Why High End?
Automotive engineering
Angle | Approach and departure angles | [
"Physics",
"Engineering"
] | 159 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Automotive engineering",
"Mechanical engineering by discipline",
"Wikipedia categories named after physical quantities",
"Angle"
] |
44,600,123 | https://en.wikipedia.org/wiki/Sclerotinia%20bulborum | Sclerotinia bulborum is a plant pathogen infecting the bulbs of plants, causing black slime disease. It affects a number of ornamental bulbous plants including Iris, Hyacinth, Muscari and Narcissus.
References
Bibliography
External links
Index Fungorum
USDA ARS Fungal Database
Fungi of the United States
Mycobank
Fungal plant pathogens and diseases
Sclerotiniaceae
Fungus species | Sclerotinia bulborum | [
"Biology"
] | 86 | [
"Fungi",
"Fungus species"
] |
44,600,975 | https://en.wikipedia.org/wiki/Equatorial%20plasma%20bubble | Equatorial plasma bubbles are an ionospheric phenomenon near the Earth's geomagnetic equator at night time. They affect radio waves by causing varying delays. They degrade the performance of GPS.
Different times of the year and locations have different frequencies of occurrence. In Northern Australia, the most common times are February to April and August to October, when a plasma bubble is expected every night. Plasma bubbles have dimensions around 100 km. Plasma bubbles form after dark when the sun stops ionising the ionosphere. The ions recombine, forming a lower-density layer. This layer can rise through the more ionized layers above via convection, which makes a plasma bubble. The bubbles are turbulent with irregular edges.
An equatorial plasma bubble could have affected the Battle of Shah-i-Kot by disabling communications from a communications satellite to a helicopter.
On August 27, 2024, China's LARID radar, a powerful long-range over-the-horizon radar developed for military purposes such as detecting satellites, detected an equatorial plasma bubble over the Egyptian pyramids.
References
Ionosphere | Equatorial plasma bubble | [
"Physics",
"Astronomy"
] | 224 | [
"Plasma physics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Plasma physics stubs"
] |
44,605,563 | https://en.wikipedia.org/wiki/Ventilatory%20threshold | In kinesiology, the ventilatory threshold (VT1) refers to the point during exercise at which the volume of air breathed out (expiratory ventilation) starts to increase at an exponentially greater rate than VO2 (breath-by-breath volume of oxygen (O2)). VT1 is thought to reflect a person's anaerobic threshold, the point at which the oxygen supplied to the muscles no longer meets their oxygen requirements at a given work rate, and therefore the lactate threshold, the point at which lactate begins to accumulate in the blood. The link arises because, with increasing dependence on anaerobic glycolysis, increasing amounts of CO2 must be exhaled to accommodate its production during the conversion of lactic acid to lactate.
As exercise intensity increases, breathing becomes faster, first steadily and then more rapidly. When ventilation surpasses its normal rate of increase, the ventilatory threshold has been reached. For most people this threshold lies at exercise intensities between 50% and 75% of VO2 max. A major factor affecting a person's ventilatory threshold is their maximal ventilation (the amount of air entering and exiting the lungs), which depends on their experience with the activity and their physical fitness. Comparison studies have shown that the ventilatory threshold occurs at a higher intensity in people who are more active or have trained for the exercise in question, although in some cases shorter continuous tests can be used because of rapid alterations in ventilation.
Methods
Ventilation Curve – Plot VE vs. VO2, watts, or time – the point at which there is a non-linear increase in ventilation
V-Slope Method – Plot VCO2 vs. VO2 – the point at which the increase in VCO2 is greater than the increase in VO2 (a minimal detection sketch follows this list)
Ventilatory Equivalents Method – Plot VE/VO2 and VE/VCO2 vs. watts, time, or VO2 – the point at which VE/VO2 increases while VE/VCO2 decreases or stays the same
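A minimal sketch of breakpoint detection for the V-Slope method (the synthetic ramp-test data and the brute-force two-segment fit are illustrative assumptions; clinical software uses more robust regression):

```python
import numpy as np

def v_slope_threshold(vo2, vco2):
    """Estimate VT as the VO2 at which the VCO2-vs-VO2 slope increases.

    Tries every interior breakpoint, fits one line to each side, and
    returns the breakpoint minimizing the total squared error.
    """
    best_vo2, best_err = None, np.inf
    for i in range(3, len(vo2) - 3):  # keep at least 3 points per segment
        err = 0.0
        for seg in (slice(None, i), slice(i, None)):
            coef = np.polyfit(vo2[seg], vco2[seg], 1)
            err += np.sum((np.polyval(coef, vo2[seg]) - vco2[seg]) ** 2)
        if err < best_err:
            best_vo2, best_err = vo2[i], err
    return best_vo2

# Synthetic ramp test: slope ~0.9 below VT (2.0 L/min), ~1.4 above it.
vo2 = np.linspace(0.5, 3.5, 31)
vco2 = np.where(vo2 < 2.0, 0.9 * vo2, 0.9 * 2.0 + 1.4 * (vo2 - 2.0))
print(f"estimated VT at VO2 ~ {v_slope_threshold(vo2, vco2):.2f} L/min")
```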
Sample values
Frangolias DD, Rhodes EC (School of Human Kinetics, University of British Columbia, Vancouver, Canada), Medicine and Science in Sports and Exercise, 1995, 27(7):1007–1013:
A government experiment to test the ventilatory threshold was held between November and December 2004. Thirty-two physically active males (mean age: 22.3 years; height: 180.5 cm; body mass: 75.5 kg; VO2max: 57.1 mL/kg/min) completed a continuous treadmill test of increasing load while cardiorespiratory and other variables were monitored using ECG (a recording of the electrical activity of the heart) and a gas analyzer. During the test, subjects were asked to rate their perceived discomfort on a scale from 6 to 20. The RPE threshold was recorded as a constant value of 12-13. Mean values of the ventilatory and RPE thresholds, expressed through the monitored parameters, were compared using a t-test for dependent samples. No significant difference was found between the mean values of the ventilatory and RPE thresholds when they were expressed by parameters such as speed, load, heart rate, and absolute and relative oxygen consumption. The conclusion of the experiment was that the fixed value (12-13) of the RPE scale may be used to detect the exercise intensity that corresponds to the ventilatory threshold.
VO2 max levels
Maximum oxygen intake, VO2 max, is one of the best measures of cardiovascular fitness and maximal aerobic power. VO2 max averages around 35–40 mL/(kg·min) in healthy males and 27–31 mL/(kg·min) in healthy females. These scores can improve with training. Factors that affect VO2 max are age, sex, fitness, training, and genetics. While scores in the upper 80s and 90s have been recorded by legendary endurance athletes such as Greg LeMond, Miguel Indurain, and Steve Prefontaine, most competitive endurance athletes have scores in the mid to high 60s. Cycling, rowing, swimming, and running are some of the main sports that push VO2 levels to the maximum. The ventilatory threshold and lactate threshold are expressed as a percentage of VO2 max; beyond this percentage, the ability to sustain the work rate rapidly declines as high-intensity but short-duration energy systems such as glycolysis and ATP-PC are relied on more heavily.
See also
Anaerobic exercise
Lactate threshold
VO2 max
References
"Determination of Ventilatory Threshold Based on Subjective Rating of Perceived Exertion." National Center for Biotechnology Information. U.S. National Library of Medicine, n.d. Web. 03 Nov. 2014.
Hoffman, Shirl J. Introduction to Kinesiology: Studying Physical Activity. Champaign, IL: Human Kinetics, 2005
Cheatham, Dr. "Topic 3: Determination of the Lactate and Ventilatory Thresholds Topic 3: Determination of the Lactate and Ventilatory Thresholds. Review of Physiology, Methods of Detection, and Application." Web. 30 Aug, 2013.
"Optimize Endurance Training." Optimize Endurance Training. N.p., n.d. Web. 02 Nov. 2014.
"Changes in ventilatory threshold with exercise training in a sedentary population: the HERITAGE Family Study" National Center for Biotechnology Information. U.S. National Library of Medicine, n.d. Web. 03 Nov. 2014.
"Result Filters." National Center for Biotechnology Information. U.S. National Library of Medicine, n.d. Web. 03 Nov. 2014.
"Ventilatory Threshold." TheFreeDictionary.com. N.p., n.d. Web. 03 Nov. 2014.
Fitzgerald, Jason. “ VO2 Max Testing and Ventilatory Threshold: Endurance Testing for Runners.” Strength Running. 25, July 2010. Web. 8, August 2010.
"VO2 Max, Aerobic Power & Maximal Oxygen Uptake." Sports Fitness Advisor.
Exercise biochemistry
Exercise physiology
Respiratory physiology
Sports medicine | Ventilatory threshold | [
"Chemistry",
"Biology"
] | 1,251 | [
"Biochemistry",
"Exercise biochemistry"
] |
44,610,362 | https://en.wikipedia.org/wiki/Chan%E2%80%93Lam%20coupling | The Chan–Lam coupling reaction, also known as the Chan–Evans–Lam coupling, is a cross-coupling reaction between an aryl boronic acid and an amine or an alcohol to form the corresponding secondary aryl amine or aryl ether, respectively. The Chan–Lam coupling is catalyzed by copper complexes. It can be conducted in air at room temperature. The more popular Buchwald–Hartwig coupling relies on the use of palladium.
History
Dominic Chan, David Evans, and Patrick Lam published their work nearly simultaneously. The mechanism, however, remained uncertain for many years. Later developments by others extended the scope to carboxylic acids, giving aryl ester products.
Mechanism
Analysis of the mechanism is complicated by the lability of copper reagents and the multicomponent nature of the reaction. The reaction proceeds via the formation of copper-aryl complexes. A copper(III)-aryl-alkoxide or copper(III)-aryl-amide intermediate undergoes reductive elimination to give the aryl ether or aryl amine, respectively:
Ar-Cu(III)-NHR-L2 → Ar-NHR + Cu(I)L2
Ar-Cu(III)-OR-L2 → Ar-OR + Cu(I)L2
Example
An example of the Chan–Lam coupling to synthesize biologically active compounds is shown below:
Compound 1, a pyrrole, is coupled with aryl boronic acid, 2, to afford product 3, which is then carried forward to the target 4. The nitrile group of 2 does not poison the catalyst. Pyridine is the ligand used for the reaction. Although the reaction requires three days, it was carried out at room temperature in ambient air and resulted in a 93% yield.
Further reading
References
Carbon-carbon bond forming reactions
Name reactions | Chan–Lam coupling | [
"Chemistry"
] | 396 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
23,276,709 | https://en.wikipedia.org/wiki/Reinforced%20rubber | Reinforced rubber products are one of the largest groups of composite materials, though rarely referred to as composite materials. Familiar examples are automobile tyres, hoses, and conveyor belts.
Composite reinforced structure
Reinforced rubber products combine a rubber matrix and a reinforcing material so that high strength to flexibility ratios can be achieved. The reinforcing material, usually a kind of fibre, provides the strength and stiffness. The rubber matrix, with low strength and stiffness, provides air-fluid tightness and supports the reinforcing materials to maintain their relative positions. These positions are of great importance because they influence the resulting mechanical properties.
A composite structure in which all fibres are loaded equally everywhere when pressurized, is called an isotropic structure, and the type of loading is named an isotensoidal loading. To meet the isotensoidal concept the structure geometry must have an isotensoid meridian profile and the fibres must be positioned following geodesic paths. A geodesic path connects two arbitrary points on a continuous surface by means of the shortest possible way.
Straight rubber hoses
To achieve optimal loading in a straight rubber hose, the fibres must be positioned at an angle of approximately 54.7 angular degrees, also referred to as the magic angle. The magic angle of 54.7° exactly balances the internal-pressure-induced longitudinal stress and the hoop (circumferential) stress, as observed in most biological pressurized fibre-wound cylinders, such as arteries. If the fibre angle is initially above or below 54.7°, it will change under increased internal pressure until it reaches the magic angle, where hoop stresses and longitudinal stresses equalize, with concomitant accommodations in hose diameter and hose length. A hose with an initially low fibre angle will shift under pressure toward 54.7°, inducing a hose diameter increase and a length decrease, whereas a hose with an initially high fibre angle will drop toward 54.7°, inducing a hose diameter decrease and a length increase. The equilibrium state is a fibre angle of 54.7°. In this situation, the fibres tend to be loaded purely in tension, so ~100% of their strength resists the forces acting on the hose due to the internal pressure. (The magic angle of 54.7 angular degrees for cylindrical shapes is based on calculations in which the influence of the matrix material is neglected. Therefore, depending on the stiffness of the rubber material used, the actual equilibrium angle can vary a few tenths of a degree from the magic angle.)
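The value 54.7° follows from thin-walled pressure-vessel stress balance; a short derivation sketch (standard membrane analysis, with the rubber matrix neglected as the parenthetical above notes):

```latex
% Thin-walled cylinder under internal pressure p, radius r, wall thickness t:
%   hoop stress   \sigma_h = pr/t,
%   axial stress  \sigma_a = pr/2t,
% so \sigma_h = 2\sigma_a. A fibre at angle \theta to the hose axis carries
% these loads in pure tension only if
%   \tan^2\theta = \sigma_h / \sigma_a = 2
%   \;\Rightarrow\; \theta = \arctan\sqrt{2} \approx 54.7^\circ .
```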
When the fibres of the reinforcement structure are placed under angles larger than 54.7 angular degrees, the fibres want to relocate to their optimal path when pressurized. This means that the fibres will re-orient themselves until they have reached their force equilibrium. In this case this will lead to an increase in length and a decrease in diameter. With angles smaller than 54.7 degrees the opposite will occur. A product which makes use of this principle is a pneumatic muscle.
Reinforcement of complex shaped rubber products
For a cylinder with a constant diameter, the reinforcement angle is constant as well and is 54.7º. This is also known as the magic angle or neutral angle. The neutral angle is the angle at which a wound structure is in equilibrium. For a cylinder this is 54.7º, but for a more complex shape like a bellows, which has a varying radius over the length of the product, this neutral angle is different for each radius. In other words, for complex shapes there is not one magic angle; instead, the fibres follow a geodesic path with angles varying with the change in radius. To obtain a reinforcement structure with isotensoidal loading, the geometry of the complex shape must follow an isotensoid meridian profile.
Reinforcement application technology
The fabric reinforcement can be applied to the rubber products by different processes. For straight hoses, the most used processes are braiding, spiralling, knitting, and wrapping. The first three processes have in common that multiple strands of fibres are applied to the product simultaneously in a predetermined pattern in an automated process. The fourth process comprises manual or semi-automated wrapping of rubber sheets reinforced with fabric plies. For the reinforcement of complex shaped rubber products like bellows, most manufacturers use these fabric-reinforced rubber sheets. These sheets are made by calendering rubber onto pre-woven fabric plies. The products are manufactured by wrapping (mostly manually) these sheets around a mandrel until enough rubber and reinforcement is applied. However, the disadvantage of using these sheets is that it is impossible to control the positioning of the individual fibres of the fabric when applied on complex shapes. Therefore, no geodesic paths can be achieved, and consequently no isotensoid loading is possible. To obtain isotensoid loading on a complex shape, the shape must have an isotensoidal profile and geodesic positioning of the fibre structure is required. This can be achieved by using automated winding processes like filament winding or spiralling.
References
Composite materials | Reinforced rubber | [
"Physics"
] | 1,020 | [
"Materials",
"Composite materials",
"Matter"
] |
23,280,456 | https://en.wikipedia.org/wiki/Lamination%20%28topology%29 | In topology, a branch of mathematics, a lamination is a:
"topological space partitioned into subsets"
decoration (a structure or property at a point) of a manifold in which some subset of the manifold is partitioned into sheets of some lower dimension, and the sheets are locally parallel.
A lamination of a surface is a partition of a closed subset of the surface into smooth curves.
It may or may not be possible to fill the gaps in a lamination to make a foliation.
Examples
A geodesic lamination of a 2-dimensional hyperbolic manifold is a closed subset together with a foliation of this closed subset by geodesics. These are used in Thurston's classification of elements of the mapping class group and in his theory of earthquake maps.
Quadratic laminations, which remain invariant under the angle-doubling map. These laminations are associated with quadratic maps. A quadratic lamination is a closed collection of chords in the unit disc, and it is also a topological model of the Mandelbrot set or of Julia sets.
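As a small illustration of this invariance, the sketch below doubles both endpoints of a chord, written as fractions of a full turn; the leaf (1/7, 2/7) used here is a standard period-three example (an illustrative choice, not one named in the text above) and returns to itself after three iterations:

```python
from fractions import Fraction

def double(t):
    """The angle-doubling map t -> 2t (mod 1) on the circle R/Z."""
    return (2 * t) % 1

# The chord with endpoints 1/7 and 2/7 lies on a 3-cycle of leaves:
# (1/7, 2/7) -> (2/7, 4/7) -> (4/7, 1/7) -> (1/7, 2/7)
leaf = (Fraction(1, 7), Fraction(2, 7))
for _ in range(4):
    print(leaf)
    leaf = (double(leaf[0]), double(leaf[1]))
```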
See also
Train track (mathematics)
Orbit portrait
Notes
References
Conformal Laminations, thesis by Vineet Gupta, California Institute of Technology, Pasadena, California, 2004
Topology
Manifolds | Lamination (topology) | [
"Physics",
"Mathematics"
] | 245 | [
"Space (mathematics)",
"Topology stubs",
"Topological spaces",
"Topology",
"Space",
"Manifolds",
"Geometry",
"Spacetime"
] |
23,280,931 | https://en.wikipedia.org/wiki/Ladder%20operator | In linear algebra (and its application to quantum mechanics), a raising or lowering operator (collectively known as ladder operators) is an operator that increases or decreases the eigenvalue of another operator. In quantum mechanics, the raising operator is sometimes called the creation operator, and the lowering operator the annihilation operator. Well-known applications of ladder operators in quantum mechanics are in the formalisms of the quantum harmonic oscillator and angular momentum.
Terminology
The relationship between the raising and lowering ladder operators and the creation and annihilation operators commonly used in quantum field theory lies in representation theory. The creation operator ai† increments the number of particles in state i, while the corresponding annihilation operator ai decrements the number of particles in state i. This clearly satisfies the requirements of the above definition of a ladder operator: the incrementing or decrementing of the eigenvalue of another operator (in this case the particle number operator).
Confusion arises because the term ladder operator is typically used to describe an operator that acts to increment or decrement a quantum number describing the state of a system. To change the state of a particle with the creation/annihilation operators of QFT requires the use of both annihilation and creation operators. An annihilation operator is used to remove a particle from the initial state and a creation operator is used to add a particle to the final state.
The term "ladder operator" or "raising and lowering operators" is also sometimes used in mathematics, in the context of the theory of Lie algebras and in particular the affine Lie algebras. For example to describe the su(2) subalgebras, the root system and the highest weight modules can be constructed by means of the ladder operators. In particular, the highest weight is annihilated by the raising operators; the rest of the positive root space is obtained by repeatedly applying the lowering operators (one set of ladder operators per subalgebra).
Motivation from mathematics
From a representation-theory standpoint, a linear representation of a semi-simple Lie group in continuous real parameters induces a set of generators for the Lie algebra. Complex linear combinations of these are the ladder operators.
For each parameter there is a set of ladder operators; these are then a standardized way to navigate one dimension of the root system and root lattice. The ladder operators of the quantum harmonic oscillator or the "number representation" of second quantization are just special cases of this fact. Ladder operators then become ubiquitous in quantum mechanics from the angular momentum operator, to coherent states and to discrete magnetic translation operators.
General formulation
Suppose that two operators X and N have the commutation relation
[N, X] = cX
for some scalar c. If |n⟩ is an eigenstate of N with eigenvalue equation
N|n⟩ = n|n⟩,
then the operator X acts on |n⟩ in such a way as to shift the eigenvalue by c:
N X|n⟩ = (XN + [N, X])|n⟩ = (XN + cX)|n⟩ = (n + c) X|n⟩.
In other words, if |n⟩ is an eigenstate of N with eigenvalue n, then X|n⟩ is an eigenstate of N with eigenvalue n + c or is zero. The operator X is a raising operator for N if c is real and positive, and a lowering operator for N if c is real and negative.
If N is a Hermitian operator, then c must be real, and the Hermitian adjoint of X obeys the commutation relation
[N, X†] = −cX†.
In particular, if X is a lowering operator for N, then X† is a raising operator for N and conversely.
Angular momentum
A particular application of the ladder operator concept is found in the quantum-mechanical treatment of angular momentum. For a general angular momentum vector J with components Jx, Jy and Jz, one defines the two ladder operators
J± = Jx ± iJy,
where i is the imaginary unit.
The commutation relation between the cartesian components of any angular momentum operator is given by
[Ji, Jj] = iℏ εijk Jk,
where εijk is the Levi-Civita symbol (with summation over the repeated index k implied), and each of i, j and k can take any of the values x, y and z.
From this, the commutation relations among the ladder operators and Jz are obtained:
[Jz, J±] = ±ℏJ±,
[J+, J−] = 2ℏJz
(technically, this is the Lie algebra of su(2)).
The properties of the ladder operators can be determined by observing how they modify the action of the Jz operator on a given state:
Jz J±|j m⟩ = (J± Jz + [Jz, J±])|j m⟩ = ℏ(m ± 1) J±|j m⟩.
Compare this result with
Jz|j (m ± 1)⟩ = ℏ(m ± 1)|j (m ± 1)⟩.
Thus, one concludes that J±|j m⟩ is some scalar multiplied by |j (m ± 1)⟩:
J+|j m⟩ = α|j (m + 1)⟩,
J−|j m⟩ = β|j (m − 1)⟩.
This illustrates the defining feature of ladder operators in quantum mechanics: the incrementing (or decrementing) of a quantum number, thus mapping one quantum state onto another. This is the reason that they are often known as raising and lowering operators.
To obtain the values of α and β, first take the norm of each operator, recognizing that J+ and J− are a Hermitian conjugate pair (J±† = J∓):
⟨j m|J+† J+|j m⟩ = |α|², ⟨j m|J−† J−|j m⟩ = |β|².
The product of the ladder operators can be expressed in terms of the commuting pair J² and Jz:
J∓ J± = J² − Jz² ∓ ℏJz = J² − Jz(Jz ± ℏ).
Thus, one may express the values of |α|² and |β|² in terms of the eigenvalues of J² and Jz:
|α|² = ℏ²j(j + 1) − ℏ²m(m + 1) = ℏ²(j − m)(j + m + 1),
|β|² = ℏ²j(j + 1) − ℏ²m(m − 1) = ℏ²(j + m)(j − m + 1).
The phases of α and β are not physically significant, thus they can be chosen to be positive and real (Condon–Shortley phase convention). We then have
J+|j m⟩ = ℏ√((j − m)(j + m + 1)) |j (m + 1)⟩ = ℏ√(j(j + 1) − m(m + 1)) |j (m + 1)⟩,
J−|j m⟩ = ℏ√((j + m)(j − m + 1)) |j (m − 1)⟩ = ℏ√(j(j + 1) − m(m − 1)) |j (m − 1)⟩.
Confirming that m is bounded by the value of j (−j ≤ m ≤ j), one has
J+|j j⟩ = 0, J−|j (−j)⟩ = 0.
The above demonstration is effectively the construction of the Clebsch–Gordan coefficients.
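These matrix elements are easy to verify numerically. An illustrative sketch (ℏ = 1 and the choice j = 3/2 are arbitrary assumptions, not from the text):

```python
import numpy as np

# Build Jz, J+ and J- in the |j, m> basis from the matrix elements above:
# <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1)), with hbar = 1.
j = 3 / 2
ms = np.arange(-j, j + 1)                 # m = -j, ..., +j
dim = len(ms)

Jz = np.diag(ms)
Jp = np.zeros((dim, dim))
for k, m in enumerate(ms[:-1]):           # J+ shifts m up by one step
    Jp[k + 1, k] = np.sqrt(j * (j + 1) - m * (m + 1))
Jm = Jp.T                                 # J- is the Hermitian adjoint of J+

# Check the commutation relations quoted earlier:
print(np.allclose(Jz @ Jp - Jp @ Jz, Jp))        # [Jz, J+] = +J+
print(np.allclose(Jz @ Jm - Jm @ Jz, -Jm))       # [Jz, J-] = -J-
print(np.allclose(Jp @ Jm - Jm @ Jp, 2 * Jz))    # [J+, J-] = 2 Jz
```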
Applications in atomic and molecular physics
Many terms in the Hamiltonians of atomic or molecular systems involve the scalar product of angular momentum operators. An example is the magnetic dipole term in the hyperfine Hamiltonian:
H_D = A I · J,
where A is the hyperfine coupling constant, I is the nuclear spin and J is the electronic angular momentum.
The angular momentum algebra can often be simplified by recasting it in the spherical basis. Using the notation of spherical tensor operators, the "−1", "0" and "+1" components of J(1) ≡ J are given by
J(1)−1 = (Jx − iJy)/√2 = J−/√2,
J(1)0 = Jz,
J(1)+1 = −(Jx + iJy)/√2 = −J+/√2.
From these definitions, it can be shown that the above scalar product can be expanded as
I(1) · J(1) = I(1)0 J(1)0 − I(1)+1 J(1)−1 − I(1)−1 J(1)+1 = Iz Jz + (I+ J− + I− J+)/2.
The significance of this expansion is that it clearly indicates which states are coupled by this term in the Hamiltonian, that is those with quantum numbers differing by mi = ±1 and mj = ∓1 only.
Harmonic oscillator
Another application of the ladder operator concept is found in the quantum-mechanical treatment of the harmonic oscillator. We can define the lowering and raising operators as
a = √(mω/2ℏ) (x̂ + i p̂/(mω)),
a† = √(mω/2ℏ) (x̂ − i p̂/(mω)).
They provide a convenient means to extract energy eigenvalues without directly solving the system's differential equation.
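An illustrative numerical sketch (ℏ = ω = 1 and the 6-level truncation are arbitrary assumptions): in a truncated Fock basis the ladder operators are simple shift matrices, and the eigenvalues En = ℏω(n + 1/2) of H = ℏω(a†a + 1/2) can be read off directly:

```python
import numpy as np

N = 6                                          # truncation size (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # a|n>  = sqrt(n)   |n-1>
adag = a.T                                     # a†|n> = sqrt(n+1) |n+1>

H = adag @ a + 0.5 * np.eye(N)                 # hbar = omega = 1
print(np.diag(H))                              # [0.5 1.5 2.5 3.5 4.5 5.5]

# [a, a†] = 1 holds exactly except in the last row, an artifact of truncation:
print(a @ adag - adag @ a)
```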
Hydrogen-like atom
There are two main approaches given in the literature using ladder operators, one using the Laplace–Runge–Lenz vector, another using factorization of the Hamiltonian.
Laplace–Runge–Lenz vector
Another application of the ladder operator concept is found in the quantum mechanical treatment of the electronic energy of hydrogen-like atoms and ions. The Laplace–Runge–Lenz vector commutes with the Hamiltonian for an inverse square spherically symmetric potential and can be used to determine ladder operators for this potential.
We can define the lowering and raising operators (based on the classical Laplace–Runge–Lenz vector)
where is the angular momentum, is the linear momentum, is the reduced mass of the system, is the electronic charge, and is the atomic number of the nucleus.
Analogous to the angular momentum ladder operators, one has and .
The commutators needed to proceed are
and
Therefore,
and
so
where the "?" indicates a nascent quantum number which emerges from the discussion.
Given the Pauli equations IV:
and III:
and starting with the equation
and expanding, one obtains (assuming l_max is the maximum value of the angular momentum quantum number consonant with all other conditions)
which leads to the Rydberg formula
E_n = −μZ²e⁴/(2ℏ²n²),
implying that n = l_max + 1, where n is the traditional quantum number.
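Evaluated numerically (an illustrative sketch; the Rydberg energy constant in electronvolts is the standard value, not taken from the article):

```python
RYDBERG_EV = 13.605693          # hydrogen Rydberg energy, eV

def energy_level(n, Z=1):
    """Bound-state energy of a hydrogen-like atom: E_n = -(Z/n)^2 * Ry."""
    return -(Z ** 2) * RYDBERG_EV / n ** 2

for n in (1, 2, 3):
    print(f"E_{n} = {energy_level(n):9.4f} eV")   # -13.6057, -3.4014, -1.5117
```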
Factorization of the Hamiltonian
The Hamiltonian for a hydrogen-like potential can be written in spherical coordinates as
where , and the radial momentum
which is real and self-conjugate.
Suppose is an eigenvector of the Hamiltonian, where is the angular momentum, and represents the energy, so , and we may label the Hamiltonian as :
The factorization method was developed by Infeld and Hull for differential equations. Newmarch and Golding applied it to spherically symmetric potentials using operator notation.
Suppose we can find a factorization of the Hamiltonian by operators as
and
for scalars and . The vector may be evaluated in two different ways as
which can be re-arranged as
showing that is an eigenstate of with eigenvalue
If , then , and the states and
have the same energy.
For the hydrogenic atom, setting
with
a suitable equation for is
with
There is an upper bound to the ladder operator if the energy is negative (so
for some ); then it follows from equation () that
and can be identified with
Relation to group theory
Whenever there is degeneracy in a system, there is usually a related symmetry property and group. The degeneracy of the energy levels for the same value of but different angular momenta has been identified as the SO(4) symmetry of the spherically symmetric Coulomb potential.
3D isotropic harmonic oscillator
The 3D isotropic harmonic oscillator has a potential given by
V(r) = (1/2) μω²r².
It can similarly be managed using the factorization method.
Factorization method
A suitable factorization is given by
with
and
Then
and continuing this,
Now the Hamiltonian only has positive energy levels as can be seen from
This means that for some value of the series must terminate with
and then
This is decreasing in energy by unless for some value of . Identifying this value as gives
It then follows the so that
giving a recursion relation on with solution
There is degeneracy caused from angular momentum; there is additional degeneracy caused by the oscillator potential.
Consider the states
and apply the lowering operators :
giving the sequence
with the same energy but with decreasing by 2.
In addition to the angular momentum degeneracy, this gives a total degeneracy of (n + 1)(n + 2)/2.
Relation to group theory
The degeneracies of the 3D isotropic harmonic oscillator are related to the special unitary group SU(3).
History
Many sources credit Paul Dirac with the invention of ladder operators. Dirac's use of the ladder operators shows that the total angular momentum quantum number needs to be a non-negative half-integer multiple of ℏ.
See also
Creation and annihilation operators
Quantum harmonic oscillator
Chevalley basis
References
Quantum mechanics
"Physics"
] | 2,158 | [
"Quantum operators",
"Quantum mechanics"
] |
23,281,338 | https://en.wikipedia.org/wiki/Organogallium%20chemistry | Organogallium chemistry is the chemistry of organometallic compounds containing a carbon to gallium (Ga) chemical bond. Despite their high toxicity , organogallium compounds have some use in organic synthesis. The compound trimethylgallium is of some relevance to MOCVD as a precursor to gallium arsenide via its reaction with arsine at 700 °C:
Ga(CH3)3 + AsH3 → GaAs + 3CH4
Gallium trichloride is an important reagent for the introduction of gallium into organic compounds.
The main gallium oxidation state is Ga(III), as for the lighter group 13 elements (such as aluminium).
Organogallium(I) chemistry
Organometallic complexes of gallium(I) are significantly rarer than that of gallium(III). Some common species include arene-gallium(I) complexes and sterically hindered aryl gallium(I) complexes.
Organogallium(III) chemistry
Compounds of the type R3Ga are monomeric. Lewis acidity decreases in the order Al > Ga > In and as a result organogallium compounds do not form bridged dimers as organoaluminum compounds do. Organogallium compounds are also less reactive than organoaluminum compounds. They do form stable peroxides.
Organogallium compounds can be synthesized by transmetallation, for example the reaction of gallium metal with dimethylmercury:
2Ga + 3Me2Hg → 2Me3Ga + 3Hg
or via organolithium compounds or Grignards:
GaCl3 + 3MeMgBr → Me3Ga + 3MgBrCl
The electron-deficient nature of gallium can be removed by complex formation, for example
Me2GaCl + NH3 → [Me2Ga(NH3)Cl]+Cl−
Pi complex formation with alkynes is also known.
Organogallium compounds are reagents or intermediates in several classes of organic reactions:
Barbier-type reactions with elemental gallium, allylic substrates and carbonyl compounds
Carbometallation (carbogallation) reactions
See also
Organoindium chemistry
Organothallium chemistry
References
Gallium compounds
Organometallic compounds | Organogallium chemistry | [
"Chemistry"
] | 475 | [
"Organic compounds",
"Organometallic compounds",
"Organometallic chemistry",
"Inorganic compounds"
] |
23,282,391 | https://en.wikipedia.org/wiki/Actinium-225 | Actinium-225 (225Ac, Ac-225) is an isotope of actinium. It undergoes alpha decay to francium-221 with a half-life of 10 days, and is an intermediate decay product in the neptunium series (the decay chain starting at 237Np). Except for minuscule quantities arising from this decay chain in nature, 225Ac is entirely synthetic.
The decay properties of actinium-225 are favorable for usage in targeted alpha therapy (TAT); clinical trials have demonstrated the applicability of radiopharmaceuticals containing 225Ac to treat various types of cancer. However, the scarcity of this isotope, which must be synthesized artificially, limits its potential applications.
Decay and occurrence
Actinium-225 has a half-life of 10 days and decays by alpha emission. It is part of the neptunium series, for it arises as a decay product of neptunium-237 and its daughters such as uranium-233 and thorium-229. It is the last nuclide in the chain with a half-life over a day until the penultimate product, bismuth-209 (half-life about 2×10^19 years). The final decay product of 225Ac is stable 205Tl.
As a member of the neptunium series, it does not occur in nature except as a product of trace quantities of 237Np and its daughters formed by neutron capture reactions on primordial 232Th and 238U. It is much rarer than 227Ac and 228Ac, which respectively occur in the decay chains of uranium-235 and thorium-232. Its abundance was estimated as less than relative to 232Th and around relative to 230Th in secular equilibrium.
Discovery
Actinium-225 was discovered in 1947 as part of the hitherto unknown neptunium series, which was populated by the synthesis of 233U. A team of physicists from Argonne National Laboratory led by F. Hagemann initially reported the discovery of 225Ac and identified its 10-day half-life. Independently, a Canadian group led by A. C. English identified the same decay scheme; both papers were published in the same issue of Physical Review.
Production
As 225Ac does not occur in any appreciable quantities in nature, it must be synthesized in specialized nuclear reactors. The majority of 225Ac results from the alpha decay of 229Th, but this supply is limited because 229Th, with a half-life of 7340 years, decays only slowly. It is also possible to breed 225Ac from radium-226 via the 226Ra(p,2n) reaction. The potential to populate 225Ac using a 226Ra target was first demonstrated in 2005, though the production and handling of 226Ra are difficult because of the cost of extraction and the hazards of decay products such as radon-222.
Alternatively, 225Ac can be produced in spallation reactions on a 232Th target irradiated with high-energy proton beams. Current techniques enable the production of millicurie quantities of 225Ac; however, it must then be separated from other reaction products. This is done by allowing some of the shorter-lived nuclides to decay; actinium isotopes are then chemically purified in hot cells and 225Ac is concentrated. Special care must be taken to avoid contamination with the longer-lived beta-emitting actinium-227.
For decades, most 225Ac was produced in one facility—the Oak Ridge National Laboratory in Tennessee—further reducing this isotope's availability even with smaller contributions from other laboratories. Additional 225Ac is now produced from 232Th at Los Alamos National Laboratory and Brookhaven National Laboratory. The TRIUMF facility and Canadian Nuclear Laboratories have formed a strategic partnership around the commercial production of actinium-225.
The low supply of 225Ac limits its use in research and cancer treatment. It is estimated that the current supply of 225Ac only allows about a thousand cancer treatments per year.
Applications
Alpha emitters such as actinium-225 are favored in cancer treatment because of the short range (a few cell diameters) of alpha particles in tissue and their high energy, rendering them highly effective in targeting and killing cancer cells—specifically, alpha particles are more effective at breaking DNA strands. The 10-day half-life of 225Ac is long enough to facilitate treatment, but short enough that little remains in the body months after treatment. This contrasts with the similarly investigated 213Bi, whose 46-minute half-life necessitates in situ generation and immediate use. Additionally, 225Ac has a median lethal dose several orders of magnitude greater than 213Bi because of its longer half-life and subsequent alpha emissions from its decay products. Each decay of 225Ac to 209Bi nets four high-energy alpha particles, greatly increasing its potency.
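The role of the 10-day half-life can be made concrete with elementary decay arithmetic (an illustration; the time points below are arbitrary):

```python
# Fraction of 225Ac remaining after t days, N(t)/N0 = 2**(-t / T_half).
T_HALF_DAYS = 10.0

for t in (10, 30, 60, 90):
    remaining = 2 ** (-t / T_HALF_DAYS)
    print(f"after {t:3d} days: {remaining:.6f} of the initial activity")
```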
Despite its limited availability, several clinical trials have been completed, demonstrating the effectiveness of 225Ac in targeted alpha therapy. Complexes including 225Ac—such as antibodies labeled with 225Ac—have been tested to target various types of cancer, including leukemia, prostate carcinoma, and breast carcinoma in humans. For example, one experimental 225Ac-based drug has shown effectiveness against acute myeloid leukemia without harming the patient. Further clinical trials of other drugs are underway.
See also
Isotopes of actinium
Radium-223
References
Actinium-225
Medical isotopes
Experimental cancer drugs | Actinium-225 | [
"Chemistry"
] | 1,112 | [
"Chemicals in medicine",
"Isotopes of actinium",
"Isotopes",
"Medical isotopes"
] |
23,284,390 | https://en.wikipedia.org/wiki/Indium-111 | Indium-111 (111In) is a radioactive isotope of indium (In). It decays by electron capture to stable cadmium-111 with a half-life of 2.8 days.
Indium-111 chloride (111InCl) solution is produced by proton irradiation of a cadmium target (112Cd(p,2n) or 111Cd(p,n)) in a cyclotron, as recommended by International Atomic Energy Agency (IAEA). The former method is more commonly used as it results in a high level of radionuclide purity.
Indium-111 is commonly used in nuclear medicine diagnostic imaging by radiolabeling targeted molecules or cells. During its radioactive decay, it emits low energy gamma (γ) photons which can be imaged using planar or single-photon emission computed tomography (SPECT) gamma cameras (primary energies (ε) of 171.3 keV (91%) and 245.4 keV (94%)).
Uses in nuclear medicine
When formulated as an 111InCl solution, it can be used to bind antibodies, peptides, or other molecularly targeted proteins or other molecules, typically using a chelate to bind the radionuclide (in this case 111In) to the targeting molecule during the radiosynthesis/radiolabeling process, which is tailored to the desired product.
111In labeled antibodies
Ibritumomab Tiuxetan; Zevalin - For dosimetry estimates prior to 90Y immunotherapy for lymphoma
111In ProstaScint — PSMA antibody imaging of prostate cancer
111In labeled peptides
111In pentetreotide (including 111In diethylenetriaminepentaacetic acid (DTPA)-octreotide and Octreoscan)
Octreotide is a somatostatin receptor inhibitor pharmaceutical which binds with high affinity to somatostatin receptors 2 and 5, interfering with normal receptor function. It is used as a drug to treat several neuroendocrine tumors in which somatostatin receptors are overexpressed or overactive. Examples include:
Sympathoadrenal system tumors: pheochromocytoma, neuroblastoma, ganglioneuroma, paraganglioma
Gastroenteropancreatic (GEP) tumors: carcinoid, insulinoma
Medullary thyroid cancer, pituitary adenoma, small cell lung cancer
111In pentetreotide imaging can identify the presence and level of somatostatin receptor 2 and 5 expression, the extent of disease, and the response to therapy.
111In can also be formulated in the chemical form 111In oxyquinoline (oxine) for labeling blood cells and components:
Platelets for thrombus detection
Leukocytes for localization of inflammation and abscesses, detection and monitoring of osteomyelitis, detection of mycotic aneurysms and vascular graft and shunt infections, and determination of leukocyte kinetics.
See also
Isotopes of indium
Indium white blood cell scan
References
Indium-111
Medical isotopes | Indium-111 | [
"Chemistry"
] | 649 | [
"Chemicals in medicine",
"Isotopes of indium",
"Isotopes",
"Medical isotopes"
] |
23,285,206 | https://en.wikipedia.org/wiki/Rubidium-82 | Rubidium-82 (82Rb) is a radioactive isotope of rubidium. 82Rb is widely used in myocardial perfusion imaging. This isotope undergoes rapid uptake by myocardiocytes, which makes it a valuable tool for identifying myocardial ischemia in Positron Emission Tomography (PET) imaging. 82Rb is used in the pharmaceutical industry and is marketed as Rubidium-82 chloride under the trade names RUBY-FILL and CardioGen-82.
History
In 1953, it was discovered that rubidium carried a biological activity that was comparable to potassium. In 1959, preclinical trials in dogs showed that myocardial uptake of this radionuclide was directly proportional to myocardial blood flow. In 1979, Yano et al. compared several ion-exchange columns to be used in an automated 82Sr/82Rb generator for clinical testing. Around 1980, pre-clinical trials began using 82Rb in PET. In 1982, Selwyn et al. examined the relation between myocardial perfusion and rubidium-82 uptake during acute ischemia in six dogs after coronary stenosis and in five volunteers and five patients with coronary artery disease. Myocardial tomograms, recorded at rest and after exercise in the volunteers, showed homogeneous uptake in reproducible and repeatable scans. Rubidium-82 has shown considerable accuracy, comparable to that of 99mTc-SPECT. In 1989, the FDA approved the 82Rb/82Sr generator for commercial use in the U.S. With increased 82Sr production capabilities, the use of 82Rb has increased over the last 10 years and is now approved by several health authorities worldwide.
Production
Rubidium-82 is produced by electron capture of its parent nucleus, strontium-82. The generator contains accelerator-produced 82Sr adsorbed on stannic oxide in a lead-shielded column and provides a means for obtaining sterile nonpyrogenic solutions of rubidium chloride (a halide salt form suitable for injection). The amount (millicuries) of 82Rb obtained in each elution will depend on the potency of the generator. When eluted at a rate of 50 mL/minute, each generator eluate at the end of elution should contain not more than 0.02 microcuries of 82Sr and not more than 0.2 microcuries of 85Sr per millicurie of 82RbCl injection, and not more than 1 microgram of tin per mL of eluate.
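These release limits lend themselves to a simple acceptance check. A hedged sketch (the function and all measured values below are invented for illustration; only the three limit ratios come from the text above):

```python
def eluate_within_limits(sr82_uCi, sr85_uCi, rb82_mCi, tin_ug, volume_mL):
    """True if a generator eluate meets the quoted release limits."""
    return (sr82_uCi / rb82_mCi <= 0.02     # 82Sr per mCi of 82Rb
            and sr85_uCi / rb82_mCi <= 0.2  # 85Sr per mCi of 82Rb
            and tin_ug / volume_mL <= 1.0)  # tin per mL of eluate

# Hypothetical assay of one elution:
print(eluate_within_limits(sr82_uCi=0.5, sr85_uCi=4.0,
                           rb82_mCi=40.0, tin_ug=20.0, volume_mL=50.0))
```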
Pharmacology
Mechanism of action
82Rb has activity very similar to that of a potassium ion (K+). Once in the myocardium, it is an active participant in the sodium-potassium exchange pump of cells. It is rapidly extracted by the myocardium proportional to blood flow. Its radioactivity is increased in viable myocardial cells reflecting cellular retention, while the tracer is cleared rapidly from necrotic or infarcted tissue.
Pharmacodynamics
When tested clinically, 82Rb is seen in the myocardium within the first minute of intravenous injection. When the myocardium is affected by ischemia or infarction, the affected areas will be visualized within 2–7 minutes; they appear photon-deficient on the PET scan. 82Rb passes through the entire body on the first pass of circulation and shows visible uptake in organs such as the kidney, liver, spleen and lung, due to the high vascularity of those organs.
Use in PET
Rubidium is rapidly extracted from the blood and is taken up by the myocardium in relation to myocardial perfusion, which requires energy for myocardial uptake through Na+/K+-ATPase, similar to thallium-201. 82Rb is capable of producing a clear perfusion image similar to single photon emission computed tomography (SPECT)-MPI because it is an extractable tracer. The short half-life requires rapid image acquisition shortly after tracer administration, which reduces total study time. The short half-life also means less radiation exposure for the patient. A standard visual perfusion imaging assessment is based on defining regional uptake relative to the maximum uptake in the myocardium. Importantly, 82Rb PET also seems to provide prognostic value in patients who are obese and whose diagnosis remains uncertain after SPECT-MPI.
82Rb myocardial blood flow quantification is expected to improve the detection of multivessel coronary heart disease. 82Rb/PET is a valuable tool in ischemia identification. Myocardial ischemia is an inadequate blood supply to the heart. 82Rb/PET can be used to quantify the myocardial flow reserve in the ventricles, which then allows the medical professional to make an accurate diagnosis and prognosis for the patient. Various vasoreactivity studies are made possible through 82Rb/PET imaging due to its quantification of myocardial blood flow (MBF) during rest and pharmacological stress, commonly performed with adenosine; it is possible to quantify stress in patients by the same reasoning. It has also recently been shown that neuroendocrine tumor metastases can be imaged with 82Rb.
Advantages
One of the main advantages of 82Rb is its availability in nuclear medicine departments. This isotope is available after a 10-minute elution of an 82Sr column; this makes it possible to produce enough doses to inject about 10–15 patients a day. Another advantage of 82Rb is its high count density in myocardial tissue. 82Rb/PET has shown greater uniformity and count density than 99mTc-SPECT when examining the myocardium, resulting in higher interpretive confidence and greater accuracy. It allows for quantification of coronary flow reserve and myocardial blood flow. 82Rb also has the advantage of a very short half-life, which results in much lower radiation exposure for the patient. This is especially important as the use of myocardial imaging increases in the medical field. For patients, 82Rb is beneficial when the patient is obese or physically unable to perform a stress test. It also has side effects limited to minor irritation around the injection site.
Limitations
A serious limitation of 82Rb is its cost. Currently 99mTc costs on average $70 per dose, with two doses needed, whereas 82Rb costs about $250 a dose. Another limitation of this isotope is that it needs a dedicated PET/CT camera; in places like Europe, where the 82Sr/82Rb generator is yet to be approved, such a camera can be hard to find.
References
Further reading
Rubidium
Isotopes of rubidium
Positron emitters
Cardiac imaging
3D nuclear medical imaging
PET radiotracers
Medical isotopes | Rubidium-82 | [
"Chemistry"
] | 1,428 | [
"Medicinal radiochemistry",
"Isotopes of rubidium",
"Isotopes",
"PET radiotracers",
"Chemicals in medicine",
"Medical isotopes"
] |
33,391,213 | https://en.wikipedia.org/wiki/Electrochlorination | Electrochlorination is the process of producing hypochlorite by passing electric current through salt water. This disinfects the water and makes it safe for human use, such as for drinking water or swimming pools.
Process
The process of electrochlorination is a simple application based on the chloralkali process (in an unpartitioned cell).
It is the electrolysis of saltwater to produce a chlorinated solution. The first step is removing any solids from the saltwater. Next, the saltwater streams through an electrolyzer cell's channel of decreasing thickness. One side of the channel is a cathode, the other an anode. When a low-voltage DC current is applied, electrolysis takes place, producing sodium hypochlorite and hydrogen gas (H2). The solution travels to a tank that separates out the hydrogen gas based on its low density. Only water and sodium chloride are used. The simplified chemical reaction is:
NaCl + H2O + energy → NaOCl + H2
That is, energy is added to sodium chloride (table salt) in water, producing sodium hypochlorite and hydrogen gas.
Because the reaction takes place in an unpartitioned cell and NaOH is present in the same solution as the Cl2:
2 NaCl + 2 H2O → 2 NaOH + H2 + Cl2
any Cl2 disproportionates to hypochlorite and chloride
Cl2 + 2 NaOH → NaCl + NaClO + H2O
resulting in a hypochlorite solution.
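The production rate can be estimated with Faraday's law of electrolysis (a back-of-the-envelope sketch, not from the article; it assumes two electrons transferred per hypochlorite ion and an ideal 100% current efficiency, which real cells do not reach):

```python
F = 96485.0        # Faraday constant, C/mol
M_NAOCL = 74.44    # molar mass of NaOCl, g/mol

def naocl_grams(current_A, hours, efficiency=1.0):
    """Ideal NaOCl yield for a given current and run time."""
    charge_C = current_A * hours * 3600.0
    moles = efficiency * charge_C / (2.0 * F)   # 2 e- per NaOCl
    return moles * M_NAOCL

print(f"{naocl_grams(100.0, 1.0):.0f} g NaOCl per 100 A·h (ideal)")  # ~139 g
```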
Seawater
Companies may use seawater for this process due to its low cost. The water used is usually brackish water or brine (i.e. a solution with >0.5% salinity). In these cases, additional contaminant chemicals may be present in the water feed. The low-voltage DC current still performs the electrochlorination; the excess chemicals are left untouched and can easily be discarded.
Products
The product of the process, sodium hypochlorite, provides 0.7% to 1% chlorine. Anything below a concentration of 1% chlorine is considered a non-hazardous chemical, although still a very effective disinfectant. The sodium hypochlorite produced is in the range of pH 6–7.5, relatively neutral with regard to acidity or basicity. In that pH range, the sodium hypochlorite is relatively stable.
Applications
Drinking water
Water treatment plants have evolved their technology over the years to tackle health threats due to water contamination, e.g. cholera, typhoid, and dysentery. Treatment plants began to implement chlorination. Chlorination virtually wiped out both the spread and initial contamination of these diseases, and did so in a way that earned it the title of "probably the most significant public health advance of the millennium" from Life magazine.
Electrochlorination is the next step in the evolution of this process. It chlorinates drinking water without producing environmental toxins. Unlike other chlorination techniques, electrochlorination generates no sludge or by-products other than hydrogen, which must be managed safely. It is safer for the operators of the chlorinators, as there is no handling of chlorine gas, which is highly toxic and corrosive. A risk assessment is required, as the hydrogen released is flammable and explosive.
Swimming pools
When a swimmer enters a pool, they add up to one billion organisms to the water. Chlorination kills all organisms harmful to swimmers such as those that cause ear infections and athlete's foot. The advantages of electrochlorination in this process are as follows:
Not irritating to skin or soft tissue.
Active in small concentrations.
Longer chemical lifespan, and therefore less frequent replacement necessary.
Easily measurable.
References
Electrochemistry
Chlorine | Electrochlorination | [
"Chemistry"
] | 801 | [
"Electrochemistry"
] |
33,395,921 | https://en.wikipedia.org/wiki/Haemadin | In molecular biology, haemadin is an anticoagulant peptide synthesised by the Indian leech, Haemadipsa sylvestris.
It adopts a secondary structure consisting of five short beta-strands (beta1-beta5), which are arranged in two antiparallel distorted sheets formed by strands beta1-beta4-beta5 and beta2-beta3 facing each other. This beta-sandwich is stabilised by six enclosed cysteines arranged in a [1-2, 3-5, 4-6] disulfide pairing resulting in a disulfide-rich hydrophobic core that is largely inaccessible to bulk solvent. The close proximity of disulfide bonds [3-5] and [4-6] organises haemadin into four distinct loops. The N-terminal segment of this domain binds to the active site of thrombin, inhibiting it.
Haemadin (MEROPS I14.002) belongs to a superfamily (MEROPS IM) of protease inhibitors that also includes hirudin (MEROPS I14.001) and antistasin (MEROPS I15).
References
Protein domains | Haemadin | [
"Biology"
] | 245 | [
"Protein domains",
"Protein classification"
] |
33,397,867 | https://en.wikipedia.org/wiki/Haemagglutination%20activity%20domain | In molecular biology, the haemagglutination activity domain is a conserved protein domain found near the N terminus of a number of large, repetitive bacterial proteins, including many proteins of over 2500 amino acids. A number of the members of this family have been designated adhesins, filamentous haemagglutinins, haem/haemopexin-binding protein, etc. Members generally have a signal sequence, then an intervening region, then the region described in this entry. Following this region, proteins typically have regions rich in repeats but may show no homology between the repeats of one member and the repeats of another. This domain is suggested to be a carbohydrate-dependent haemagglutination activity site.
In Bordetella pertussis, the infectious agent in childhood whooping cough, filamentous haemagglutinin (FHA) is a surface-exposed and secreted protein that acts as a major virulence attachment factor, functioning as both a primary adhesin and an immunomodulator that binds the bacterium to cells of the respiratory epithelium. The FHA molecule has a globular head that consists of two domains: a shaft and a flexible tail. Its sequence contains two regions of tandem 19-residue repeats, where the repeat motif consists of short beta-strands separated by beta-turns.
References
Protein domains | Haemagglutination activity domain | [
"Biology"
] | 291 | [
"Protein domains",
"Protein classification"
] |
33,404,658 | https://en.wikipedia.org/wiki/Treehouse%20of%20Horror%20XXIII | "Treehouse of Horror XXIII" is the second episode of the twenty-fourth season of the American animated television series The Simpsons. The episode was directed by Steven Dean Moore and written by David Mandel and Brian Kelley. It first aired on the Fox network in the United States on October 7, 2012. In the United Kingdom and Ireland, the episode aired on Sky 1 on March 24, 2013 with 1,312,000 viewers, making it the most watched program that week.
In this three-part anthology episode, in the first installment a black hole appears in Springfield. In the second installment demonic activity occurs in the Simpsons' house. In the third installment Bart travels back in time, which interferes with Homer and Marge's past. Jon Lovitz guest stars as Artie Ziff. The episode received positive reviews. Animator Paul Wee won an Emmy Award for Individual Achievement in Animation for this episode, which also received an Emmy nomination for Outstanding Animated Program.
Plot
Opening sequence
At the height of the Maya civilization, in the city of Chichen Itza, a sacrifice is about to take place to prevent the end of the world from happening at the end of the 13th Baktun and the Mayan Calendar. A Mayan Homer, who has been fattened up, showing that he is ready to be sacrificed, hears about it for the first time (as he did not pay attention during orientation) and attempts to back out to no avail. However, his wife, a Mayan Marge, tricks a priest, a Mayan Moe, into getting himself sacrificed instead by promising him sex. After the sacrifice, a Mayan Professor Frink confirms that the world will end after the 13th Baktun, which, accounting for the Gregorian calendar and the birth of Jesus, puts the end of days in the year 2012 (with the Mayan Mayor Quimby placing the blame on President Barack Obama).
In the present, Homer encounters three Mayan stone gods, mistaking them for trick-or-treaters. One of them crushes Homer underfoot, and the second jumps on Flanders' house. The stone trio start to wreak havoc on Springfield and the Earth: one stone god throws fireballs at Springfield City Hall and hurls Lard Lad's donut at a UFO, and they then move on to popular landmarks, crashing the Eiffel Tower into Big Ben, ripping up the Great Wall of China and sinking it into a river of lava, making George Washington's head kiss Abraham Lincoln's head on Mount Rushmore, causing rivers of lava to appear, and splitting the Earth into large fissures. Once their destruction is done, they high-five to celebrate the havoc they have wreaked, then fly off, leaving the Earth to explode, replaced by what appears to be the Earth's infrastructure or blood. The text reads the title of the episode.
The Greatest Story Ever Holed
The citizens of Springfield gather to witness the activation of the Springfield Particle Accelerator; they originally wanted to use the money to build a new baseball stadium, but Lisa convinced them otherwise. Professor Frink activates the machine and it works, but nothing exciting happens, and everyone blames Lisa for her suggestion. When everyone is gone, two particles collide with each other and create a small black hole which floats off. Lisa finds it, and after it sucks up Ralph and Nelson, she takes it home so that it will not cause any more trouble. The Simpsons put it in the basement and Lisa warns them not to throw anything in it or otherwise it will grow bigger. Despite the warning, Homer, Bart and Marge use it as a trash disposal, with Santa's Little Helper using it off-camera to get rid of Snowball II, and Homer even opens a business allowing people to throw their junk into it. The black hole becomes huge and consumes everything in sight. The only person who is not sucked in is Maggie, whose pacifier flies into the black hole, inexplicably stopping it. Meanwhile, all of Springfield has been warped to an alternate universe, where aliens worship their trash.
Un-normal Activity
In a Paranormal Activity homage, when strange events occur at the Simpson house, Homer sets up cameras to photograph what is haunting them. The culprit is revealed to be a Moe-like demon with whom Marge made a deal to save her sisters when they summoned the demon as part of a Satanic ritual. As part of the deal, the demon would return 30 years later to take Marge's favorite child as payment (which turns out to be Maggie, much to Lisa's shock). Homer manages to convince the demon to relinquish the bargain in return for Homer to reluctantly engage in three-way sex with him and another demon. After learning that the safe word is cinnamon, Homer throws his robe over the camera saying he'd like to try something and the Moe-like demon is heard yelling 'cinnamon'.
Bart and Homer's Excellent Adventure
In a parody of Back to the Future, Bart travels back to 1974 in Professor Frink's time machine to buy a comic book for 25 cents instead of the current $200 price at the Android's Dungeon. He then finds Homer in high school, just moments before he meets Marge for the first time (as seen in the season two episode "The Way We Was"). Before Bart returns to 2012, he selfishly tells Marge (who is already angry at teenage Homer for strangling Bart and for his constant demands that she be his prom date) never to marry Homer. When Bart returns to 2012, he finds that Artie Ziff is now his father and the family is rich and successful, to the point where Nelson Muntz is now hired as Bart's butler and personal punching bag. 1974 Homer, who stowed away in the trunk of the time machine, finds out about Marge and meets 2012 Homer, who wants Marge back. The two summon every time incarnation of Homer (dubbed "The United Federation of Homers Throughout History") to beat up Artie. Though the Homers lose badly despite greatly outnumbering Artie, they wind up winning over Marge, who takes pity on the beaten Homers and lets all of them live with her.
Production
This is the first episode of The Simpsons co-written by David Mandel. Executive producer Al Jean stated that the first act about the black hole ties in with the discovery of the Higgs boson in the summer of 2012. A preview of this segment was shown at San Diego Comic-Con in 2012.
Jon Lovitz reprised his role as Artie Ziff in the Back to the Future parody.
Reception
Ratings
The episode received a 3.1 in the 18-49 demographic, coming second in the Animation Domination lineup behind Family Guy, which had a 3.4. It earned a total viewership of 6.57 million, also coming in second behind Family Guy, which had 6.70 million viewers, but beating American Dad!, Bob's Burgers, and The Cleveland Show.
Critical reception
Robert David Sullivan of The A.V. Club gave the episode a B and gave a fairly positive review, commenting, "In the early years of The Simpsons, the annual 'Treehouse Of Horror' outing was a fun contrast to most of the show’s episodes. There was no warmth, no subtlety, no lessons learned, and no attempt at a coherent story—just a lot of gross-out humor and a chance to see Springfield stretched even further past reality. Now that entire show has adopted these qualities, the Halloween tradition doesn't seem as special. But, like the couch gag at the start of each episode, 'Treehouse Of Horror' tempts us with the chance to see something that doesn't feel borrowed (and a bit dumbed down) from the show’s glory years."
Teresa Lopez of TV Fanatic gave the episode 4 out of 5 stars. She liked the final two acts, especially the look at the Ziff family in the final act. She thought the first act was the weakest because of the humor coming from aliens worshiping garbage.
Screen Rant called it the best episode of the 24th season.
Awards and nominations
Animator Paul Wee won the Primetime Emmy Award for Outstanding Individual Achievement in Animation at the 65th Primetime Creative Arts Emmy Awards for this episode. The episode also was nominated for the Primetime Emmy Award for Outstanding Animated Program at the same award ceremony.
Writers David Mandel & Brian Kelley received a nomination for the Writers Guild of America Award for Outstanding Writing in Animation at the 65th Writers Guild of America Awards for their script to this episode.
Composer Alf Clausen was nominated for the Annie Award for Outstanding Achievement for Music in an Animated Television/Broadcast Production at the 40th Annie Awards for this episode.
References
External links
2012 American television episodes
The Simpsons season 24 episodes
Treehouse of Horror
Television episodes written by David Mandel
Television episodes about curses
Television episodes about time travel
Fiction about black holes
Television episodes about demons
Halloween television episodes
2012 phenomenon
Television episodes set in the 1970s
Television episodes set in the 2010s
Fiction set in 1974
Fiction set in 2012
Television episodes written by Brian Kelley (writer)
Television episodes directed by Steven Dean Moore | Treehouse of Horror XXIII | [
"Physics"
] | 1,881 | [
"Black holes",
"Unsolved problems in physics",
"Fiction about black holes"
] |
38,907,720 | https://en.wikipedia.org/wiki/Autowave | Autowaves are self-supporting non-linear waves in active media (i.e. those that provide distributed energy sources). The term is generally used in processes where the waves carry relatively low energy, which is necessary for synchronization or switching the active medium.
Introduction
Relevance and significance
In 1980, the Soviet scientists G.R. Ivanitsky, V.I. Krinsky, A.N. Zaikin, A.M. Zhabotinsky and B.P. Belousov became winners of the highest state award of the USSR, the Lenin Prize, "for the discovery of a new class of autowave processes and the study of them in disturbance of stability of the distributed excitable systems."
A brief history of autowave researches
The first to actively study self-oscillations was Academician A.A. Andronov, who introduced the term "auto-oscillations" into Russian terminology in 1928. His followers from Lobachevsky University subsequently contributed greatly to the development of autowave theory.
The simplest autowave equations, describing combustion processes, were studied by A.N. Kolmogorov, I.G. Petrovsky and N.S. Piskunov in 1937, as well as by Ya.B. Zel'dovich and D.A. Frank-Kamenetsky in 1938.
The classical axiomatic model with autowaves in myocardium was published in 1946 by Norbert Wiener and Arturo Rosenblueth.
During 1970–80, major efforts to study autowaves were concentrated in the Institute of Biological Physics of the USSR Academy of Sciences, located in the suburban town of Pushchino, near Moscow. It was here, under the guidance of V.I. Krinsky, that such now world-famous experts in the field of autowave research as A.V. Panfilov, I.R. Efimov, R.R. Aliev, K.I. Agladze, O.A. Mornev and M.A. Tsyganov were educated and trained. V.V. Biktashev, Yu.E. Elkin and A.V. Moskalenko gained their experience with autowave theory also in Pushchino, in the neighbouring Institute of Mathematical Problems of Biology, under the guidance of E.E. Shnoll.
The term "autowaves" was proposed, probably, on the analogy of previously "auto-oscillations".
Almost immediately after the dissolution of the Soviet Union, many of these Russian scientists left their native country to work in foreign institutions, where they still continue their studies of autowaves. In particular, I.R. Efimov is developing the theory of the virtual electrode, which describes some effects occurring during defibrillation.
Among other notable scientists engaged in these investigations are A.N. Zaikin and E.E. Shnoll (autowaves and bifurcation memory in the blood coagulation system); A.Yu. Loskutov (general autowave theory as well as dynamic chaos in autowaves); V.G. Yakhno (general autowave theory as well as connections between autowaves and the process of thinking); K.I. Agladze (autowaves in chemical media); V.N. Biktashev (general autowave theory as well as different sorts of autowave drift); O.A. Mornev (general autowave theory); M.A. Tsyganov (the role of autowaves in population dynamics); and Yu.E. Elkin and A.V. Moskalenko (bifurcation memory in a model of cardiac tissue).
A huge role in the study of autowave models of cardiac tissue belongs to Denis Noble and members of his team from the University of Oxford.
The basic definitions
One of the first definitions of autowaves was as follows:
Unlike linear waves (such as sound waves, electromagnetic waves and others), which are inherent in conservative systems and mathematically described by linear second-order hyperbolic equations (wave equations), the dynamics of an autowave, in terms of differential equations, can be described by a parabolic equation with a nonlinear free term of a special form.
The concrete form of the free term is extremely important: commonly, it has the form of an N-shaped dependence on the state variable. In this sense, the system of equations known as the Aliev–Panfilov model is a very exotic example, because its free term has a very complex form of two intersecting parabolas, moreover crossed with two straight lines, resulting in the more pronounced nonlinear properties of this model.
Autowaves are an example of self-sustaining wave processes in extensive nonlinear systems containing distributed energy sources. For simple autowaves, the period, wavelength, propagation speed, amplitude, and some other characteristics are determined solely by the local properties of the medium. However, in the 21st century, researchers began to discover a growing number of examples of autowave solutions in which this "classical" principle is violated.
(See also the general information in the literature.)
The simplest examples
The simplest model of an autowave is a row of dominoes falling one after another when the outermost one is toppled (the so-called "domino effect"). This is an example of a switching wave.
As another example of autowaves, imagine that you stand on a field and set fire to the grass. While the temperature is below the threshold, the grass will not catch fire. Upon reaching the threshold temperature (the autoignition temperature), the combustion process begins, with the release of heat sufficient to ignite the nearest areas. The result is a combustion front, which spreads through the field. It can be said in such cases that an autowave arose, which is one of the results of self-organization in non-equilibrium thermodynamic systems. After some time, new grass replaces the burnt grass, and the field again acquires the ability to ignite. This is an example of an excitation wave.
There are a great number of other natural objects that are also considered autowave processes: oscillatory chemical reactions in active media (e.g., the Belousov–Zhabotinsky reaction), the spread of excitation pulses along nerve fibres, wave chemical signalling in the colonies of certain microorganisms, autowaves in ferroelectric and semiconductor films, population waves, the spread of epidemics and of genes, and many other phenomena.
Nerve impulses, which serve as a typical example of autowaves in an active medium with recovery, were studied as far back as 1850 by Hermann von Helmholtz. The properties of nerve impulses that are typical for the simplest self-wave solutions (universal shape and amplitude, independent of the initial conditions, and annihilation under collisions) were ascertained in the 1920s and 1930s.
Consider a 2D active medium consisting of elements, each of which can be in one of three different states: rest, excitation and refractoriness. In the absence of external influence, the elements are at rest. As a result of an influence upon it, when the concentration of the activator reaches the threshold, an element switches to the excited state, acquiring the ability to excite the neighbouring elements. Some time after the excitation, the element switches to a refractory state, in which it cannot be excited. The element then returns to its initial state of rest, regaining the ability to transform into the excited state.
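This rest/excited/refractory cycle is easy to caricature in code. The sketch below is a minimal three-state cellular automaton of Greenberg–Hastings type (a model class not named in this article, though closely related to the axiomatic Wiener–Rosenblueth approach mentioned below; grid size and run length are arbitrary); a point stimulus produces an expanding excitation front:

```python
import numpy as np

REST, EXCITED, REFRACTORY = 0, 1, 2
grid = np.zeros((40, 40), dtype=int)
grid[20, 20] = EXCITED                        # point stimulus

def step(g):
    # Count excited von Neumann neighbours (periodic boundaries via roll).
    nb = sum(np.roll(g == EXCITED, s, axis=ax) for s in (1, -1) for ax in (0, 1))
    return np.select(
        [g == EXCITED, g == REFRACTORY, (g == REST) & (nb > 0)],
        [REFRACTORY,   REST,            EXCITED],
        default=REST)

for _ in range(15):
    grid = step(grid)
print((grid == EXCITED).sum())   # a diamond-shaped ring of excited cells
```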
Any "classical" excitation wave moves in an excitable medium without attenuation, maintaining its shape and amplitude constant. As it passes, the energy loss (dissipation) is completely offset by the energy input from the elements of the active medium. The leading front of an autowave (the transition from rest to a state of excitation) is usually very small: for example, the ratio of the leading front duration to the entire duration of the pulse for a myocardium sample is about 1:330.
Unique opportunities to study autowave processes in two- and three-dimensional active media with very different kinetics are provided by methods of mathematical modelling using computers. For computer simulation of autowaves, one uses a generalized Wiener–Rosenblueth model, as well as a large number of other models, among which a special place is occupied by the FitzHugh–Nagumo model (the simplest model of an active medium, and its various versions) and the Hodgkin–Huxley model (nerve impulse). There are also many autowave myocardial models: the Beeler–Reuter model, several Noble models (developed by Denis Noble), the Aliev–Panfilov model, the Fenton–Karma model, etc.
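As a minimal illustration of such modelling, here is a hedged one-dimensional FitzHugh–Nagumo cable (the parameter values and the explicit Euler scheme are illustrative choices, not taken from the article); a super-threshold stimulus at one end launches a pulse that propagates without attenuation:

```python
import numpy as np

# du/dt = u - u**3/3 - v + D * u_xx ;  dv/dt = eps * (u + beta - gamma * v)
n, dx, dt = 400, 0.5, 0.02
D, eps, beta, gamma = 1.0, 0.08, 0.7, 0.8

u = -1.2 * np.ones(n)          # approximate resting state
v = -0.62 * np.ones(n)
u[:10] = 1.5                   # super-threshold stimulus at the left end

for _ in range(3000):          # integrate to t = 60
    lap = (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2
    lap[0] = lap[-1] = 0.0     # crude no-flux ends
    u = u + dt * (u - u**3 / 3 - v + D * lap)
    v = v + dt * eps * (u + beta - gamma * v)

print(int(np.argmax(u)))       # pulse peak index, well away from the stimulus
```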
Basic properties of autowaves
It was also proven that the simplest autowave regimes should be common to all systems of differential equations of any complexity that describe a particular active medium, because such a system can be simplified to two differential equations.
Main known autowave objects
First of all, the elements of active media can be of at least three very different types: self-exciting, excitable, and trigger (or bistable) regimes. Accordingly, there are three types of homogeneous active media composed of these elements.
A bistable element has two stable stationary states, transitions between which occur when external influence exceeds a certain threshold. In media of such elements, switching waves arise, which switch the medium from one of its states to the other. For instance, a classic case of such a switching autowave, perhaps the simplest autowave phenomenon, is falling dominoes (the example already given). Another simple example of a bistable medium is burning paper: the switching wave propagates in the form of a flame, switching the paper from its normal state to ashes.
An excitable element has only one stable stationary state. External influence over a threshold level can bring such an element out of its stationary state and perform an evolution before the element will return again to its stationary state. During such evolution, the active element can affect the adjacent elements and, in turn, lead them out of the stationary state too. As a result, the excitation wave propagates in this medium. This is the most common form of autowaves in biological media, such as nervous tissue, or the myocardium.
A self-oscillating element has no stationary states and continually performs stable oscillations of some fixed form, amplitude and frequency. External influence can disturb these oscillations. After some relaxation time, all their characteristics except the phase return to their stable values, but the phase can be changed. As a result, phase waves spread in a medium of such elements. Such phase waves can be observed in electro-garlands or in certain chemical media. An example of a self-oscillating medium is the SA node in the heart, in which excitation pulses arise spontaneously.
It can be clearly seen from the phase portrait of the basic system of equations describing the active medium that a significant difference between these three types of behaviour of an active medium is caused by the number and position of its singular points. The shapes of autowaves observed in reality can be very similar to each other, and it can therefore be difficult to assess the type of element from the form of the excitation pulse alone.
In addition, the autowave phenomena that can be observed and investigated depend greatly on the geometrical and topological peculiarities of the active medium.
One-dimensional autowaves
One-dimensional cases include autowave spread in a cable and spread in a ring, with the latter mode considered as a limiting case of a rotating wave in a two-dimensional active medium, while the former is considered as the spread of an autowave in a ring of zero curvature (i.e., of infinite radius).
Two-dimensional autowaves
A number of autowave sources are known in two-dimensional active media. At least five types of re-entry are distinguished, among them running around a ring, the spiral wave, the reverberator (i.e., the two-dimensional autowave vortex) and fibrillation. The literature identifies two types of sources of concentric autowaves in 2D active media: pacemakers and leading centres. Both leading centres and reverberators are interesting because they are not tied to the structure of the medium and can appear and disappear in its different parts. Areas of increased automatism may also be a source of autowaves. Three different types of increased automatism are known now:
induced automatism
trigger automatism with the mechanism of early postdepolarisation
trigger automatism with the mechanism of late postdepolarisation.
In addition about 2D
See also details in the article on rotating autowaves, which may appear as a spiral wave or an autowave reverberator.
Phenomena of bifurcation memory have been observed in the behaviour of the autowave reverberator in the Aliev–Panfilov model.
Three-dimensional autowaves
Examples of autowave processes in nature
Autowave regime of boiling
Autowaves in chemical solutions
An example of a chemical reaction, which in certain circumstances may produce autowave, is the Belousov–Zhabotinsky reaction.
Autowave models of biological tissues
Autowave models of retina
Autowave models of nerve fibres
Main article: Hodgkin–Huxley model
Autowave models of myocardium
The classical Wiener–Rosenblueth model, developed by Norbert Wiener and Arturo Rosenblueth.
Other examples include the FitzHugh–Nagumo model and the Beeler–Reuter model.
A dedicated article, "Autowave models of myocardium", is planned.
Autowaves in blood coagulation system
See References.
The population autowaves
Examples of individual-based models of population autowaves
See also
Dissipation
Excitable medium
Partial differential equation
Parabolic partial differential equation
Reaction–diffusion system
Self-oscillation
Self-organization
Cardiophysics
Refractory period (physiology)
Wave
Nonlinear wave (:ru:Нелинейная волна)
Standing wave
Resonance
Phase velocity
Notes
References
Books
Papers
External links
Several simple classical models of autowaves (JS + WebGL), that can be run directly in your web browser; developed by Evgeny Demidov.
Biophysics
Computational science
Biomedical cybernetics
Nonlinear systems
Mathematical modeling
Parabolic partial differential equations | Autowave | [
"Physics",
"Mathematics",
"Biology"
] | 3,056 | [
"Mathematical modeling",
"Applied and interdisciplinary physics",
"Applied mathematics",
"Computational science",
"Nonlinear systems",
"Biophysics",
"Dynamical systems"
] |
38,908,868 | https://en.wikipedia.org/wiki/Targeted%20mass%20spectrometry | Targeted mass spectrometry is a mass spectrometry technique that uses multiple stages of tandem mass spectrometry (MSn with n=2 or 3) for ions of specific mass (m/z), at specific time. The values of the m/z and time are defined in an inclusion list which is derived from a previous analysis.
Applications
Targeted analysis allows the thorough analysis of all ions, at all abundance ranges above the noise level, in any time window of the experiment. In contrast, non-targeted analysis would typically only allow detection of the most abundant 50–100 ions over the entire experiment time. This limitation of non-targeted analysis makes it less suitable for analyzing highly complex, highly dynamic samples such as human blood serum.
However, the methods for utilizing targeted mass spectrometry are still at a primitive stage, in the sense that the inclusion list used in the targeted analysis is typically typed in manually by scientists. In addition, only one inclusion list is allowed for the entire experiment. Such a manual process is both labor-intensive and error-prone. This is largely due to the lack of software to control the mass spectrometer.
Automation
There have been some efforts to automate the generation of inclusion lists through external software. In 2010, Wu et al. introduced a semi-automatic method in an effort to identify low-abundance glycopeptides. They implemented the automation through iterative experiments and the open-source software GLYPID. With minor modification, this approach can be used to analyze other simple or complex samples. In addition to the advantages mentioned before, this semi-automated approach also saves a substantial amount of time and effort otherwise spent manually picking ions and re-calibrating instruments.
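As a rough sketch of the inclusion-list idea (all names, tolerances, and values below are hypothetical, not part of any instrument API), acquisition control amounts to deciding whether an observed precursor ion matches a targeted m/z within a mass tolerance and retention-time window:

```python
from dataclasses import dataclass

@dataclass
class InclusionEntry:
    mz: float          # target mass-to-charge ratio
    rt_start: float    # retention-time window start (minutes)
    rt_end: float      # retention-time window end (minutes)

def matches_inclusion_list(mz, rt, entries, ppm_tol=10.0):
    """Return True if an observed ion falls inside any inclusion window.

    An ion matches when its m/z lies within ppm_tol parts-per-million of
    a target and its retention time lies in that target's time window.
    """
    for e in entries:
        if e.rt_start <= rt <= e.rt_end:
            if abs(mz - e.mz) / e.mz * 1e6 <= ppm_tol:
                return True
    return False

targets = [InclusionEntry(mz=721.352, rt_start=23.5, rt_end=25.0)]
print(matches_inclusion_list(721.355, 24.1, targets))  # True (~4 ppm off)
print(matches_inclusion_list(721.355, 40.0, targets))  # False (wrong time)
```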
See also
Data-independent acquisition
References
Mass spectrometry | Targeted mass spectrometry | [
"Physics",
"Chemistry"
] | 374 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
38,914,040 | https://en.wikipedia.org/wiki/Elastomeric%20bridge%20bearing | An elastomeric bridge bearing, also known as a pot bearing or elastomeric bearing, is a commonly used modern bridge bearing. The term encompasses several different types of bearings including bearing pads, bridge bearings, laminated elastomeric bearings, and seismic isolators ... which are all generally referred to as
"bridge bearing pads" in the construction industry.
The purpose of the elastomeric bearings is to support a bridge or other heavy structure in a way that permits the load to shift slightly, in a horizontal direction, relative to the ground or foundation. Without such bearings, the bridge support might crack or fracture when it moves due to ground movements or thermal expansion and contraction. Elastomeric bearing pads compress on vertical load and accommodate both horizontal rotation and horizontal shear movement.
The internal structure of an elastomeric bearing consists of three layers: a lower "pot" made of steel, which rests on the foundation or footing; a relatively thin elastomeric pad (a rectangle or disk shape) resting on the lower pot; and a steel plate loosely set on top of the elastomeric disk, on top of which the weight of the bridge rests. The bearings are often produced as a unit, ready to be installed.
The elastomeric pad may be made from any of several materials, including natural rubber, elastomers, Teflon, or synthetic rubber (such as neoprene).
Elastomeric bearing pads are the most economical solution used in construction of large span bridges and buildings.
Elastomeric bearings are often used in applications other than bridges, for example, supporting buildings that are built on soil that may shift slightly and cause a concrete load to crack in the absence of an elastomeric bearing.
Elastomeric bearings are designed and manufactured based on standards and specifications of such organizations as British Standards, AASHTO, and European Norm EN 1337.
References
External links
Pretread page on bridge bearing pads - a UAE based company which supplies bridge bearing pads
Bearings (mechanical)
Bridge components | Elastomeric bridge bearing | [
"Technology",
"Engineering"
] | 425 | [
"Civil engineering stubs",
"Bridge components",
"Civil engineering",
"Components"
] |
38,914,982 | https://en.wikipedia.org/wiki/Microbial%20phylogenetics | Microbial phylogenetics is the study of the manner in which various groups of microorganisms are genetically related. This helps to trace their evolution. To study these relationships biologists rely on comparative genomics, as physiology and comparative anatomy are not possible methods.
History
1960s–1970s
Microbial phylogenetics emerged as a field of study in the 1960s, when scientists started to create genealogical trees based on differences in the order of amino acids in proteins and of nucleotides in genes, instead of using comparative anatomy and physiology.
One of the most important figures in the early stage of this field is Carl Woese, who in his research focused on bacteria, looking at RNA instead of proteins. More specifically, he decided to compare small subunit ribosomal RNA (16S rRNA) oligonucleotides. Matching oligonucleotides in different bacteria could be compared to one another to determine how closely the organisms were related. In 1977, after collecting and comparing 16S rRNA fragments for almost 200 species of bacteria, Woese and his team concluded that the Archaebacteria were not part of Bacteria but completely independent organisms.
1980s–1990s
In the 1980s microbial phylogenetics entered its golden age, as the techniques for sequencing RNA and DNA improved greatly. For example, comparison of the nucleotide sequences of whole genes was facilitated by the development of the means to clone DNA, making it possible to create many copies of sequences from minute samples. The invention of the polymerase chain reaction (PCR) had an enormous impact on microbial phylogenetics. All these new techniques led to the formal proposal of the three domains of life: Bacteria, Archaea (Woese himself proposed this name to replace the old designation Archaebacteria), and Eukarya, arguably one of the key milestones in the history of taxonomy.
One of the intrinsic problems of studying microbial organisms was the dependence of such studies on pure cultures grown in a laboratory. Biologists tried to overcome this limitation by sequencing rRNA genes obtained from DNA isolated directly from the environment. This technique made it possible to fully appreciate that bacteria not only have the greatest diversity but also constitute the greatest biomass on Earth.
In the late 1990s sequencing of genomes from various microbial organisms started, and by 2005, 260 complete genomes had been sequenced, resulting in the classification of 33 eukaryotes, 206 eubacteria, and 21 archaeons.
2000s
In the early 2000s, scientists started creating phylogenetic trees based not on rRNA, but on other genes with different functions (for example the gene for the enzyme RNA polymerase). The resulting genealogies differed greatly from those based on rRNA. These gene histories were so different from one another that the only hypothesis that could explain the divergences was a major influence of horizontal gene transfer (HGT), a mechanism by which a bacterium can acquire one or more genes from a completely unrelated organism. HGT explains why similarities and differences in some genes have to be studied carefully before being used as a measure of genealogical relationship among microbial organisms.
Studies aimed at understanding how widespread HGT is suggested that the ease with which genes are transferred among bacteria makes it impossible to apply the 'biological species concept' to them.
Phylogenetic representation
Since Darwin, every phylogeny for every organism has been represented in the form of a tree. Nonetheless, due to the great role that HGT plays for microbes, some evolutionary microbiologists suggested abandoning this classical view in favor of a representation of genealogies more closely resembling a web, also known as a network. However, there are some issues with this network representation, such as the inability to precisely establish the donor organism of an HGT event and the difficulty of determining the correct path across organisms when multiple HGT events have happened. Therefore, there is still no consensus among biologists on which representation better fits the microbial world.
Methods for Microbial Phylogenetic Analysis
Most microbial taxa have never been cultivated or experimentally characterized. Taxonomy and phylogeny are essential tools for organizing the diversity of life. Collecting gene sequences, aligning them based on homology, and using models of mutation to infer evolutionary history are common methods for estimating microbial phylogenies. Small subunit (SSU) rRNA has revolutionized microbial classification since the 1970s and has become the most sequenced gene. The phylogenetic inferences drawn depend on the genes chosen; for example, the 16S rRNA gene is commonly selected to investigate relationships in Bacteria and Archaea, while studies of microbial eukaryotes most commonly use the 18S rRNA gene.
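As a minimal sketch of the distance-based starting point of such analyses (the aligned fragments below are invented toy data, not real 16S sequences), pairwise p-distances can be computed from an alignment; tree-building methods such as neighbor joining then operate on the resulting matrix:

```python
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences,
    ignoring alignment gaps ('-')."""
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    if not pairs:
        return 0.0
    return sum(x != y for x, y in pairs) / len(pairs)

# Toy aligned rRNA fragments (hypothetical sequences).
seqs = {
    "taxonA": "ACGGTTAGCCGT",
    "taxonB": "ACGGTTAGCCGA",
    "taxonC": "ATGGCTAGTCGA",
}
names = list(seqs)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        print(n1, n2, round(p_distance(seqs[n1], seqs[n2]), 3))
```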
Phylogenetic comparative methods
Phylogenetic comparative methods (PCMs) are commonly utilized to compare multiple traits across organisms. PCMs are not yet commonly used within the scope of microbiome studies; however, recent studies have been successful in identifying genes associated with colonization of the human gut. This challenge was addressed by measuring the statistical association between a species harboring the gene and the probability that the species is present in the gut microbiome. The analyses showcase the combination of shotgun metagenomics with phylogenetically aware models.
Ancestral state reconstruction
This method is commonly used for estimation of genetic and metabolic profiles of extant communities using a set of reference genomes, commonly performed with PICRUSt (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) in microbiome studies. PICRUSt is a computational approach capable of predicting the functional composition of a metagenome from marker-gene data and a database of reference genomes. To predict which gene families are present, PICRUSt uses an extended ancestral-state reconstruction algorithm and then combines the gene families to estimate a composite metagenome.
Analysis of phylogenetic variables and distances
Phylogenetic variables are variables constructed using features of the phylogeny to summarize and contrast data on the species in the phylogenetic tree. Microbiome datasets can be simplified using phylogenetic variables by reducing the dimensions of the data to a few variables carrying biological information. Recent methods such as PhILR and phylofactorization address the challenges of phylogenetic variable analysis. The PhILR transform combines statistical and phylogenetic models to overcome compositional-data challenges; it is created by incorporating microbial evolutionary models into the isometric log-ratio transform. Phylofactorization is a dimensionality-reducing tool used to identify edges in the phylogeny at which putative functional ecological traits may have arisen.
Challenges
Phylogenetic inference requires the assumption of common ancestry, or homology; when this assumption is violated, the signal can be disrupted by noise. It is also possible for microbial traits to be unrelated to ancestry due to horizontal gene transfer, in which case the taxonomic composition reveals little about the function of a system.
See also
Comparative genomics
Phylogenomics
Multilocus sequence typing
Bacterial taxonomy
Computational phylogenetics
History of molecular evolution
Molecular phylogenetics
Phylogenetics
References
Phylogenetics
Microorganisms
Eukaryotic microbiology | Microbial phylogenetics | [
"Biology"
] | 1,420 | [
"Bioinformatics",
"Phylogenetics",
"Microorganisms",
"Taxonomy (biology)"
] |
37,463,874 | https://en.wikipedia.org/wiki/C21H36O5 | {{DISPLAYTITLE:C21H36O5}}
The molecular formula C21H36O5 (molar mass: 368.51 g/mol, exact mass: 368.2563 u) may refer to:
Betaenone B
Carboprost
Constipatic acid
Molecular formulas | C21H36O5 | [
"Physics",
"Chemistry"
] | 68 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
37,470,192 | https://en.wikipedia.org/wiki/Nullomers | Nullomers are short sequences of DNA that do not occur in the genome of a species (for example, humans), even though they are theoretically possible. Nullomers must be under selective pressure - for example, they may be toxic to the cell. Some nullomers have been shown to be useful to treat leukemia, breast, and prostate cancer. They are not useful in healthy cells because normal cells adapt and become immune to them. Nullomers are also being developed for use as DNA tags to prevent cross contamination when analyzing crime scene material.
Background
Nullomers are naturally possible but apparently unused sequences of DNA. Determining these "forbidden" sequences can improve the understanding of the basic rules that govern sequence evolution. Sequencing entire genomes has shown that there is a high level of non-uniformity in genomic sequences. When a codon is artificially substituted with a synonymous codon, it often results in a lethal change and cell death. This is believed to be due to ribosomal stalling and early termination of protein synthesis. For example, both AGA and CGA code for arginine in bacteria; however, bacteria almost never use AGA, and substituting it in proves lethal. Such codon biases have been observed in all species and are examples of constraints on sequence evolution. Other sequences may be under selective pressure; for example, GG-rich sequences are used as sacrificial sinks for oxidative damage, because oxidizing agents are attracted to GG-rich regions and then induce strand breakage. Moreover, it has been shown that statistically significant nullomers (i.e. absent short sequences which would be highly expected to exist) in virus genomes are restriction recognition sites, indicating that viruses have probably got rid of these motifs to facilitate invasion of bacterial hosts. The Nullomers Database provides a comprehensive collection of minimal absent sequences from hundreds of species and viruses, as well as from the human and mouse proteomes.
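As a minimal sketch of the underlying computation (the sequence below is a toy example; real analyses scan whole genomes, where the shortest reported human nullomers are around 11 bases long), absent k-mers can be enumerated directly:

```python
from itertools import product

def nullomers(sequence, k):
    """Return every length-k string over ACGT absent from the sequence."""
    present = {sequence[i:i + k] for i in range(len(sequence) - k + 1)}
    return [s for s in map(''.join, product("ACGT", repeat=k))
            if s not in present]

seq = "ACGTACGGTTACCAGTAACG"   # toy sequence only
absent = nullomers(seq, 3)
print(absent[:5])
print(f"{len(absent)} of {4**3} possible 3-mers are absent")
```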
Cancer Treatment
Nullomers have been used as an approach to drug discovery and development. Nullomer peptides were screened for anti-cancer action. Absent sequences have short polyarginine tails added to increase solubility and uptake into the cell, producing peptides called PolyArgNulloPs. One successful sequence, RRRRRNWMWC, was demonstrated to have lethal effects in breast and prostate cancer. It damaged mitochondria by increasing ROS production, which reduced ATP production, leading to cell growth inhibition and cell death. Normal cells show a decreased sensitivity to PolyArgNulloPs over time.
Forensics
Accidental transfer of biological material containing DNA can produce misleading results. This is a particularly important consideration in forensic and crime labs, where mistakes can cause an innocent person to be convicted of a crime. Previously there was no way to detect whether a reference sample had been mislabeled as evidence or whether a forensic sample was contaminated, but a nullomer barcode can be added to reference samples to distinguish them from evidence on analysis. Tagging can be carried out during sample collection without affecting genotyping or quantification results. Filter paper impregnated with various nullomers can be used to soak up and store DNA samples from a crime scene, making the technology simple and effective. Nullomer tags are readily detectable: even when diluted a million-fold and spilled on evidence, the tags are still clearly detected. Tagging in this way supports the National Research Council's recommendations on quality control to reduce fraud and mistakes.
References
Amino acids
DNA
Genetics techniques
Genomics
Human mitochondrial genetics
Nucleotides | Nullomers | [
"Chemistry",
"Engineering",
"Biology"
] | 730 | [
"Amino acids",
"Biomolecules by chemical classification",
"Genetics techniques",
"Genetic engineering"
] |
31,827,826 | https://en.wikipedia.org/wiki/TIARA%20%28database%29 | The Integrated Archive of Short-Read and Array (TIARA) database contains personal genomic information obtained from next generation sequencing techniques and ultra-high-resolution comparative genomic hybridization.
See also
Personal genomics
References
External links
http://tiara.gmi.ac.kr
DNA sequencing
Genome databases | TIARA (database) | [
"Chemistry",
"Biology"
] | 65 | [
"Molecular biology techniques",
"DNA sequencing"
] |
31,830,485 | https://en.wikipedia.org/wiki/WASP-43 | WASP-43 is a K-type star about away in the Sextans constellation. It is about half the size of the Sun, and has approximately half the mass. WASP-43 has one known planet in orbit, a Hot Jupiter called WASP-43b. At the time of publishing of WASP-43b's discovery on April 15, 2011, the planet was the most closely orbiting Hot Jupiter discovered. The small orbit of WASP-43b is thought to be caused by WASP-43's unusually low mass. WASP-43 was first observed between January and May 2009 by the SuperWASP project, and was found to be cooler and slightly richer in metals than the Sun. WASP-43 has also been found to be an active star that rotates at a high velocity.
Nomenclature
The designation WASP-43 indicates that this was the 43rd star found to have a planet by the Wide Angle Search for Planets.
In August 2022, this planetary system was included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from Romania, were announced in June 2023. WASP-43 is named Gnomon and its planet is named Astrolábos, after the gnomon and the Greek word for the astrolabe.
Observational history
WASP-43 was first observed by the WASP-South part of the planet-searching SuperWASP project between January and May 2009. It was determined from the collected data that WASP-43 could potentially host a planet that transited, or crossed in front of, its host star as seen from Earth. Later observations by both the WASP-South and SuperWASP-North sections of SuperWASP between January and May 2010 yielded a total of 13,768 data points.
Scientists inferred a 0.81-day orbital period of a possible planet from the data, and followed up with observations using the CORALIE spectrograph on the Leonhard Euler Telescope at Chile's La Silla Observatory. CORALIE provided radial velocity measurements which indicated that WASP-43 was being transited by a planet of 1.8 times Jupiter's mass, now dubbed WASP-43b. Another follow-up using the TRAPPIST telescope further defined the light curve of the body transiting WASP-43.
WASP-43b's discovery was reported on April 15, 2011 in the journal Astronomy and Astrophysics.
Characteristics
WASP-43 is a K-type star with a mass that is 0.72 times that of the Sun and a radius that is 0.67 times that of the Sun. With an effective temperature of 4400 K, WASP-43 is cooler than the Sun. It also has a slightly lower quantity of iron than the Sun, with a measured metallicity of [Fe/H] = -0.05 (89% of that measured in the Sun). However, in general, the star has a slightly larger quantity of metals than the Sun. A notable exception is lithium, which is not present in WASP-43's spectrum. The star's spectrum also indicates that WASP-43 is an active star. WASP-43 has been found to rotate quickly; the exact mechanism behind this rapid rotation is uncertain, but it may be caused by tidal interactions between WASP-43 and its planet.
With an apparent magnitude of 12.4, WASP-43 cannot be seen with the unaided eye. The star is located approximately 80 parsecs (260 light years) away from Earth.
Planetary system
WASP-43b is a Hot Jupiter with a mass that is 1.78 times the mass of Jupiter and a radius that is 0.93 times Jupiter's radius. WASP-43b orbits its host star every 0.813475 days (19.5234 hours) at a distance of 0.0142 AU, the closest orbit found at the time of WASP-43b's discovery. WASP-43's unusually low mass accounts for WASP-43b's small orbit. Because planets in orbits around stars like WASP-43 are not usually observed, models suggest that planets like WASP-43b are either uncommon or have short lifetimes caused by a decay in their orbits. WASP-43b has a density of 2.20 g/cm3.
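As a back-of-the-envelope check rather than the published fit, Kepler's third law relates the orbital period and the stellar mass to the semi-major axis; since the quoted 0.0142 AU comes from a full photometric and spectroscopic analysis (with earlier, lower stellar-mass estimates), a simple two-body estimate using the mass given above differs slightly:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def semi_major_axis(period_days, star_mass_solar):
    """Kepler's third law, a = (G*M*P^2 / (4*pi^2))**(1/3),
    neglecting the planet's mass."""
    P = period_days * 86400.0
    M = star_mass_solar * M_SUN
    a = (G * M * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return a / AU

# Orbital period of WASP-43b and the stellar mass quoted in this article.
print(round(semi_major_axis(0.813475, 0.72), 4), "AU")  # about 0.015 AU
```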
References
Planetary systems with one confirmed planet
Sextans
Planetary transit variables
K-type main-sequence stars
J10193800-0948225
043
0656
Gnomon | WASP-43 | [
"Astronomy"
] | 923 | [
"Constellations",
"Sextans",
"Astronomy organizations",
"Wide Angle Search for Planets"
] |
31,831,357 | https://en.wikipedia.org/wiki/Glossary%20of%20elementary%20quantum%20mechanics | This is a glossary for the terminology often encountered in undergraduate quantum mechanics courses.
Cautions:
Different authors may have different definitions for the same term.
The discussions are restricted to the Schrödinger picture and non-relativistic quantum mechanics.
Notation:
$|x\rangle$ - position eigenstate
$|\psi\rangle$ - wave function of the state of the system
$\Psi$ - total wave function of a system
$\psi$ - wave function of a system (maybe a particle)
$\psi(x)$ - wave function of a particle in position representation, equal to $\langle x|\psi\rangle$
Formalism
Kinematical postulates
a complete set of wave functions
A basis of the Hilbert space of wave functions with respect to a system.
bra
The Hermitian conjugate of a ket is called a bra: $\langle\psi| = (|\psi\rangle)^\dagger$. See "bra–ket notation".
Bra–ket notation
The bra–ket notation is a way to represent the states and operators of a system by angle brackets and vertical bars, for example, $|\psi\rangle$ and $\langle\phi|\psi\rangle$.
Density matrix
Physically, the density matrix is a way to represent pure states and mixed states. The density matrix of a pure state whose ket is $|\psi\rangle$ is $\rho = |\psi\rangle\langle\psi|$.
Mathematically, a density matrix has to satisfy the following conditions: $\rho = \rho^\dagger$, $\operatorname{Tr}\rho = 1$, and $\rho \geq 0$ (positive semi-definite).
Density operator
Synonymous to "density matrix".
Dirac notation
Synonymous to "bra–ket notation".
Hilbert space
Given a system, the possible pure state can be represented as a vector in a Hilbert space. Each ray (vectors differ by phase and magnitude only) in the corresponding Hilbert space represent a state.
Ket
A wave function expressed in the form $|\psi\rangle$ is called a ket. See "bra–ket notation".
Mixed state
A mixed state is a statistical ensemble of pure states.
Criterion: a state with density matrix $\rho$ is pure if $\operatorname{Tr}(\rho^2) = 1$, and mixed if $\operatorname{Tr}(\rho^2) < 1$.
Normalizable wave function
A wave function $|\psi\rangle$ is said to be normalizable if $\langle\psi|\psi\rangle < \infty$. A normalizable wave function can be made to be normalized by $|\psi\rangle \to \frac{|\psi\rangle}{\sqrt{\langle\psi|\psi\rangle}}$.
Normalized wave function
A wave function $|\psi\rangle$ is said to be normalized if $\langle\psi|\psi\rangle = 1$.
Pure state
A state which can be represented as a wave function / ket in Hilbert space / solution of the Schrödinger equation is called a pure state. See "mixed state".
Quantum numbers
a way of representing a state by several numbers, which corresponds to a complete set of commuting observables.
A common example of quantum numbers is the possible state of an electron in a central potential: $(n, l, m, s)$, which corresponds to the eigenstate of the observables $H$ (in terms of $n$), $L^2$ (magnitude of angular momentum, in terms of $l$), $L_z$ (angular momentum in the $z$-direction, in terms of $m$), and $S_z$ (in terms of $s$).
Spin wave function
Part of a wave function of particle(s). See "total wave function of a particle".
Spinor
Synonymous to "spin wave function".
Spatial wave function
Part of a wave function of particle(s). See "total wave function of a particle".
State
A state is a complete description of the observable properties of a physical system.
Sometimes the word is used as a synonym of "wave function" or "pure state".
State vector
synonymous to "wave function".
Statistical ensemble
A large number of copies of a system.
System
A sufficiently isolated part in the universe for investigation.
Tensor product of Hilbert space
When we are considering the total system as a composite system of two subsystems A and B, the wave functions of the composite system are in the Hilbert space $H_A \otimes H_B$, if the Hilbert spaces of the wave functions for A and B are $H_A$ and $H_B$ respectively.
Total wave function of a particle
For single-particle system, the total wave function of a particle can be expressed as a product of spatial wave function and the spinor. The total wave functions are in the tensor product space of the Hilbert space of the spatial part (which is spanned by the position eigenstates) and the Hilbert space for the spin.
Wave function
The word "wave function" could mean one of following:
A vector in Hilbert space which can represent a state; synonymous to "ket" or "state vector".
The state vector in a specific basis. It can be seen as a covariant vector in this case.
The state vector in position representation, e.g. $\psi(x) = \langle x|\psi\rangle$, where $|x\rangle$ is the position eigenstate.
Dynamics
Degeneracy
See "degenerate energy level".
Degenerate energy level
If different states (wave functions which are not scalar multiples of each other) have the same energy, the energy level is called degenerate.
There is no degeneracy for bound states in a 1D system.
Energy spectrum
The energy spectrum refers to the possible energy of a system.
For a bound system (bound states), the energy spectrum is discrete; for an unbound system (scattering states), the energy spectrum is continuous.
related mathematical topics: Sturm–Liouville equation
Hamiltonian
The operator represents the total energy of the system.
Schrödinger equation
The Schrödinger equation relates the Hamiltonian operator acting on a wave function to its time evolution (Equation 1): $i\hbar \frac{\partial}{\partial t} |\psi(t)\rangle = \hat H |\psi(t)\rangle$. Equation (1) is sometimes called the "time-dependent Schrödinger equation" (TDSE).
Time-Independent Schrödinger Equation (TISE)
A modification of the Time-Dependent Schrödinger equation as an eigenvalue problem. The solutions are energy eigenstates of the system (Equation 2): $\hat H |\psi\rangle = E |\psi\rangle$.
Dynamics related to single particle in a potential / other spatial properties
In this situation, the SE is given by the form $i\hbar \frac{\partial}{\partial t} \Psi(\vec r, t) = \left(-\frac{\hbar^2}{2m}\nabla^2 + V(\vec r)\right)\Psi(\vec r, t)$. It can be derived from (1) by considering $\Psi(\vec r, t) = \langle \vec r \,|\, \psi(t) \rangle$ and $\hat H = \frac{\hat p^2}{2m} + V(\hat{\vec r})$.
Bound state
A state is called a bound state if its position probability density at infinity tends to zero for all times. Roughly speaking, we can expect to find the particle(s) in a finite-size region with certain probability. More precisely, $|\psi(\vec r, t)|^2 \to 0$ when $|\vec r| \to \infty$, for all $t$.
There is a criterion in terms of energy:
Let $E = \langle\psi|\hat H|\psi\rangle$ be the expectation energy of the state. It is a bound state if and only if $E < \lim_{|\vec r|\to\infty} V(\vec r)$.
Position representation and momentum representation
Position representation of a wave function: $\psi(x) = \langle x|\psi\rangle$,
momentum representation of a wave function: $\tilde\psi(p) = \langle p|\psi\rangle$;
where $|x\rangle$ is the position eigenstate and $|p\rangle$ the momentum eigenstate respectively.
The two representations are linked by Fourier transform.
Probability amplitude
A probability amplitude is a complex number of the form $\langle\phi|\psi\rangle$.
Probability current
Having the metaphor of probability density as mass density, the probability current is the current $\vec j = \frac{\hbar}{2mi}\left(\psi^* \nabla\psi - \psi \nabla\psi^*\right)$. The probability current and probability density together satisfy the continuity equation: $\frac{\partial}{\partial t}|\psi|^2 + \nabla \cdot \vec j = 0$
Probability density
Given the wave function $\psi(\vec r, t)$ of a particle, $|\psi(\vec r, t)|^2$ is the probability density at position $\vec r$ and time $t$; $|\psi(\vec r, t)|^2 \, d^3 r$ means the probability of finding the particle in a small volume $d^3 r$ near $\vec r$.
Scattering state
The wave function of a scattering state can be understood as a propagating wave. See also "bound state".
There is a criterion in terms of energy:
Let $E = \langle\psi|\hat H|\psi\rangle$ be the expectation energy of the state. It is a scattering state if and only if $E \geq \lim_{|\vec r|\to\infty} V(\vec r)$.
Square-integrable
Square-integrable is a necessary condition for a function being the position/momentum representation of a wave function of a bound state of the system.
Given the position representation of a state vector of a wave function, square-integrable means:
1D case: $\int_{-\infty}^{\infty} |\psi(x)|^2 \, dx < \infty$.
3D case: $\int |\psi(\vec r)|^2 \, d^3 r < \infty$.
Stationary state
A stationary state of a bound system is an eigenstate of the Hamiltonian operator. Classically, it corresponds to a standing wave. It is equivalent to the following things:
an eigenstate of the Hamiltonian operator
an eigenfunction of Time-Independent Schrödinger Equation
a state of definite energy
a state which "every expectation value is constant in time"
a state whose probability density $|\psi(x,t)|^2$ does not change with respect to time, i.e. $\frac{\partial}{\partial t}|\psi(x,t)|^2 = 0$
Measurement postulates
Born's rule
The probability of the state $|\psi\rangle$ collapsing to an eigenstate $|\phi_i\rangle$ of an observable is given by $P_i = |\langle \phi_i | \psi \rangle|^2$.
Collapse
"Collapse" means the sudden process which the state of the system will "suddenly" change to an eigenstate of the observable during measurement.
Eigenstates
An eigenstate of an operator $\hat A$ is a vector $|\psi\rangle$ satisfying the eigenvalue equation $\hat A |\psi\rangle = a |\psi\rangle$, where $a$ is a scalar.
Usually, in bra–ket notation, the eigenstate will be represented by its corresponding eigenvalue if the corresponding observable is understood.
Expectation value
The expectation value $\langle M \rangle$ of the observable $M$ with respect to a state $|\psi\rangle$ is the average outcome of measuring $M$ with respect to an ensemble of states $|\psi\rangle$.
$\langle M \rangle$ can be calculated by: $\langle M \rangle = \langle \psi | M | \psi \rangle$.
If the state is given by a density matrix $\rho$, then $\langle M \rangle = \operatorname{Tr}(M\rho)$.
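As a numerical illustration (the observable and the states below are arbitrary single-qubit examples, not tied to any particular system), both expressions can be checked with a few lines of linear algebra:

```python
import numpy as np

# Pauli-Z observable and a mixed qubit state: 75% |0><0| + 25% |1><1|.
M = np.array([[1, 0], [0, -1]], dtype=complex)
rho = 0.75 * np.outer([1, 0], [1, 0]) + 0.25 * np.outer([0, 1], [0, 1])

# For a density matrix, <M> = Tr(M rho).
print(np.trace(M @ rho).real)   # 0.75 - 0.25 = 0.5

# For a pure state |psi>, <M> = <psi|M|psi>, which agrees with the
# trace formula when rho = |psi><psi|.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(np.vdot(psi, M @ psi).real)                     # 0.0
print(np.trace(M @ np.outer(psi, psi.conj())).real)   # 0.0 (same value)
```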
Hermitian operator
An operator $\hat M$ satisfying $\hat M = \hat M^\dagger$.
Equivalently, $\langle \phi | \hat M | \psi \rangle = \langle \psi | \hat M | \phi \rangle^*$ for all allowable wave functions $|\phi\rangle$, $|\psi\rangle$.
Observable
Mathematically, it is represented by a Hermitian operator.
Indistinguishable particles
Exchange
Intrinsically identical particles
If the intrinsic properties (properties that can be measured but are independent of the quantum state, e.g. charge, total spin, mass) of two particles are the same, they are said to be (intrinsically) identical.
Indistinguishable particles
If a system shows measurable differences when one of its particles is replaced by another particle, these two particles are called distinguishable.
Bosons
Bosons are particles with integer spin (s = 0, 1, 2, ... ). They can either be elementary (like photons) or composite (such as mesons, nuclei or even atoms). There are five known elementary bosons: the four force carrying gauge bosons γ (photon), g (gluon), Z (Z boson) and W (W boson), as well as the Higgs boson.
Fermions
Fermions are particles with half-integer spin (s = 1/2, 3/2, 5/2, ... ). Like bosons, they can be elementary or composite particles. There are two types of elementary fermions: quarks and leptons, which are the main constituents of ordinary matter.
Anti-symmetrization of wave functions
Symmetrization of wave functions
Pauli exclusion principle
Quantum statistical mechanics
Bose–Einstein distribution
Bose–Einstein condensation
Bose–Einstein condensation state (BEC state)
Fermi energy
Fermi–Dirac distribution
Slater determinant
Nonlocality
Entanglement
Bell's inequality
Entangled state
separable state
no-cloning theorem
Rotation: spin/angular momentum
Spin
angular momentum
Clebsch–Gordan coefficients
singlet state and triplet state
Approximation methods
adiabatic approximation
Born–Oppenheimer approximation
WKB approximation
time-dependent perturbation theory
time-independent perturbation theory
Historical Terms / semi-classical treatment
Ehrenfest theorem
A theorem connecting classical mechanics with results derived from the Schrödinger equation.
first quantization
wave–particle duality
Uncategorized terms
uncertainty principle
Canonical commutation relations
The canonical commutation relations are the commutators between canonically conjugate variables. For example, position $\hat x$ and momentum $\hat p$: $[\hat x, \hat p] = i\hbar$
Path integral
wavenumber
See also
Mathematical formulations of quantum mechanics
List of mathematical topics in quantum theory
List of quantum-mechanical potentials
Introduction to quantum mechanics
Notes
References
Elementary textbooks
Graduate textook
Other
Quantum Mechanics, Glossary Of Elementary
Quantum mechanics
Wikipedia glossaries using description lists | Glossary of elementary quantum mechanics | [
"Physics"
] | 2,222 | [
"Theoretical physics",
"Quantum mechanics"
] |
34,950,733 | https://en.wikipedia.org/wiki/Bochner%27s%20theorem%20%28Riemannian%20geometry%29 | In mathematics, Salomon Bochner proved in 1946 that any Killing vector field of a compact Riemannian manifold with negative Ricci curvature must be zero. Consequently the isometry group of the manifold must be finite.
Discussion
The theorem is a corollary of Bochner's more fundamental result which says that on any connected Riemannian manifold of negative Ricci curvature, the length of a nonzero Killing vector field cannot have a local maximum. In particular, on a closed Riemannian manifold of negative Ricci curvature, every Killing vector field is identically zero. Since the isometry group of a complete Riemannian manifold is a Lie group whose Lie algebra is naturally identified with the vector space of Killing vector fields, it follows that the isometry group is zero-dimensional. Bochner's theorem then follows from the fact that the isometry group of a closed Riemannian manifold is compact.
Bochner's result on Killing vector fields is an application of the maximum principle as follows. For any vector field $X$ on a pseudo-Riemannian manifold, the product rule gives the formula
$$\Delta\tfrac{1}{2}|X|^2 = |\nabla X|^2 + \langle \Delta X, X \rangle,$$
in which $\Delta X$ denotes the rough Laplacian of $X$. As an application of the Ricci commutation identities, a Killing vector field satisfies $\Delta X = -\operatorname{Ric}(X)$, so in that case the formula simplifies to
$$\Delta\tfrac{1}{2}|X|^2 = |\nabla X|^2 - \operatorname{Ric}(X,X).$$
In the case of a Riemannian metric, the left-hand side is nonpositive at any local maximum of the length of $X$. However, on a Riemannian metric of negative Ricci curvature, the right-hand side is strictly positive wherever $X$ is nonzero. So if $|X|$ has a local maximum, then $X$ must be identically zero in a neighborhood. Since Killing vector fields on connected manifolds are uniquely determined from their value and derivative at a single point, it follows that $X$ must be identically zero.
Notes
References
Theorems in differential geometry | Bochner's theorem (Riemannian geometry) | [
"Mathematics"
] | 367 | [
"Theorems in differential geometry",
"Theorems in geometry"
] |
34,953,559 | https://en.wikipedia.org/wiki/Sieverts%27%20law | Sieverts' law, in physical metallurgy and in chemistry, is a rule to predict the solubility of gases in metals. It is named after German chemist Adolf Sieverts (1874–1947). The law states that the solubility of a diatomic gas in metal is proportional to the square root of the partial pressure of the gas in thermodynamic equilibrium. Hydrogen, oxygen and nitrogen are examples of dissolved diatomic gases of frequent interest in metallurgy.
Justification
Sieverts' law can be readily rationalized by considering the reaction of dissolution of the gas in the metal, which involves dissociation of the molecule of the gas. For example, for nitrogen:
N2 (molecular gas) ⇌ 2 N (dissolved atoms)
For the above reaction, the equilibrium constant $K$ is
$$K = \frac{c_\text{at}^2}{p_\text{mol}}$$
where:
$c_\text{at}$ is the concentration of the dissolved atoms in the metal (in the case above, atomic nitrogen N),
$p_\text{mol}$ is the partial pressure of the gas at the interface with the metal (in the case above, molecular nitrogen N2).
Therefore,
$$c_\text{at} = \sqrt{K}\,\sqrt{p_\text{mol}}\,,$$
i.e. the dissolved concentration is proportional to the square root of the partial pressure, as the law states.
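A minimal numerical sketch of the law (the constant K below is an arbitrary illustrative value; real constants are tabulated per gas–metal pair and temperature):

```python
import math

def sieverts_solubility(pressure, K):
    """Dissolved-atom concentration from Sieverts' law: c = K * sqrt(p).
    K lumps the equilibrium constant; units depend on the data source."""
    return K * math.sqrt(pressure)

# Doubling the gas pressure raises the solubility by sqrt(2), not 2.
for p in (1.0, 2.0, 4.0):
    print(p, sieverts_solubility(p, K=0.05))
```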
See also
Henry's law
Graham's law
References
Metallurgy
Materials science | Sieverts' law | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 238 | [
"Metallurgy",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
34,955,091 | https://en.wikipedia.org/wiki/Conley%E2%80%93Zehnder%20theorem | In mathematics, the Conley–Zehnder theorem, named after Charles C. Conley and Eduard Zehnder, provides a lower bound for the number of fixed points of Hamiltonian diffeomorphisms of standard symplectic tori in terms of the topology of the underlying tori. The lower bound is one plus the cup-length of the torus (thus 2n+1, where 2n is the dimension of the considered torus), and it can be strengthen to the rank of the homology of the torus (which is 22n) provided all the fixed points are non-degenerate, this latter condition being generic in the C1-topology.
The theorem was conjectured by Vladimir Arnold, and it was known as the Arnold conjecture on fixed points of symplectomorphisms. Its validity was later extended to more general closed symplectic manifolds by Andreas Floer and several others.
References
Dynamical systems
Fixed points (mathematics)
Theorems in analysis | Conley–Zehnder theorem | [
"Physics",
"Mathematics"
] | 206 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical analysis stubs",
"Fixed points (mathematics)",
"Topology",
"Mechanics",
"Mathematical problems",
"Mathematical theorems",
"Dynamical systems"
] |
34,957,160 | https://en.wikipedia.org/wiki/Rademacher%E2%80%93Menchov%20theorem | In mathematical analysis, the Rademacher–Menchov theorem, introduced by and , gives a sufficient condition for a series of orthogonal functions on an interval to converge almost everywhere.
Statement
If the coefficients $c_\nu$ of a series of bounded orthogonal functions on an interval satisfy
$$\sum_{\nu} |c_\nu|^2 \log^2 \nu < \infty,$$
then the series converges almost everywhere.
References
Theorems in analysis | Rademacher–Menchov theorem | [
"Mathematics"
] | 69 | [
"Mathematical analysis",
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical problems"
] |
34,959,071 | https://en.wikipedia.org/wiki/Baer%20group | In mathematics, a Baer group is a group in which every cyclic subgroup is subnormal. Every Baer group is locally nilpotent.
Baer groups are named after Reinhold Baer.
References
Properties of groups | Baer group | [
"Mathematics"
] | 48 | [
"Mathematical structures",
"Algebraic structures",
"Properties of groups"
] |
34,960,097 | https://en.wikipedia.org/wiki/Kris%20Sigurdson | Kris Sigurdson is a Canadian physicist and cosmologist. He is an associate professor in the University of British Columbia's department of physics and astronomy in Vancouver, British Columbia. He was previously a NASA Hubble Fellow and Member of the Institute for Advanced Study. He received a Ph.D. in physics from the California Institute of Technology.
Sigurdson is known for his work on the effects of dark matter interactions on cosmological perturbations, new models of dark matter particle physics, and the potential for observing signatures of the multiverse with cosmology. His other work includes contributions in the physics of the early universe, cosmological perturbation theory, and cosmic 21-cm fluctuations.
In 2010, he co-authored a paper proposing the theory of hylogenesis, a theory of the origin of matter that links the formation of dark matter to baryogenesis. The theory predicts that in the long term protons or neutrons can be destroyed by interactions with dark matter.
References
21st-century Canadian physicists
Canadian cosmologists
Theoretical physicists
Academic staff of the University of British Columbia
California Institute of Technology alumni
Living people
Year of birth missing (living people) | Kris Sigurdson | [
"Physics"
] | 244 | [
"Theoretical physics",
"Theoretical physicists"
] |
26,179,180 | https://en.wikipedia.org/wiki/Activated%20sludge%20model | Activated sludge model is a generic name for a group of mathematical methods to model activated sludge systems. The research in this area is coordinated by a task group of the International Water Association (IWA). Activated sludge models are used in scientific research to study biological processes in hypothetical systems. They can also be applied on full scale wastewater treatment plants for optimisation, when carefully calibrated with reference data for sludge production and nutrients in the effluent.
Around 1983 a task group of the International Association on Water Quality (one of the associations that formed IWA) was formed. It started working on a generalised framework for mathematical models that could be used to model activated sludge for nitrogen removal. One of the main goals was to develop a model whose complexity was as low as possible and simple to represent, though still able to accurately predict the biological processes. After four years, the first IAWQ model, named ASM1, was ready; it incorporated a basic model taking into account chemical oxygen demand (COD), bacterial growth, and biomass degradation.
An activated sludge model consists of:
state variables: these include the different fractions of COD, biomass and different types of nutrients, both organic and inorganic
a description of the dynamic processes: lists the different biological processes that are modelled, together with their rate formulae (a minimal sketch of one such rate expression follows this list)
parameters: variables that describe the circumstances of the biological system, such as growth and decay rate, half-saturation coefficient for hydrolysis, etc.
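As an illustration of the kind of process-rate formula such models use (a Monod-type saturation expression of the sort used in ASM1 for aerobic heterotrophic growth; the default parameter values below are merely typical textbook magnitudes, not calibrated values):

```python
def aerobic_growth_rate(S_s, S_o, X_bh, mu_max=6.0, K_s=20.0, K_oh=0.2):
    """Monod-type process rate of the ASM1 form:

        rate = mu_max * S_s/(K_s + S_s) * S_o/(K_oh + S_o) * X_bh

    S_s  - readily biodegradable substrate (g COD / m^3)
    S_o  - dissolved oxygen (g O2 / m^3)
    X_bh - heterotrophic biomass (g COD / m^3)
    The saturation terms switch the process off when substrate or
    oxygen runs out; parameter values here are only illustrative.
    """
    return mu_max * S_s / (K_s + S_s) * S_o / (K_oh + S_o) * X_bh

print(aerobic_growth_rate(S_s=30.0, S_o=2.0, X_bh=1500.0))
```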
History
Before work on ASM1 started in 1983, there were already some 15 years of experience in activated sludge modelling, although every research group that worked on mathematical systems of activated sludge created its own model framework, incompatible to all others. ASM1 therefore catalysed the research and had a major impact on activated sludge modelling.
ASM1 was the foundation for numerous extensions. These extensions include, for example, better prediction of nitrogen and phosphorus removal. Widely used extended models include ASM2, ASM2d, and ASM3P. At the time of publication of the ASM1 model, biological phosphorus removal was already in use, although the process was not completely understood at that time. Basic knowledge of phosphorus-removing bacteria was included and parameters were adjusted accordingly; hence, eight years later, ASM2 was published in 1995.
ASM1 does not include the role of phosphorus accumulating organisms nor the relationship between biological phosphorus removal and removal of nitrogen. An enhanced version of ASM1, simply named ASM2, was developed to include biological and chemical phosphorus removal. As scientific understanding grew in the late 1990s, ASM2 was extended into ASM2d, principally by the addition of anoxic as well as aerobic uptake of phosphorus.
Availability
The IWA activated sludge models are commonly used in existing programs. The most common are Biowin, GPS-X, WEST, STOAT, SIMBA#, SUMO, and ASIM. In addition there are Matlab implementations, Fortran code in the COST 682 Benchmarking report, and Modelica code in the Modelica WasteWater library.
See also
Activated sludge
Bacterial growth
Membrane bioreactors
Michaelis-Menten kinetics
Monod equation
References
External links
Various PhD theses on modelling activated sludge systems
Detailed algorithms for ASM1 and Takacs settling tank model
Sewerage | Activated sludge model | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 686 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
26,180,083 | https://en.wikipedia.org/wiki/Maxwell%E2%80%93Stefan%20diffusion | The Maxwell–Stefan diffusion (or Stefan–Maxwell diffusion) is a model for describing diffusion in multicomponent systems. The equations that describe these transport processes have been developed independently and in parallel by James Clerk Maxwell for dilute gases and Josef Stefan for liquids. The Maxwell–Stefan equation is
∇: vector differential operator
χ: Mole fraction
μ: Chemical potential
a: Activity
i, j: Indexes for component i and j
n: Number of components
$D_{ij}$: Maxwell–Stefan diffusion coefficient
$\vec v_i$: Diffusion velocity of component i
$c_i$: Molar concentration of component i
c: Total molar concentration
$\vec J_i$: Flux of component i
The equation assumes steady state, i.e., the neglect of time derivatives in the velocity.
The basic assumption of the theory is that a deviation from equilibrium between the molecular friction and thermodynamic interactions leads to the diffusion flux. The molecular friction between two components is proportional to their difference in speed and their mole fractions. In the simplest case, the gradient of chemical potential is the driving force of diffusion. For complex systems, such as electrolytic solutions, and other drivers, such as a pressure gradient, the equation must be expanded to include additional terms for interactions.
A major disadvantage of the Maxwell–Stefan theory is that the diffusion coefficients, with the exception of those for the diffusion of dilute gases, do not correspond to the Fick diffusion coefficients and are therefore not tabulated. Only the diffusion coefficients for the binary and ternary cases can be determined with reasonable effort. In a multicomponent system, a set of approximate formulas exists to predict the Maxwell–Stefan diffusion coefficients.
The Maxwell–Stefan theory is more comprehensive than the "classical" Fick's diffusion theory, as the former does not exclude the possibility of negative diffusion coefficients. It is possible to derive Fick's theory from the Maxwell–Stefan theory.
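As a hedged numerical sketch of that link in the binary case (the one-constant Margules activity model and the parameter value below are arbitrary choices for illustration, not from the references), the Fick diffusivity can be obtained from the Maxwell–Stefan diffusivity via the thermodynamic factor:

```python
def thermodynamic_factor(x1, A):
    """Gamma = 1 + x1 * dln(gamma1)/dx1 for a one-constant Margules
    model, ln(gamma1) = A*(1 - x1)**2, giving Gamma = 1 - 2*A*x1*(1 - x1).
    For A > 2 the factor turns negative inside the demixing region,
    which is how negative Fick coefficients can arise."""
    return 1.0 - 2.0 * A * x1 * (1.0 - x1)

def fick_diffusivity(D_ms, x1, A):
    """Binary case: D_Fick = D_MS * Gamma (Gamma = 1 for ideal mixtures)."""
    return D_ms * thermodynamic_factor(x1, A)

# Illustrative numbers only: D_MS = 1e-9 m^2/s, Margules parameter A = 1.5.
for x1 in (0.1, 0.5, 0.9):
    print(x1, fick_diffusivity(1.0e-9, x1, A=1.5))
```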
See also
Advanced Simulation Library
Pervaporation
References
Diffusion
James Clerk Maxwell | Maxwell–Stefan diffusion | [
"Physics",
"Chemistry"
] | 392 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
26,183,772 | https://en.wikipedia.org/wiki/Tail%20suspension%20test | The tail suspension test (TST) is an experimental method used in scientific research to measure stress in rodents. It is based on the observation that if a mouse is subjected to short term inescapable stress then the mouse will become immobile. It is used to measure the effectiveness of antidepressant-like agents but there is significant controversy over its interpretation and usefulness.
History
The TST was introduced in 1985 in the wake of the popularity of a similar test called the forced swim test (FST). However, the test only became popular in the 2000s, when data showed that animals change their behavior when injected with antidepressants. The TST is more reliable when done in conjunction with other depression models such as the FST, learned helplessness, anhedonia models, and olfactory bulbectomy.
Modeling depression
Depression is a complex multi-faceted disorder with symptoms that can have multiple causes such as psychological, behavioral, and genetics. Since there are so many variables it is hard to model in a lab setting. Patients with depression do not always show the same set of symptoms and often present with co-occurring psychiatric conditions.
A major difficulty in modeling depression is that psychiatrists who clinically diagnose depression follow the Diagnostic and Statistical Manual (DSM IV) of the American Psychiatric Association, which involves self-reporting from patients on how they feel. Since animals cannot explain to us how they feel, animals cannot be diagnosed as clinically depressed. While there are theories that animals can experience a condition similar to depression, it is important to keep in mind that depression is, by definition, a human disease. Human and animal brains are considerably different, and care must be taken when interpreting animal behavior and assigning emotional states to various behaviors.
However, there are discrete elements of depression that can be modeled in a lab setting. Stress-induced immobilization is one behavior that can be useful in modeling aspects of depression. If a rodent is subjected to the short-term inescapable stress of being suspended in the air, it will develop an immobile posture. Immobility in the TST can be interpreted as the animal ceasing to put in the effort to escape. This is often interpreted as behavioral despair, and could be considered a model of the hopelessness and despair experienced by those with depression.
The main strength of the tail suspension test is its predictive validity– performance on the test can be altered by drugs that improve depressive symptoms in people. Specifically, if antidepressant agents are administered before the test, the animal will struggle for a longer period of time than if not and exhibit more escape behaviors. Thus, it is widely used for assessing the antidepressant effects of new pharmacological compounds.
Procedure
The animal is suspended by its tail from a tube for five minutes, approximately 10 cm above the ground. During this time the animal will try to escape and reach for the ground. The time until it remains immobile is measured. Each animal is tested only once and out of view of the other animals. A study includes two groups of animals: a control group injected with saline and a test group injected with the antidepressant-like agent.
Controversy
There are mixed opinions about the TST. A common criticism is that it can take weeks before a noticeable effect is observed in patients who take antidepressants regularly, whereas the TST measures only a single acute antidepressant dose over 5–6 minutes.
The TST has predictive reliability for known antidepressant agents. However, when testing drugs with unknown mechanisms, the prediction rate is unclear. While the TST detects NK1 receptor antagonists, which have known antidepressant action, it does not detect CRF1 receptor antagonists, which also have antidepressant functions.
Some consider the TST to be a test of antidepressant function, rather than a model of depression itself. This is largely because the test measures behavioral response to a short-term stressor, whereas human depression is a long-term condition.
Difference from the forced swim test
The TST is more sensitive to antidepressant agents than the FST because the animal remains immobile longer in the TST than in the FST. The FST is not as reliable as the TST because immobility in the animal could be due to the shock of being dropped in water, which also risks hypothermia. While the mechanisms through which the TST and FST produce stress are unknown, it is clear that, although overlapping, the two tests produce immobility through stress differently.
See also
Animal models of depression
Behavioural despair test
Learned helplessness
Open field (animal test)
References
Animal testing techniques
Psychology experiments | Tail suspension test | [
"Chemistry"
] | 975 | [
"Animal testing",
"Animal testing techniques"
] |
26,184,087 | https://en.wikipedia.org/wiki/Synthetic%20substance | A synthetic substance or synthetic compound refers to a substance that is man-made by synthesis, rather than being produced by nature. It also refers to a substance or compound formed under human control by any chemical reaction, either by chemical synthesis (chemosyntesis) or by biosynthesis.
References
Chemical synthesis | Synthetic substance | [
"Chemistry"
] | 65 | [
"Chemical reaction stubs",
"nan",
"Chemical synthesis"
] |
26,184,833 | https://en.wikipedia.org/wiki/Interference%20reflection%20microscopy | Interference reflection microscopy (IRM), also called Reflection Interference Contrast Microscopy (RICM) or Reflection Contrast Microscopy (RCM) depending on the specific optical elements used, is an optical microscopy technique that leverages thin-film interference effects to form an image of an object on a glass surface. The intensity of the signal is a measure of proximity of the object to the glass surface. This technique can be used to study events at the cell membrane without the use of a (fluorescent) label as is the case for TIRF microscopy.
History and name
In 1964, Adam S. G. Curtis coined the term Interference Reflection Microscopy (IRM), using it in the field of cell biology to study embryonic chick heart fibroblasts. He used IRM to look at adhesion sites and distances of fibroblasts, noting that contact with the glass was mostly limited to the cell periphery and the pseudopodia.
In 1975, Johan Sebastiaan Ploem introduced an improvement to IRM (published in a book chapter), which he called Reflection Contrast Microscopy (RCM). The improvement is to use a so-called anti-flex objective and crossed polarizers to further reduce stray light in the optical system. Today, this scheme is mainly referred to as Reflection Interference Contrast Microscopy (RICM), the name of which was introduced by Bareiter-Hahn and Konrad Beck in 1979.
However, the term IRM is sometimes used to describe an RICM setup. The multiplicity of names used to describe the technique has caused some confusion, and was discussed as early as 1985 by Verschueren.
Theory
To form an image of the attached cell, light of a specific wavelength is passed through a polarizer. This linearly polarized light is reflected by a beam splitter towards the objective, which focuses the light on the specimen. The glass surface is reflective to a certain degree and will reflect the polarized light. Light that is not reflected by the glass travels into the cell and is reflected by the cell membrane. Three situations can occur. First, when the membrane is close to the glass, the light reflected from the membrane is shifted by about half a wavelength relative to the light reflected from the glass, so the two reflections are out of phase and cancel each other out (destructive interference). This interference results in a dark pixel in the final image (the left case in the figure). Second, when the membrane is not attached to the glass, the reflection from the membrane acquires a smaller phase shift relative to the reflection from the glass, so the two do not cancel each other, resulting in a bright pixel in the image (the right case in the figure). Third, when there is no specimen, only the light reflected from the glass is detected and appears as bright pixels in the final image.
The reflected light will travel back to the beam splitter and pass through a second polarizer, which eliminates scattered light, before reaching the detector (usually a CCD camera) in order to form the final picture. The polarizers can increase the efficiency by reducing scattered light; however in a modern setup with a sensitive digital camera, they are not required.
Theory
Reflection is caused by a change in the refractive index, so at every boundary a part of the light will be reflected. The amount of reflection is given by the reflection coefficient $r_{12}$, according to the following rule (Fresnel's equation at normal incidence):
$$r_{12} = \frac{n_1 - n_2}{n_1 + n_2}$$
Reflectivity is the ratio of the reflected light intensity ($I_r$) to the incoming light intensity ($I_i$):
$$R = \frac{I_r}{I_i} = r_{12}^2$$
Using typical refractive indices for glass (1.50–1.54, see list), water (1.31, see list), the cell membrane (1.48) and the cytosol (1.35), one can calculate the fraction of light being reflected by each interface. The amount of reflection increases as the difference between refractive indices increases, resulting in a large reflection from the interface between the glass surface and the culture medium (about equal to water: 1.31–1.33). This means that without a cell the image will be bright, whereas when the cell is attached, the difference between medium and the membrane causes a large reflection that is slightly shifted in phase, causing interference with the light reflected by the glass. Because the amplitude of the light reflected from the medium-membrane interface is decreased due to scattering, the attached area will appear darker but not completely black. Because the cone of light focused on the sample gives rise to different angles of incident light, there is a broad range of interference patterns. When the patterns differ by less than 1 wavelength (the zero-order fringe), the patterns converge, resulting in increased intensity. This can be obtained by using an objective with a numerical aperture greater than 1.
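A minimal sketch reproducing the normal-incidence reflectivities implied by the indices quoted above (1.52 is used as a representative value for glass, whose index varies between about 1.50 and 1.54):

```python
def reflectivity(n1, n2):
    """Fresnel reflectivity at normal incidence between media with
    refractive indices n1 and n2: R = ((n1 - n2) / (n1 + n2))**2."""
    r = (n1 - n2) / (n1 + n2)
    return r * r

# Refractive indices quoted in the text.
n_glass, n_medium, n_membrane, n_cytosol = 1.52, 1.33, 1.48, 1.35

# The glass/medium interface reflects the most, as the text states.
print("glass/medium:     %.4f" % reflectivity(n_glass, n_medium))
print("medium/membrane:  %.4f" % reflectivity(n_medium, n_membrane))
print("membrane/cytosol: %.4f" % reflectivity(n_membrane, n_cytosol))
```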
Requirements
In order to image cells using IRM, a microscope needs at least the following elements: 1) a light source, such as a halogen lamp, 2) an optical filter (which passes a small range of wavelengths), and 3) a beam splitter (which reflects 50% and transmits 50% of the chosen wavelength).
The light source needs to produce high intensity light, as a lot of light will be lost by the beam splitter and the sample itself. Different wavelengths result in different IRM images; Bereiter-Hahn and colleagues showed that for their PtK2 cells, light with a wavelength of 546 nm resulted in better contrast than blue light with a wavelength of 436 nm. There have been many refinements to the basic theory of IRM, most of which increase the efficiency and yield of the image formation. By placing polarizers and a quarter wave plate between the beam splitter and the specimen, the linear polarized light can be converted into circular polarized light and afterwards be converted back to linear polarized light, which increases the efficiency of the system. The circular polarizer article discusses this process in detail. Furthermore, by including a second polarizer, which is rotated 90° compared to the first polarizer, stray light can be prevented from reaching the detector, increasing the signal to noise ratio (see Figure 2 of Verschueren).
Biological applications
There are several ways IRM can be used to study biological samples. Early examples of uses of the technique focused on cell adhesion and cell migration.
Vesicle fusion
More recently, the technique has been used to study exocytosis in chromaffin cells. When imaged using DIC, chromaffin cells appear as round cells with small protrusions. When the same cell is imaged using IRM, the footprint of the cell on the glass can be clearly seen as a dark area with small protrusions. When vesicles fuse with the membrane, they appear as small light circles within the dark footprint (bright spots in the top cell in the right panel).
An example of vesicle fusion in chromaffin cells using IRM is shown in movie 1. Upon stimulation with 60 mM potassium, multiple bright spots begin to appear inside the dark footprint of the chromaffin cell as a result of exocytosis of dense core granules. Because IRM doesn't require a fluorescent label, it can be combined with other imaging techniques, such as epifluorescence and TIRF microscopy to study protein dynamics together with vesicle exocytosis and endocytosis. Another benefit of the lack of fluorescent labels is reduced phototoxicity.
References
Further reading
External links
Albert Einstein College of Medicine on IRM
Reflected confocal microscopy on Nikon MicroscopyU
Optical microscopy techniques
Articles containing video clips
Microscopy
Microscopes | Interference reflection microscopy | [
"Chemistry",
"Technology",
"Engineering"
] | 1,557 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
41,640,035 | https://en.wikipedia.org/wiki/Insect%20cell%20culture | The use of insect cell lines as production hosts is an emerging technology for the production of biopharmaceuticals. There are currently more than 100 insect cell lines available for recombinant protein production, with lines derived from Bombyx mori, Mamestra brassicae, Spodoptera frugiperda, Trichoplusia ni, and Drosophila melanogaster being of particular interest. Insect cell lines are commonly used in place of prokaryotic ones because post-translational modifications of proteins are possible in insect cells, whereas this mechanism is not present in prokaryotic systems. The Sf9 cell line is one of the most commonly used lines in insect cell culture.
References and notes
Cell culture techniques
Biopharmaceuticals | Insect cell culture | [
"Chemistry",
"Biology"
] | 152 | [
"Biochemistry methods",
"Pharmacology",
"Biotechnology products",
"Cell culture techniques",
"Biopharmaceuticals"
] |
41,641,313 | https://en.wikipedia.org/wiki/C8H7NO2 | The molecular formula C8H7NO2 (molar mass: 149.15 g/mol, exact mass: 149.0477 u) may refer to:
5,6-Dihydroxyindole
NAPQI, also known as N-acetyl-p-benzoquinone imine or NABPQI
β-Nitrostyrene
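For reference, the quoted molar mass can be recomputed from standard atomic weights; the small sketch below does exactly that (the IUPAC atomic weights are the only inputs not taken from the text):

```python
# Recompute the molar mass of C8H7NO2 from standard atomic weights.
weights = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol
formula = {"C": 8, "H": 7, "N": 1, "O": 2}
molar_mass = sum(weights[el] * n for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")  # 149.15, matching the value quoted above
```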
Molecular formulas | C8H7NO2 | [
"Physics",
"Chemistry"
] | 96 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
41,644,056 | https://en.wikipedia.org/wiki/Inductive%20programming | Inductive programming (IP) is a special area of automatic programming, covering research from artificial intelligence and programming, which addresses learning of typically declarative (logic or functional) and often recursive programs from incomplete specifications, such as input/output examples or constraints.
Depending on the programming language used, there are several kinds of inductive programming. Inductive functional programming, which uses functional programming languages such as Lisp or Haskell, and most especially inductive logic programming, which uses logic programming languages such as Prolog and other logical representations such as description logics, have been more prominent, but other (programming) language paradigms have also been used, such as constraint programming or probabilistic programming.
Definition
Inductive programming incorporates all approaches which are concerned with learning programs or algorithms from incomplete (formal) specifications. Possible inputs in an IP system are a set of training inputs and corresponding outputs or an output evaluation function, describing the desired behavior of the intended program, traces or action sequences which describe the process of calculating specific outputs, constraints for the program to be induced concerning its time efficiency or its complexity, various kinds of background knowledge such as standard data types, predefined functions to be used, program schemes or templates describing the data flow of the intended program, heuristics for guiding the search for a solution or other biases.
The output of an IP system is a program in some arbitrary programming language containing conditionals and loop or recursive control structures, or in any other kind of Turing-complete representation language.
In many applications the output program must be correct with respect to the examples and partial specification, and this leads to the consideration of inductive programming as a special area inside automatic programming or program synthesis, usually opposed to 'deductive' program synthesis, where the specification is usually complete.
In other cases, inductive programming is seen as a more general area where any declarative programming or representation language can be used and we may even have some degree of error in the examples, as in general machine learning, the more specific area of structure mining or the area of symbolic artificial intelligence. A distinctive feature is the number of examples or partial specification needed. Typically, inductive programming techniques can learn from just a few examples.
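To make this concrete, here is a minimal, hypothetical sketch that learns a function from a few input/output examples by enumerating a toy space of linear integer expressions; it illustrates the general generate-and-test approach rather than any particular IP system:

```python
# Enumerative inductive programming over a toy hypothesis space:
# candidate programs are f(x) = a*x + b for small integers a and b,
# and we return the first candidate consistent with all examples.
from itertools import product

EXAMPLES = [(1, 3), (2, 5), (5, 11)]  # consistent with f(x) = 2*x + 1

def synthesize(examples, coeff_range=range(-5, 6)):
    for a, b in product(coeff_range, repeat=2):
        if all(a * x + b == y for x, y in examples):
            return (lambda x, a=a, b=b: a * x + b), f"f(x) = {a}*x + {b}"
    return None, None  # no program in the hypothesis space fits

fn, description = synthesize(EXAMPLES)
print(description)  # f(x) = 2*x + 1
print(fn(10))       # 21: the learned program generalizes beyond the examples
```

Real systems replace this blind enumeration with the search strategies, background knowledge and biases described above, and with far richer hypothesis spaces that include recursion.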
The diversity of inductive programming usually comes from the applications and the languages that are used: apart from logic programming and functional programming, other programming paradigms and representation languages have been used or suggested in inductive programming, such as functional logic programming, constraint programming, probabilistic programming, abductive logic programming, modal logic, action languages, agent languages and many types of imperative languages.
History
Research on the inductive synthesis of recursive functional programs started in the early 1970s and was brought onto firm theoretical foundations with the seminal THESIS system of Summers and work of Biermann.
These approaches were split into two phases: first, input-output examples are transformed into non-recursive programs (traces) using a small set of basic operators; second, regularities in the traces are searched for and used to fold them into a recursive program. The main results until the mid-1980s are surveyed by Smith. Due to limited progress with respect to the range of programs that could be synthesized, research activities decreased significantly in the next decade.
The advent of logic programming brought a new elan but also a new direction in the early 1980s, especially due to the MIS system of Shapiro, eventually spawning the new field of inductive logic programming (ILP). The early works of Plotkin, and his "relative least general generalization (rlgg)", had an enormous impact in inductive logic programming. Most ILP work addresses a wider class of problems, as the focus is not only on recursive logic programs but on machine learning of symbolic hypotheses from logical representations. However, there were some encouraging results on learning recursive Prolog programs such as quicksort from examples together with suitable background knowledge, for example with GOLEM. But again, after initial success, the community grew disappointed by the limited progress on the induction of recursive programs, with ILP focusing less and less on recursive programs and leaning more and more towards a machine learning setting with applications in relational data mining and knowledge discovery.
In parallel to work in ILP, Koza proposed genetic programming in the early 1990s as a generate-and-test based approach to learning programs. The idea of genetic programming was further developed into the inductive programming system ADATE and the systematic-search-based system MagicHaskeller. Here again, functional programs are learned from sets of positive examples together with an output evaluation (fitness) function which specifies the desired input/output behavior of the program to be learned.
The early work in grammar induction (also known as grammatical inference) is related to inductive programming, as rewriting systems or logic programs can be used to represent production rules. In fact, early works in inductive inference considered grammar induction and Lisp program inference as basically the same problem. The results in terms of learnability were related to classical concepts, such as identification-in-the-limit, as introduced in the seminal work of Gold. More recently, the language learning problem was addressed by the inductive programming community.
In recent years, the classical approaches have been resumed and advanced with great success. The synthesis problem has been reformulated against the background of constructor-based term rewriting systems, taking into account modern techniques of functional programming, moderate use of search-based strategies, usage of background knowledge, and automatic invention of subprograms. Many new and successful applications have recently appeared beyond program synthesis, most especially in the area of data manipulation, programming by example and cognitive modelling (see below).
Other ideas have also been explored with the common characteristic of using declarative languages for the representation of hypotheses. For instance, the use of higher-order features, schemes or structured distances have been advocated for a better handling of recursive data types and structures; abstraction has also been explored as a more powerful approach to cumulative learning and function invention.
One powerful paradigm that has been recently used for the representation of hypotheses in inductive programming (generally in the form of generative models) is probabilistic programming (and related paradigms, such as stochastic logic programs and Bayesian logic programming).
Application areas
The first workshop on Approaches and Applications of Inductive Programming (AAIP) held in conjunction with ICML 2005 identified all applications where "learning of programs or recursive rules are called for, [...] first in the domain of software engineering where structural learning, software assistants and software agents can help to relieve programmers from routine tasks, give programming support for end users, or support of novice programmers and programming tutor systems. Further areas of application are language learning, learning recursive control rules for AI-planning, learning recursive concepts in web-mining or for data-format transformations".
Since then, these and many other areas have shown to be successful application niches for inductive programming, such as end-user programming, the related areas of programming by example and programming by demonstration, and intelligent tutoring systems.
Other areas where inductive inference has been recently applied are knowledge acquisition, artificial general intelligence, reinforcement learning and theory evaluation, and cognitive science in general. There may also be prospective applications in intelligent agents, games, robotics, personalisation, ambient intelligence and human interfaces.
See also
Evolutionary programming
Inductive reasoning
Test-driven development
References
Further reading
https://web.archive.org/web/20040906084947/http://www-ai.ijs.si/SasoDzeroski/ILPBook/
External links
Inductive Programming community page, hosted by the University of Bamberg.
Programming paradigms
Machine learning | Inductive programming | [
"Engineering"
] | 1,628 | [
"Artificial intelligence engineering",
"Machine learning"
] |
46,306,612 | https://en.wikipedia.org/wiki/Fidgetin-like%202 | Fidgetin-like 2 (FL2) is a human enzyme that slows the rate at which skin cells migrate to wounds to heal them. If this enzyme is suppressed/absent, skin cells move faster, speeding the healing process.
Delivery
Molecules of silencing RNA (siRNA) that bind to a gene's messenger RNA (mRNA) can inhibit the production of FL2, but siRNAs require protection from degradation in order to reach a wound site.
In 2015, researchers disclosed the successful use of nanoparticles to ferry siRNA molecules to their intended targets, reducing healing times in mice with skin excisions or burns. The result was normal, well-orchestrated tissue, including hair follicles and supportive collagen network.
References
External links
NCBI 401720
Uniprot A6NMB9
Mouse gene informatics: 3646919
Nanotechnology
Enzymes | Fidgetin-like 2 | [
"Materials_science",
"Engineering"
] | 183 | [
"Nanotechnology",
"Materials science"
] |
46,309,718 | https://en.wikipedia.org/wiki/Tantalum%28V%29%20iodide | Tantalum(V) iodide is the inorganic compound with the formula Ta2I10. Its name comes from the compound's empirical formula, TaI5. It is a diamagnetic, black solid that hydrolyses readily. The compound adopts an edge-shared bioctahedral structure, which means that two TaI5 units are joined by a pair of iodide bridges. There is no bond between the Ta centres. Niobium(V) chloride, niobium(V) bromide, niobium(V) iodide, tantalum(V) chloride, and tantalum(V) bromide all share this structural motif.
Synthesis and structure
Tantalum pentaiodide forms from the reaction of tantalum pentoxide with aluminium triiodide:
3 Ta2O5 + 10 AlI3 → 6 TaI5 + 5 Al2O3
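As a quick sanity check, the following short sketch verifies that the equation above is atom-balanced:

```python
# Verify the atom balance of: 3 Ta2O5 + 10 AlI3 -> 6 TaI5 + 5 Al2O3
from collections import Counter

def atoms(composition, coefficient):
    return Counter({el: n * coefficient for el, n in composition.items()})

lhs = atoms({"Ta": 2, "O": 5}, 3) + atoms({"Al": 1, "I": 3}, 10)
rhs = atoms({"Ta": 1, "I": 5}, 6) + atoms({"Al": 2, "O": 3}, 5)
print(lhs == rhs)  # True: 6 Ta, 15 O, 10 Al and 30 I on each side
```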
References
Iodides
Tantalum(V) compounds
Metal halides | Tantalum(V) iodide | [
"Chemistry"
] | 204 | [
"Inorganic compounds",
"Metal halides",
"Salts"
] |
4,024,861 | https://en.wikipedia.org/wiki/Transpiration%20stream | In plants, the transpiration stream is the uninterrupted stream of water and solutes which is taken up by the roots and transported via the xylem to the leaves where it evaporates into the air/apoplast-interface of the substomatal cavity. It is driven by capillary action and in some plants by root pressure. The main driving factor is the difference in water potential between the soil and the substomatal cavity caused by transpiration.
Transpiration
Transpiration can be regulated through stomatal closure or opening. It allows for plants to efficiently transport water up to their highest body organs, regulate the temperature of stem and leaves and it allows for upstream signaling such as the dispersal of an apoplastic alkalinization during local oxidative stress.
Summary of water movement:
Soil
Roots and Root Hair
Xylem
Leaves
Stomata
Air
Osmosis
The water passes from the soil to the root by osmosis. The long and thin shape of root hairs maximizes surface area so that more water can enter. There is greater water potential in the soil than in the cytoplasm of the root hair cells. As the surface membrane of the root hair cell is semi-permeable, osmosis can take place, and water passes from the soil to the root hairs.
The next stage in the transpiration stream is water passing into the xylem vessels. The water either goes through the cortex cells (between the root cells and the xylem vessels) or it bypasses them – going through their cell walls.
After this, the water moves up the xylem vessels to the leaves. Diffusion takes place because there is a water potential gradient between water in the xylem vessel and the leaf (as water is transpiring out of the leaf), which means that water diffuses up the leaf. There is also a pressure difference between the top and bottom of the xylem vessels, due to water loss from the leaves: this reduces the pressure of water at the top of the vessels, so water moves up the vessels.
The last stage in the transpiration stream is the water moving into the leaves, and then the actual transpiration. First, the water moves into the mesophyll cells from the top of the xylem vessels. Then the water evaporates out of the cells into the spaces between the cells in the leaf. After this, the water leaves the leaf (and the whole plant) by diffusion through stomata.
See also
Soil plant atmosphere continuum for modelling plant transpiration.
References
Plant physiology | Transpiration stream | [
"Biology"
] | 636 | [
"Plant physiology",
"Plants"
] |
4,025,205 | https://en.wikipedia.org/wiki/Systemic%20acquired%20resistance | Systemic acquired resistance (SAR) is a "whole-plant" resistance response that occurs following an earlier localized exposure to a pathogen. SAR is analogous to the innate immune system found in animals, and although there are many shared aspects between the two systems, it is thought to be a result of convergent evolution. The systemic acquired resistance response is dependent on the plant hormone, salicylic acid.
Discovery
While, it has been recognized since at least the 1930s that plants have some kind of induced immunity to pathogens, the modern study of systemic acquired resistance began in the 1980s when the invention of new tools allowed scientists to probe the molecular mechanisms of SAR. A number of 'marker genes' were characterized in the 1980s and 1990s which are strongly induced as part of the SAR response. These pathogenesis-related proteins (PR) belong to a number of different protein families. While there is substantial overlap, the spectrum of PR proteins expressed in a particular plant species is variable. It was noticed in the early 1990s that levels of salicylic acid (SA) increased dramatically in tobacco and cucumber upon infection. This pattern has been replicated in many other species since then. Further studies showed that SAR can also be induced by exogenous SA application and that transgenic Arabidopsis plants expressing a bacterial salicylate hydroxylase gene are unable to accumulate SA or mount an appropriate defensive response to a variety of pathogens.
The first plant receptors of conserved microbial signatures were identified in rice (XA21, 1995) and in Arabidopsis (FLS2, 2000).
Mechanism
Plants have several immunity mechanisms to deal with infections and stress. When they are infected with pathogens, the immune system recognizes conserved microbial signatures called pathogen-associated molecular patterns (PAMPs) via pattern recognition receptors (PRRs). This induces PAMP-triggered immunity (PTI). Some pathogens carry effectors that suppress PTI in the plant and induce effector-triggered susceptibility (ETS). In response, plants evolve resistance (R) genes that encode proteins capable of recognizing the newly developed pathogen effectors, resulting in what is called effector-triggered immunity (ETI). ETI often results in a form of programmed cell death (PCD), called the hypersensitive response (HR). Pathogens can then evolve and develop new effectors to overcome ETI, to which plants can respond by developing new R genes capable of recognizing the pathogen effector, thereby providing a new ETI. When PTI and ETI are activated in the local infected plant tissues, a signaling cascade induces an immune response throughout the whole plant. This "whole plant" immune response is called systemic acquired resistance (SAR). SAR is characterized by accumulation of plant metabolites and genetic reprogramming both locally and systemically (in surrounding tissues that were not infected). Salicylic acid (SA) and N-hydroxypipecolic acid (NHP) are two metabolites that have been shown to accumulate during SAR. Plants with reduced or no production of SA and Pip (a precursor to NHP) have been shown to exhibit a reduced or absent SAR response following infection.
Use in disease control
Unusually, the synthetic fungicide acibenzolar-S-methyl is not directly toxic to pathogens, but rather acts by inducing SAR in the crop plants to which it is applied. It is a propesticide — converted in-vivo into 1,2,3-benzothiadiazole-7-carboxylic acid by methyl salicylate esterase. Field trials have found that acibenzolar-S-methyl (also known as BSA) is effective at controlling some plant diseases, but may have little effect on others, especially fungal pathogens which may not be very susceptible to SAR.
See also
Plant disease resistance
Hypersensitive response
Phytopathology
Plant-induced systemic resistance
References
Further reading
External links
Phytopathology
Plant physiology
Immune system | Systemic acquired resistance | [
"Biology"
] | 823 | [
"Immune system",
"Organ systems",
"Plant physiology",
"Plants"
] |
4,026,007 | https://en.wikipedia.org/wiki/Electroluminescent%20display | Electroluminescent displays (ELDs) are a type of flat panel display created by sandwiching a layer of electroluminescent material such as gallium arsenide between two layers of conductors. When current flows, the layer of material emits radiation in the form of visible light. Electroluminescence (EL) is an optical and electrical phenomenon where a material emits light in response to an electric current passed through it, or to a strong electric field. The term "electroluminescent display" describes displays that use neither LED nor OLED devices, but instead use traditional electroluminescent materials. Beneq is the only manufacturer of TFEL (Thin Film Electroluminescent Display) and TAESL displays, which are branded as LUMINEQ Displays. The structure of a TFEL is similar to that of a passive matrix LCD or OLED display, and TAESL displays are essentially transparent TFEL displays with transparent electrodes. TAESL displays can have a transparency of 80%. Both TFEL and TAESL displays use chip-on-glass technology, which mounts the display driver IC directly on one of the edges of the display. TAESL displays can be embedded onto glass sheets. Unlike LCDs, TFELs are much more rugged and can operate at temperatures from −60 to 105 °C and, unlike OLEDs, TFELs can operate for 100,000 hours without considerable burn-in, retaining about 85% of their initial brightness. The electroluminescent material is deposited using atomic layer deposition, which is a process that deposits one atomic layer at a time.
Mechanism
EL works by exciting atoms by passing an electric current through them, causing them to emit photons. By varying the material being excited, the colour of the light emitted can be changed. The actual ELD is constructed using flat, opaque electrode strips running parallel to each other, covered by a layer of electroluminescent material, followed by another layer of electrodes, running perpendicular to the bottom layer. This top layer must be transparent in order to let light escape. At each intersection, the material lights, creating a pixel.
Uses
Electroluminescent (EL) displays have been a niche format and are rarely used nowadays. Some uses have included the Apollo Guidance Computer 7-segment numerical displays, to indicate speed and altitude at the front of the Concorde, and as floor indicators on Otis Elevators from around 1989 to 2007, mostly only available to high-rise buildings and modernizations. EL displays have a wider operating temperature range than LED displays.
Abbreviations
AMEL: Active matrix electroluminescence
TFEL: Thin film electroluminescence
TDEL: Thick dielectric electroluminescence
See also
Electroluminescence
History of display technology
Thick-film dielectric electroluminescent technology
References
Electrical phenomena
Luminescence
Lighting
Display technology | Electroluminescent display | [
"Physics",
"Chemistry",
"Engineering"
] | 586 | [
"Physical phenomena",
"Luminescence",
"Molecular physics",
"Electronic engineering",
"Electrical phenomena",
"Display technology"
] |
4,026,223 | https://en.wikipedia.org/wiki/Persistence%20length | The persistence length is a basic mechanical property quantifying the bending stiffness of a polymer.
The molecule behaves like a flexible elastic rod/beam (beam theory). Informally, for pieces of the polymer that are shorter than the persistence length, the molecule behaves like a rigid rod, while for pieces of the polymer that are much longer than the persistence length, the properties can only be described statistically, like a three-dimensional random walk.
Formally, the persistence length, P, is defined as the length over which correlations in the direction of the tangent are lost. In a more chemical based manner it can also be defined as the average sum of the projections of all bonds j ≥ i on bond i in an infinitely long chain.
Let us define the angle θ between a vector that is tangent to the polymer at position 0 (zero) and a tangent vector at a distance L away from position 0, along the contour of the chain. It can be shown that the expectation value of the cosine of the angle falls off exponentially with distance,
⟨cos θ⟩ = exp(−L / P)
where P is the persistence length and the angled brackets denote the average over all starting positions.
The persistence length is considered to be one half of the Kuhn length, the length of hypothetical segments that the chain can be considered as freely joined. The persistence length equals the average projection of the end-to-end vector on the tangent to the chain contour at a chain end in the limit of infinite chain length.
The persistence length can also be expressed using the bending stiffness Bs, the Young's modulus E and the second moment of area I of the polymer chain's cross-section:
P = Bs / (kBT) = E·I / (kBT)
where kB is the Boltzmann constant and T is the temperature.
In the case of a rigid and uniform rod, I can be expressed as:
I = π·a⁴ / 4
where a is the radius.
For charged polymers the persistence length depends on the surrounding salt concentration due to electrostatic screening. The persistence length of a charged polymer is described by the OSF (Odijk, Skolnick and Fixman) model.
Examples
For example, a piece of uncooked spaghetti has a persistence length on the order of 10^18 m (taking into consideration a Young's modulus of 5 GPa and a radius of 1 mm). Double-helical DNA has a persistence length of about 390 ångströms. Such a large persistence length for spaghetti does not mean that it is not flexible. It just means that its stiffness is such that it would need on the order of 10^18 m of length for thermal fluctuations at 300 K to bend it.
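The spaghetti figure can be checked with a few lines of Python, using the rod formulas above and the values quoted in the text (the temperature of 300 K is also taken from the text):

```python
# A quick check of the spaghetti estimate above, treating the strand as a
# rigid uniform rod: P = E*I / (kB*T) with I = pi*a^4/4.
import math

E = 5e9              # Young's modulus, Pa (value from the text)
a = 1e-3             # rod radius, m (value from the text)
kB = 1.380649e-23    # Boltzmann constant, J/K
T = 300              # temperature, K

I = math.pi * a**4 / 4    # second moment of area of a solid cylinder
P = E * I / (kB * T)      # persistence length
print(f"P = {P:.2e} m")   # ~9.5e+17 m, i.e. on the order of 10^18 m
```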
Another example:
Imagine a long cord that is slightly flexible. At short distance scales, the cord will basically be rigid. If you look at the direction the cord is pointing at two points that are very close together, the cord will likely be pointing in the same direction at those two points (i.e. the angles of the tangent vectors are highly correlated). If you choose two points on this flexible cord (imagine a piece of cooked spaghetti that you've just tossed on your plate) that are very far apart, however, the tangent to the cords at those locations will likely be pointing in different directions (i.e. the angles will be uncorrelated). If you plot out how correlated the tangent angles at two different points are as a function of the distance between the two points, you'll get a plot that starts out at 1 (perfect correlation) at a distance of zero and drops exponentially as distance increases. The persistence length is the characteristic length scale of that exponential decay.
For the case of a single molecule of DNA the persistence length can be measured using optical tweezers and atomic force microscopy.
Tools for measurement of persistence length
Persistence length measurement of single-stranded DNA is viable by various tools. Most of them have been done by incorporation of the worm-like chain model. For example, the two ends of single-stranded DNA were tagged by donor and acceptor dyes to measure the average end-to-end distance, which is represented as the FRET efficiency. This was converted to persistence length by comparing the measured FRET efficiency with the FRET efficiency calculated from models such as the worm-like chain model. A more recent attempt to obtain the persistence length combines fluorescence correlation spectroscopy (FCS) with the HYDRO program. The HYDRO program can be regarded as a generalization of the Stokes–Einstein equation: the Stokes–Einstein equation calculates the diffusion coefficient (which is inversely proportional to the diffusion time) by assuming the molecules are pure spheres, whereas the HYDRO program has no limitation regarding the shape of the molecule. To estimate the persistence length of single-stranded DNA, the diffusion times of a number of worm-like chain polymers were generated, and their diffusion times were calculated by the HYDRO program and compared with the experimental diffusion time from FCS. The polymer properties were adjusted to find the optimal persistence length.
See also
Polymer
Worm-like chain
Freely jointed chain
Kuhn length
Paul Flory
References
Physical quantities
Polymer physics | Persistence length | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 986 | [
"Polymer physics",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Polymer chemistry",
"Physical properties"
] |
4,027,702 | https://en.wikipedia.org/wiki/Stellar%20drift | Stellar drift, or the motion of stars, is a necessary result of the lack of an absolute reference frame in special relativity.
Nothing in space stands still—more precisely, standing still is meaningless without defining what "still" means. Most galaxies have been moving away ever since the Big Bang, in connection with the expansion of the universe. Galaxy motion is also influenced by galaxy groups and clusters. Stars orbit moving galaxies, and they also orbit moving star clusters and companion stars. Planets orbit their moving stars.
Stellar drift is measured by two components: proper motion (multiplied by distance) and radial velocity. Proper motion is a star's motion across the sky, slowly changing the shapes of constellations over thousands of years. It can be measured using a telescope to detect small movements over long periods of time. Radial velocity is how fast a star approaches or recedes from us. It is measured using redshift. Both components are complicated by the Earth's orbit around the Sun, so the motions of stars are described relative to the Sun, not the Earth (kinematics of stars).
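As a sketch of how the two components combine into a space velocity, the snippet below uses the standard conversion v_t = 4.74·μ·d (μ in arcseconds per year, d in parsecs, v_t in km/s); the example star's numbers are invented for illustration:

```python
# Combine proper motion and radial velocity into a total space velocity.
# All input values below are hypothetical, for illustration only.
import math

mu = 0.1        # proper motion, arcsec/year
d = 10.0        # distance, parsecs
v_r = 20.0      # radial velocity, km/s (positive = receding)

v_t = 4.74 * mu * d             # tangential velocity, km/s
v_space = math.hypot(v_t, v_r)  # total space velocity, km/s
print(f"v_t = {v_t:.2f} km/s, total = {v_space:.2f} km/s")
```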
See also
Gravitational wave
Drift
Concepts in stellar astronomy | Stellar drift | [
"Physics",
"Astronomy"
] | 230 | [
"Concepts in astrophysics",
"Astronomy stubs",
"Stellar astronomy stubs",
"Concepts in stellar astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
4,027,813 | https://en.wikipedia.org/wiki/Planar%20ternary%20ring | In mathematics, an algebraic structure (R, T) consisting of a non-empty set R and a ternary mapping T : R³ → R may be called a ternary system. A planar ternary ring (PTR) or ternary field is a special type of ternary system used by Marshall Hall to construct projective planes by means of coordinates. A planar ternary ring is not a ring in the traditional sense, but any field gives a planar ternary ring where the operation is defined by T(a, b, c) = a·b + c. Thus, we can think of a planar ternary ring as a generalization of a field where the ternary operation takes the place of both addition and multiplication.
There is wide variation in the terminology. Planar ternary rings or ternary fields as defined here have been called by other names in the literature, and the term "planar ternary ring" can mean a variant of the system defined here. The term "ternary ring" often means a planar ternary ring, but it can also simply mean a ternary system.
Definition
A planar ternary ring is a structure (R, T) where R is a set containing at least two distinct elements, called 0 and 1, and T : R³ → R is a mapping which satisfies these five axioms:
T(a, 0, b) = T(0, a, b) = b, for all a, b ∈ R;
T(1, a, 0) = T(a, 1, 0) = a, for all a ∈ R;
for all a, b, c, d ∈ R with a ≠ c, there is a unique x ∈ R such that T(x, a, b) = T(x, c, d);
for all a, b, c ∈ R, there is a unique x ∈ R such that T(a, b, x) = c; and
for all a, b, c, d ∈ R with a ≠ c, the equations T(a, x, y) = b and T(c, x, y) = d have a unique solution (x, y) ∈ R².
When R is finite, the third and fifth axioms are equivalent in the presence of the fourth.
No other pair (0′, 1′) in R² can be found such that T still satisfies the first two axioms.
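To illustrate the definition, this brute-force sketch checks the five axioms for the linear ternary operation T(x, m, b) = x·m + b over the three-element field GF(3), the construction mentioned in the lead where a field yields a PTR:

```python
# Brute-force verification of the five PTR axioms for T(x, m, b) = x*m + b
# over GF(3). The axiom numbering follows the definition above.
from itertools import product

R = range(3)

def T(x, m, b):
    return (x * m + b) % 3

def unique(solutions):
    return len(solutions) == 1

axiom1 = all(T(a, 0, b) == b and T(0, a, b) == b for a, b in product(R, R))
axiom2 = all(T(1, a, 0) == a and T(a, 1, 0) == a for a in R)
axiom3 = all(unique([x for x in R if T(x, a, b) == T(x, c, d)])
             for a, b, c, d in product(R, R, R, R) if a != c)
axiom4 = all(unique([x for x in R if T(a, b, x) == c])
             for a, b, c in product(R, R, R))
axiom5 = all(unique([(x, y) for x in R for y in R
                     if T(a, x, y) == b and T(c, x, y) == d])
             for a, b, c, d in product(R, R, R, R) if a != c)
print(all([axiom1, axiom2, axiom3, axiom4, axiom5]))  # True
```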
Binary operations
Addition
Define a ⊕ b = T(a, 1, b). The structure (R, ⊕) is a loop with identity element 0.
Multiplication
Define a ⊗ b = T(a, b, 0). The set R* = R ∖ {0} is closed under this multiplication. The structure (R*, ⊗) is also a loop, with identity element 1.
Linear PTR
A planar ternary ring (R, T) is said to be linear if T(a, b, c) = (a ⊗ b) ⊕ c.
For example, the planar ternary ring associated to a quasifield is (by construction) linear.
Connection with projective planes
Given a planar ternary ring (R, T), one can construct a projective plane with point set P and line set L as follows: (Note that ∞ is an extra symbol not in R.)
Let
P = {(a, b) : a, b ∈ R} ∪ {(a) : a ∈ R} ∪ {(∞)}, and
L = {[a, b] : a, b ∈ R} ∪ {[a] : a ∈ R} ∪ {[∞]}.
Then define, for all a, b, c, d ∈ R, the incidence relation I in this way:
((a, b), [c, d]) ∈ I if and only if b = T(a, c, d),
((a, b), [c]) ∈ I if and only if a = c,
((a, b), [∞]) ∉ I,
((a), [c, d]) ∈ I if and only if a = c,
((a), [c]) ∉ I,
((a), [∞]) ∈ I,
((∞), [c, d]) ∉ I,
((∞), [c]) ∈ I,
((∞), [∞]) ∈ I.
Every projective plane can be constructed in this way, starting with an appropriate planar ternary ring. However, two nonisomorphic planar ternary rings can lead to the construction of isomorphic projective planes.
Conversely, given any projective plane π, by choosing four points, labelled o, e, u, and v, no three of which lie on the same line, coordinates can be introduced in π so that these special points are given the coordinates: o = (0,0), e = (1,1), v = (∞) and u = (0). The ternary operation is now defined on the coordinate symbols (except ∞) by y = T(x,a,b) if and only if the point (x,y) lies on the line which joins (a) with (0,b). The axioms defining a projective plane are used to show that this gives a planar ternary ring.
Linearity of the PTR is equivalent to a geometric condition holding in the associated projective plane.
Intuition
The connection between planar ternary rings (PTRs) and two-dimensional geometries, specifically projective and affine geometries, is best described by examining the affine case first. In affine geometry, points on a plane are described using Cartesian coordinates, a method that is applicable even in non-Desarguesian geometries — there, coordinate-components can always be shown to obey the structure of a PTR. By contrast, homogeneous coordinates, typically used in projective geometry, are unavailable in non-Desarguesian contexts. Thus, the simplest analytic way to construct a projective plane is to start with an affine plane and extend it by adding a "line at infinity"; this bypasses homogeneous coordinates.
In an affine plane, when the plane is Desarguesian, lines can be represented in slope-intercept form y = mx + b. This representation extends to non-Desarguesian planes through the ternary operation of a PTR, allowing a line to be expressed as y = T(x, m, b). Lines parallel to the y-axis are expressed by x = c.
We now show how to derive the analytic representation of a general projective plane given at the start of this section. To do so, we pass from the affine plane, represented as A², to a representation of the projective plane P², by adding a line at infinity ℓ∞. Formally, the projective plane is described as P² = A² ∪ ℓ∞, where A² represents an affine plane in Cartesian coordinates and includes all finite points, while ℓ∞ denotes the line at infinity. Similarly, ℓ∞ is expressed as ℓ∞ = A¹ ∪ {∞}. Here, A¹ is an affine line which we give its own Cartesian coordinate system, and {∞} consists of a single point not lying on that affine line, which we represent using the symbol ∞.
Related algebraic structures
PTR's which satisfy additional algebraic conditions are given other names. These names are not uniformly applied in the literature. The following listing of names and properties is taken from .
A linear PTR whose additive loop is associative (and thus a group) is called a cartesian group. In a cartesian group, the mappings
x ↦ −x⋅a + x⋅b, and
x ↦ a⋅x − b⋅x
must be permutations whenever a ≠ b. Since cartesian groups are groups under addition, we revert to using a simple "+" for the additive operation.
A quasifield is a cartesian group satisfying the right distributive law:
(a + b)⋅c = a⋅c + b⋅c.
Addition in any quasifield is commutative.
A semifield is a quasifield which also satisfies the left distributive law:
a⋅(b + c) = a⋅b + a⋅c.
A planar nearfield is a quasifield whose multiplicative loop is associative (and hence a group). Not all nearfields are planar nearfields.
Notes
References
Algebraic structures
Projective geometry | Planar ternary ring | [
"Mathematics"
] | 1,200 | [
"Mathematical structures",
"Mathematical objects",
"Algebraic structures"
] |
4,028,727 | https://en.wikipedia.org/wiki/Inductive%20pump | An Inductive pump is a magnetically regulated positive displacement pump used to pump liquids and gases. It is capable of handling many corrosive chemicals as well as solvents and gases. It is characterized by a single piston that reciprocates within a magnetic field and therefore doesn’t require a dynamic seal to link the piston to an outside mechanical power source. Check valves are placed at both ends of the piston housing allowing the simultaneous suctioning and pumping that reverses with each stroke. This is known to reduce pulsations especially at higher flow rates. The piston and housing are constructed of materials that are inert to many liquids and gasses. Because the piston and housing are non-plastic materials the positive displacement chamber does not change in dimension from flexing and distortion thus allowing inductive pumps to remain very accurate with no significant changes over time. Inductive pumps are extremely accurate as each stroke contains the same volume created by a solid piston inside a solid chamber. The number of strokes can be counted or timed to determine the total volume delivered. They can be used in sterile and controlled environments as they will not leak to the outside of the housing even if the piston has experienced wear.
Efficiency
Inductive pumps are considered highly accurate and energy efficient. Inductive pumps use two primary parameters to control flow, they are Rate and Dwell. Rate is used to determine the number of strokes per second or in any given time interval. Dwell is used to control the length of time the energizing coil remains on during the Rate cycle. Essentially if the piston has completed its stroke and is waiting for the reverse cycle to occur, there is no need to continue energizing the coil as most of this energy will be converted to heat as no more work is being done by the piston. The Dwell setting allows adjustment of this ON time during the rate cycle. Also the Dwell setting allows for a true pressure control parameter for the pump. By reducing the Dwell time even further one can reduce the total energy applied to the piston during the pumping cycle. This can reduce the maximum output pressure during pumping. This differs from many other pumps as they commonly reduce flow to reduce pressure in a given circumstance, however if an occlusion occurs to the output channel other pumps tend to build up to their maximum pressure until they either burst the tubing or damage their internal mechanism. Inductive pumps can be shut off at the outlet and will not exceed the pressure they are set at. Pumping against a closed output does not cause damage to the pump.
History
The Inductive Pump was first patented in the United States by Laurence R. Salamey in 1998 (U.S. patent number 5,713,728) and again in 1999 (U.S. patent number 5,899,672). An additional patent was filed by Salamey in 2014. The pump was originally designed as an improvement to peristaltic and diaphragm pumps, as they were susceptible to fracturing of the pumping chamber with use due to their flexing of plastic parts. Inductive pumps were found to be an improvement in accuracy and length of service before repairs were required. Over time Salamey continued to develop his understanding of magnetic fields and their use for propagation of force with the inductive pump. This has led to further refinements and increased efficiency. Additionally, inductive pumps have developed the ability to achieve much higher pressures in excess of 3,000 psi. The same inductive pump technology can be applied to very small pumps delivering volumes in the micro-liter range to much larger pumps delivering volumes in the 10 gallon per minute range. Understanding of magnetic field propagation has led to increased design simplicity which is a hallmark of inductive pumps. There are very few moving parts and no mechanical linkages. The piston is the only moving part aside from the check valves and it is driven by an electrically controlled magnetic field.
Applications
Inductive pumps have been used in many different applications such as the following:
Industrial chemical feed systems
Water Treatment chemical injection process
Oil bearing lubrication of industrial pump and motor bearings (Block and Budris, 2004)
Automotive pumping systems i.e. fuel pumps, vacuum pumps, exhaust treatment pumps etc.
Micro-liter disbursement of flavoring in food manufacturing
High Pressure injection of chemicals into oil and gas transfer lines
Industrial waste water treatment before discharge
Industrial laundry chemical feed systems
Sub-oceanic in situ mass spectroscopy environmental testing
Environmental sampling and chemical treatment dosing
Important design characteristics
Inductive pumps use both sides of the piston to pump and suction simultaneously. This means that both sides of the pump piston are always experiencing the inlet pressure at a minimum until the pressure cycle that would exceed the inlet pressure. This may be interpreted as meaning the net head pressure in a closed circuit, at the beginning of a stroke cycle, is always zero. Therefore, inductive pumps may be used in very high pressure closed circuits to circulate liquids at very low differential pressures. Essentially the inductive pump does not have to overcome the closed system pressure in order to move liquid in the system. This results in far less use of energy to move liquid with the circuit. This also provides additional circulation without any dynamic seals that could eventually leak to the outside of the system.
Additionally inductive pumps may also be connected in series to approximately double the pressure while not increasing the volume. They may also be connected in parallel to approximately double the volume while not increasing the pressure. Most positive displacement pumps cannot increase output pressure when placed in series as they both stop when they reach their max operating pressure. The inductive pumps add to each other due to the zero differential seen on the second pump from the first pump.
Technology
The fundamental basis for induced voltage in a magnetic field comes from Faraday's law describing an induced electromotive force (EMF) as follows:
EMF = −N (ΔΦB / Δt)
(Nave, C. R. 2011).
This states that as the number of magnetic flux lines increase or decrease there is a subsequent change in induced voltage of negative or positive polarity. However the relationship of electric forces and magnetic forces were summed up in the Lorentz Force Law as:
F = qE + qv × B.
Here, all three forces were found to be perpendicular to each other (Nave, a, 2011). Thus Lorentz gave a specially oriented direction to each of the forces, allowing prediction of the direction of forces within the inductive pump architecture. Salamey further investigated the relationship of magnetic flux to circumferential area about the magnetic field where most of the magnetic forces were found to create mechanical forces used to direct the motion of the piston. Salamey further describes, in his second patent, the incorporation of a magnetic field gap. The gap is defined as a region of non-magnetic conduction circumferentially located at either end of the piston bore. The magnetic gap allows for increased propagation of magnetic flux through the magnetic piston body causing an increased force pulling the piston towards the magnetic end-pole (Salamey, 1999).
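For concreteness, here is a small numeric sketch of the two laws quoted above; every number in it is an illustrative assumption rather than a value from any inductive-pump design:

```python
# Numeric sketch of Faraday's induced EMF and the Lorentz force law.
# All values below are hypothetical, chosen only for illustration.
import numpy as np

# Faraday: EMF = -N * dPhi/dt
N = 200                      # turns in the energizing coil (assumed)
dPhi, dt = 2e-4, 1e-3        # flux change (Wb) over time interval (s)
emf = -N * dPhi / dt
print(f"induced EMF = {emf:.1f} V")   # -40.0 V

# Lorentz: F = qE + q v x B
q = 1.6e-19                          # charge, C
E = np.array([0.0, 0.0, 1e3])        # electric field, V/m
v = np.array([1e4, 0.0, 0.0])        # velocity, m/s
B = np.array([0.0, 0.5, 0.0])        # magnetic flux density, T
F = q * E + q * np.cross(v, B)
print(F)  # the magnetic term is perpendicular to both v and B
```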
Efficiencies
Inductive pumps are designed for increased efficiency and were intended to reduce energy consumption in an environment that is increasingly demanding energy conservation. Most electric motors are, on average, about 85% efficient as evidenced by the usual stall test that shows a marked increase in current draw when the motor is stopped mechanically. Inductive pumps show no increase in current draw when stalled during operation as better than 95% of the current is being used to create a force on the piston.
There are very few mechanical losses compared to conventional piston pumps and other technologies because there are no mechanical linkages between the piston and outside power sources. The inductive pump piston is driven directly by the magnetic field formed within the body structure about the bore and within the piston. There are minimal friction losses between the piston and bore due to a circumferential magnetic field that pulls the piston equally in all directions towards the wall of the bore. The resulting force is more axial along the path of the piston creating output pressure. Most other pumps use different types of gear reduction mechanisms to slow the motor rotation when driving the piston. These linkages result in significant energy losses in addition to the inefficiencies of the motor.
Inductive pumps use various proprietary coatings to reduce friction drag and increase efficiency. Specific models of inductive pumps incorporate a seal-less ceramic interface with matching ceramic bore and piston interfaces ground to close tolerances that do not require use of elastic seals. Ceramic interfaces are inert to extremely caustic industrial acids, alkalis, and solvents.
References
Block, H. & Budris, A. (2004) Pump user’s handbook: life extension. Lilburn, GA: The Fairmont Press, Inc.
Nave, C. R. "Faraday's Law". HyperPhysics. Georgia State University. Retrieved 19 August 2014.
Nave, C. R. (a) “Lorentz Force Law” HyperPhysics. Georgia State University. Retrieved 19 August 2014.
Salamey, L. (1999). U.S. Patent No. 5,899,672. Washington, DC: U.S. Patent and Trademark Office.
Whelan, P. M., & Hodgson, M. J. (1978). Essential Principles of Physics (2nd ed.). John Murray.
Pumps | Inductive pump | [
"Physics",
"Chemistry"
] | 1,924 | [
"Pumps",
"Hydraulics",
"Physical systems",
"Turbomachinery"
] |
4,030,816 | https://en.wikipedia.org/wiki/Resistor%20ladder | A resistor ladder is an electrical circuit made from repeating units of resistors, in specific configurations.
An R–2R ladder configuration is a simple and inexpensive way to perform digital-to-analog conversion (DAC), using repetitive arrangements of precise resistor networks in a ladder-like configuration.
History
A 1953 paper "Coding by Feedback Methods" describes "decoding networks" that convert numbers (in any base) represented by voltage sources or current sources connected to resistor networks in a "shunt resistor decoding network" (which in base 2 corresponds to the binary-weighted configuration) or in a "ladder resistor decoding network" (which in base 2 corresponds to R–2R configuration) into a single voltage output. The paper gives an advantage of R–2R that impedances seen by the sources are more equal.
Another historic description is in US Patent 3108266, filed in 1955, "Signal Conversion Apparatus".
Resistor string network
A string of many resistors connected between two reference voltages is called a "resistor string". The resistors act as voltage dividers between the referenced voltages. A Kelvin divider or string DAC is a string of equal valued resistors.
Analog-to-digital conversion
Each tap of the string generates a different voltage, which can be compared with another voltage: this is the basic principle of a flash ADC (analog-to-digital converter). The main disadvantage is that this architecture requires comparators, one for each resistor; and this number cannot be reduced by using an R-2R network because such a network would not have separate outputs for each voltage.
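A minimal sketch of this principle, assuming an ideal equal-valued resistor string and ideal comparators (the function and its parameters are illustrative, not from any specific part):

```python
# Ideal N-bit flash ADC: 2**N - 1 comparators, each comparing Vin
# against one tap of an equal-valued resistor string.
def flash_adc(vin: float, vref: float = 3.3, n_bits: int = 3) -> int:
    """Return the output code of an ideal N-bit flash ADC."""
    n_comparators = 2 ** n_bits - 1
    # Tap k of the string sits at vref * k / 2**n_bits.
    taps = [vref * k / 2 ** n_bits for k in range(1, n_comparators + 1)]
    # The thermometer code is the number of comparators reading "high".
    return sum(vin > t for t in taps)

print(flash_adc(0.0))   # 0
print(flash_adc(1.7))   # 4: just above mid-scale
print(flash_adc(3.3))   # 7: full scale
```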
Digital-to-analog conversion
A resistor string can function as a DAC by having the bits of the binary number control electronic switches connected to each tap.
Binary weighted
The binary weighted configuration uses power of two multiples of a base resistor value. However, as the ratios of resistor values increases, the ability to trim the resistors to accurate ratio tolerances becomes diminished. More accurate ratios can be obtained by using similar values, as is used in R–2R ladder. Hence R–2R provides more accurate digital-to-analog conversion.
R–2R resistor ladder network (digital to analog conversion)
Voltage Mode
A voltage mode R–2R resistor ladder network is shown in Figure 1. Bit an−1 (most significant bit, MSB) through bit a0 (least significant bit, LSB) are driven from digital logic gates. Ideally, the bit inputs are switched between V = 0 (logic 0) and V = Vref (logic 1). The R–2R network causes these digital bits to be weighted in their contribution to the output voltage Vout. Depending on which bits are set to 1 and which to 0, the output voltage (Vout) will have a corresponding stepped value between 0 and Vref minus the value of the minimal step, corresponding to bit 0. The actual value of Vref (and the voltage of logic 0) will depend on the type of technology used to generate the digital signals.
For a digital value VAL of an R–2R DAC with N bits and 0 V/Vref logic levels, the output voltage Vout is:
Vout = Vref × VAL / 2^N
For example, if N = 5 (hence 2^N = 32) and Vref = 3.3 V (typical CMOS logic 1 voltage), then Vout will vary between 0 volts (VAL = 0 = 00000₂) and the maximum (VAL = 31 = 11111₂):
Vout(max) = 3.3 × 31 / 32 = 3.196875 V
with steps of 3.3 / 32 = 0.103125 V (corresponding to VAL = 1 = 00001₂).
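The ideal transfer function above is straightforward to express in code; this short sketch reproduces the 5-bit numbers:

```python
# Ideal output of an N-bit voltage-mode R-2R DAC: Vout = Vref * VAL / 2**N
def r2r_vout(val: int, n_bits: int = 5, vref: float = 3.3) -> float:
    assert 0 <= val < 2 ** n_bits, "digital code out of range"
    return vref * val / 2 ** n_bits

print(r2r_vout(0))   # 0.0 V       (VAL = 00000)
print(r2r_vout(31))  # 3.196875 V  (VAL = 11111, one step below Vref)
print(r2r_vout(1))   # 0.103125 V  (one LSB step)
```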
The R–2R ladder is inexpensive and relatively easy to manufacture, since only two resistor values are required (or even one, if R is made by placing a pair of 2R in parallel, or if 2R is made by placing a pair of R in series). It is fast and has fixed output impedance R. The R–2R ladder operates as a string of current dividers, whose output accuracy is solely dependent on how well each resistor is matched to the others. Small inaccuracies in the MSB resistors can entirely overwhelm the contribution of the LSB resistors. This may result in non-monotonic behavior at major crossings, such as from 01111₂ to 10000₂.
Depending on the type of logic gates used and design of the logic circuits, there may be transitional voltage spikes at such major crossings even with perfect resistor values. These can be filtered with capacitance at the output node (the consequent reduction in bandwidth may be significant in some applications). Finally, the 2R resistance is in series with the digital-output impedance. High-output-impedance gates (e.g., LVDS) may be unsuitable in some cases. For all of the above reasons (and doubtless others), this type of DAC tends to be restricted to a relatively small number of bits; although integrated circuits may push the number of bits to 14 or even more, 8 bits or fewer is more typical.
The R–2R DAC described above directly outputs a voltage and so is called voltage mode (or sometimes normal mode).
Current Mode
Since the output impedance is independent of digital code, the analog output may equally-well be taken as a current into a virtual ground, a configuration called current mode (or sometimes inverted mode). Using current mode, the gain of the DAC may be adjusted with a series resistor at the reference voltage terminal. The current for all bits pass through an equivalent resistance of 2R to ground. The less significant the bit, the more resistors its signal must pass through. At each node each bit's current is divided by two.
Accuracy of R–2R resistor ladders
Resistors used with the more significant bits must be proportionally more accurate than those used with the less significant bits; for example, in the R–2R network discussed above, inaccuracies in the bit-4 (MSB) resistors must be insignificant compared to 1/32 (~3.1%) of R. Further, to avoid problems at the 10000₂-to-01111₂ transition, the sum of the inaccuracies in the lower bits must also be significantly less than that. The required accuracy doubles with each additional bit: for 8 bits, the accuracy required will be better than 1/256 (~0.4%).
However, variances for resistances when manufactured in a single component tend to be much lower than variances between components or between batches of manufacturing, and hence a resistor network can be purchased as a single component. And within integrated circuits, R–2R networks may be printed directly onto a single substrate using thin-film technology for higher accuracy. Even so, they must often be laser-trimmed to achieve the required precision. Such on-chip resistor ladders for digital-to-analog converters achieving 16-bit accuracy have been demonstrated.
Resistor ladder with unequal rungs
It is not necessary that each "rung" of the R–2R ladder use the same resistor values. It is only necessary that the "2R" value matches the sum of the "R" value plus the Thévenin-equivalent resistance of the lower-significance rungs. Figure 2 shows a linear 4-bit DAC with unequal resistors.
This allows a reasonably accurate DAC to be created from a heterogeneous collection of resistors by forming the DAC one bit at a time. At each stage, resistors for the "rung" and "leg" are chosen so that the rung value matches the leg value plus the equivalent resistance of the previous rungs. The rung and leg resistors can be formed by pairing other resistors in series or parallel in order to increase the number of available combinations. This process can be automated.
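The matching rule just described can be expressed as a short recursion. In the sketch below (a toy illustration, with arbitrary resistor values in ohms and the LSB leg taken as zero), each rung is set to its leg plus the Thévenin equivalent of the ladder below it:

```python
# Build a linear ladder DAC one bit at a time, per the rule above:
# rung = leg + Thevenin-equivalent resistance of the lower rungs.
# The leg values used in the examples are arbitrary assumptions.
def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

def build_ladder(terminator: float, legs: list[float]) -> list[float]:
    """Rung values for a linear ladder DAC built one bit at a time."""
    req = terminator                     # resistance looking into the LSB end
    rungs = []
    for leg in legs:
        rung = leg + req                 # the matching rule from the text
        rungs.append(rung)
        req = parallel(rung, leg + req)  # equals rung / 2 by construction
    return rungs

print(build_ladder(2.0, [0.0, 1.0, 1.0, 1.0]))  # [2.0, 2.0, 2.0, 2.0]: standard R-2R
print(build_ladder(2.0, [0.0, 1.1, 0.9, 1.0]))  # unequal rungs, still linear
```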
See also
Logarithmic resistor ladder
Digital-to-analog converter
Covox Speech Thing
Voltage ladder
References
External links
ECE209: DAC Lecture Notes - Ohio State University
EE247: D/A Converters - Berkeley University of California
Simplified DAC/ADC Lecture Notes - University of Michigan
Tutorial MT-014: String DACs and Fully-Decoded DACs - Analog Devices
Tutorial MT-015: Binary DACs - Analog Devices
Tutorial MT-016: Segmented DACs - Analog Devices
Tutorial MT-018: Intentionally Nonlinear DACs - Analog Devices
R2R Resistor Ladder Networks - BI Technologies
R/2R Ladder Networks Application Note - TT Electronics
Analog circuits | Resistor ladder | [
"Engineering"
] | 1,779 | [
"Analog circuits",
"Electronic engineering"
] |
4,031,236 | https://en.wikipedia.org/wiki/T%20puzzle | The T puzzle is a tiling puzzle consisting of four polygonal shapes which can be put together to form a capital T. The four pieces are usually one isosceles right triangle, two right trapezoids and an irregularly shaped pentagon.
Despite its apparent simplicity, it is a surprisingly hard puzzle whose crux is the positioning of the irregularly shaped piece. The earliest T puzzles date from around 1900 and were distributed as promotional giveaways. From the 1920s, wooden specimens were produced and made available commercially. Most T puzzles come with a leaflet with additional figures to be constructed. Which shapes can be formed depends on the relative proportions of the different pieces.
Origins and early history
The Latin Cross
The Latin cross puzzle consists of reassembling a five-piece dissection of the cross with three isosceles right triangles, one right trapezoid and an irregularly shaped six-sided piece (see figure). When the pieces of the cross puzzle have the right dimensions, they can also be put together as a rectangle. Of Chinese origin, the oldest examples date from the first half of the nineteenth century. One of the earliest published descriptions of the puzzle appeared in 1826 in the 'Sequel to the Endless Amusement'. Many other references to the cross puzzle can be found in amusement, puzzle and magicians' books throughout the 19th century. The T puzzle is based on the cross puzzle, but without the head and therefore has only four pieces. Another difference is that in the dissection of the T, one of the triangles is usually elongated into a right trapezoid. These changes make the puzzle more difficult and clever than the cross puzzle.
Advertising premiums
The T-puzzle became very popular in the beginning of the 20th century as a giveaway item, with hundreds of different companies using it to promote their business or product. The pieces were made from paper or cardboard and served as trade cards, with advertisement printed on them. They usually came in an envelope with instructions and an invitation to write to or call at the company or local dealer for its solution. Examples include:
Lash's Bitters – the original tonic laxative (1898). This is the earliest known version of the T-puzzle. The angles are cut at 35 degrees which makes the puzzle easier and less confusing.
White Rose Ceylon tea, Seeman Brothers, New York (1903). This puzzle is often cited as being the oldest version of the T puzzle, but Lash's Bitters puzzle predates it.
Armour's dry sausage, Armour and Company, Chicago. The text on the envelope reads "The Teaser T, Please accept this interesting little puzzle with our compliments. You will find it a real test to fit the four pieces enclosed in this envelope together to form this perfect letter 'T.' If you fail to solve it, ask your dealer for the solution. And to solve the problem of adding delicious meat dishes to your menu Ask your dealer for Armour's Dry Sausage".
Larabee's best flour (1919).
Waterall's T Puzzle Paints & Varnishes distributed by O.J. Miller & Son, Allentown, Pennsylvania. The envelope mentions that the puzzle is "highly entertaining, interesting, perplexing, aggravating and easy".
Insurance company of Glens Falls T Puzzle, New York.
Early published references
Published references to the T-puzzle appeared in the beginning of the 20th century. In the October 1904 edition of "Primary Education", a monthly journal for primary teachers, the T-puzzle is described as a puzzle for tired children, and they further comment: "Putting the letter on the board will help the wee ones. They say it takes grown-ups ten minutes to fit the pieces. How long will it take the children?" Another early reference is the April 1905 edition of a magazine called "Our Young People". A particularly nice presentation of the puzzle appeared in the October 1913 issue of John Martin's Book, here shown to the left.
In "Carpentry & mechanics for boys" by A. Hall (1918), figures of an example T and full-size patterns are given for the construction of a wooden version of the puzzle. The arms of the T are longer than usual. The same drawings appear in "Junior Red cross activities—teachers manual" published in the same year by the American Junior Red Cross. The puzzles presented in this book were proposed to be constructed by red cross juniors for use in the military: "to be used for distribution at canteen centers for the men passing through on the troop trains ... for use in camps, convalescent houses and hospitals" (p. 378). They note that the puzzle "has proven popular with British Tommies" (p. 394) and give detailed instructions on how to fabricate the pieces and an envelope container.
Commercial puzzle
Just the T
The T puzzle remained popular throughout the 20th century and versions of it were sold as a game puzzle as early as the 1920s. An example dated around that time is a French version of the puzzle called "L'ÉTÉ" produced by N.K. Atlas of Paris. Another example is the wooden version of the puzzle produced by Drueke & Sons, under the name "Pa's T puzzle", dated around the 1940s and here depicted to the right. Later, versions were also produced with plastic pieces, such as "Adams T puzzle" by S.S. Adams Co in the 1950s and "The famous T puzzle" by Marx Toys in the 1960s-1970s. From the 1980s dates the "Mr T's puzzle" featuring the actor Mr. T from the popular A-Team TV series; the back of the product packaging has the catchphrase "I pity the fool who can't solve Mr. T's puzzle".
Extensions
It was recognized early on that other shapes could be formed with the four pieces of the T puzzle, similar to the tangram. From 1930 dates an advertising premium for Mohawk Rugs & Carpets which, besides the regular T, features the challenge of making an arrowhead with the same pieces. In the same year a giveaway for Eberhard Faber's Van Dyke pencils featured 14 different shapes to form.
At present T puzzles come in standardized proportions which allow the construction of many additional shapes. The most important designs are (see also figure below):
Nob's T puzzle: Designed by Nob Yoshigahara, this version of the T puzzle sold over four million copies. The pieces can be laid out in the shape of a symmetrical convex pentagon with two right angles.
Asymmetric T: This T is asymmetric in that the left and right arm of the T have different lengths, with the shorter arm being about 83% of the longer one. Here all pieces have the same width and can be put in a perfect line segment. At present this puzzle is for instance sold by HIQU and comes with 100 figures to make and by Eureka Toys and Games in a puzzle called brain twister.
Gardner's T: This is the version featured in Martin Gardner's Scientific American column. The pieces also form a fatter T, as noted in a later column. This version was sold under the name "The missing T" as part of Aha! Brain teasers classics from Think Fun.
Solving the puzzle
With only four pieces, the T puzzle is deceptively simple. Studies have shown that few people are able to solve it in under five minutes, with most people needing more than half an hour. A common response of subjects is to conclude that the puzzle is impossible to solve.
The main difficulty in solving the puzzle is overcoming the functional fixedness of putting the pentagon piece either horizontally or vertically; and related to this, the tendency of trying to fill up the notch of the pentagon. In one study participants were found to spend over 60% of their attempts on such misguided placements of the pentagon piece. And even when the pentagon piece happened to be placed properly, it was mostly not recognized as part of the solution, as a match with the T is not easily seen. The puzzle is easily solved when the insight is reached that the pentagon is part of both the horizontal and vertical stem of the T and that the notch in the pentagon constitutes an inside corner.
Notes
References
Tiling puzzles
Geometric dissection
Cognitive tests | T puzzle | [
"Physics",
"Mathematics"
] | 1,699 | [
"Tessellation",
"Recreational mathematics",
"Tiling puzzles",
"Symmetry"
] |
4,031,908 | https://en.wikipedia.org/wiki/Thiele%27s%20interpolation%20formula | In mathematics, Thiele's interpolation formula is a formula that defines a rational function f(x) from a finite set of inputs x_i and their function values f(x_i). The problem of generating a function whose graph passes through a given set of function values is called interpolation. This interpolation formula is named after the Danish mathematician Thorvald N. Thiele. It is expressed as a continued fraction, where ρ represents the reciprocal difference:

f(x) = f(x_1) + (x − x_1) / (ρ(x_1, x_2) + (x − x_2) / (ρ_2(x_1, x_2, x_3) − f(x_1) + (x − x_3) / (ρ_3(x_1, x_2, x_3, x_4) − ρ(x_1, x_2) + ⋯)))
Note that the n-th level in Thiele's interpolation formula is

ρ_n(x_1, ..., x_(n+1)) − ρ_(n−2)(x_1, ..., x_(n−1)) + (x − x_(n+1)) / (⋯),
while the n-th reciprocal difference is defined to be

ρ_n(x_1, ..., x_(n+1)) = (x_1 − x_(n+1)) / (ρ_(n−1)(x_1, ..., x_n) − ρ_(n−1)(x_2, ..., x_(n+1))) + ρ_(n−2)(x_2, ..., x_n).
The two terms are different and cannot be cancelled.
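The article originally carried example code in ALGOL 68 (hence the category below); in the same spirit, a minimal Python sketch is given here. It builds the table of reciprocal differences and evaluates the continued fraction; the function name and test data are illustrative assumptions, at least two nodes are assumed, and zero denominators (which occur for degenerate data) are not handled.

def thiele_interpolate(xs, fs, x):
    # Table of reciprocal differences: rho[k][i] spans xs[i..i+k]
    n = len(xs)
    rho = [list(fs)] + [[0.0] * n for _ in range(n - 1)]
    for k in range(1, n):
        for i in range(n - k):
            lower = rho[k - 2][i + 1] if k >= 2 else 0.0
            rho[k][i] = (xs[i] - xs[i + k]) / (rho[k - 1][i] - rho[k - 1][i + 1]) + lower
    # Continued-fraction terms: b0 = f(x1), b1 = rho_1, b_k = rho_k - rho_(k-2)
    b = [rho[k][0] - (rho[k - 2][0] if k >= 2 else 0.0) for k in range(n)]
    val = b[n - 1]
    for k in range(n - 2, 0, -1):
        val = b[k] + (x - xs[k]) / val
    return b[0] + (x - xs[0]) / val

# f(x) = 1/x is rational, so three nodes already reproduce it exactly:
print(thiele_interpolate([1.0, 2.0, 3.0], [1.0, 0.5, 1.0 / 3.0], 4.0))  # 0.25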
References
Finite differences
Articles with example ALGOL 68 code
Interpolation | Thiele's interpolation formula | [
"Mathematics"
] | 142 | [
"Mathematical analysis",
"Finite differences",
"Mathematical analysis stubs"
] |
4,031,944 | https://en.wikipedia.org/wiki/Isotope-ratio%20mass%20spectrometry | Isotope-ratio mass spectrometry (IRMS) is a specialization of mass spectrometry, in which mass spectrometric methods are used to measure the relative abundance of isotopes in a given sample.
This technique has two different applications in the earth and environmental sciences. The analysis of 'stable isotopes' is normally concerned with measuring isotopic variations arising from mass-dependent isotopic fractionation in natural systems. On the other hand, radiogenic isotope analysis involves measuring the abundances of decay-products of natural radioactivity, and is used in most long-lived radiometric dating methods.
Introduction
The isotope-ratio mass spectrometer (IRMS) allows the precise measurement of mixtures of naturally occurring isotopes. Most instruments used for precise determination of isotope ratios are of the magnetic sector type. This type of analyzer is superior to the quadrupole type in this field of research for two reasons. First, it can be set up for multiple-collector analysis, and second, it gives high-quality 'peak shapes'. Both of these considerations are important for isotope-ratio analysis at very high precision and accuracy.
The sector-type instrument designed by Alfred Nier was such an advance in mass spectrometer design that this type of instrument is often called the 'Nier type'. In the most general terms the instrument operates by ionizing the sample of interest, accelerating it over a potential in the kilo-volt range, and separating the resulting stream of ions according to their mass-to-charge ratio (m/z). Beams with lighter ions bend at a smaller radius than beams with heavier ions. The current of each ion beam is then measured using a 'Faraday cup' or multiplier detector.
Many radiogenic isotope measurements are made by ionization of a solid source, whereas stable isotope measurements of light elements (e.g. H, C, O) are usually made in an instrument with a gas source. In a "multicollector" instrument, the ion collector typically has an array of Faraday cups, which allows the simultaneous detection of multiple isotopes.
Gas source mass spectrometry
Measurement of natural variations in the abundances of stable isotopes of the same element is normally referred to as stable isotope analysis. This field is of interest because the differences in mass between different isotopes leads to isotope fractionation, causing measurable effects on the isotopic composition of samples, characteristic of their biological or physical history.
As a specific example, the hydrogen isotope deuterium (heavy hydrogen) is almost double the mass of the common hydrogen isotope. Water molecules containing the common hydrogen isotope (and the common oxygen isotope, mass 16) have a mass of 18. Water incorporating a deuterium atom has a mass of 19, over 5% heavier. The energy to vaporize the heavy water molecule is higher than that to vaporize normal water, so isotope fractionation occurs during the process of evaporation. Thus a sample of sea water will exhibit a quite detectable isotopic-ratio difference when compared to Antarctic snowfall.
Samples must be introduced to the mass spectrometer as pure gases, achieved through combustion, gas chromatographic feeds, or chemical trapping. By comparing the detected isotopic ratios to a measured standard, an accurate determination of the isotopic make up of the sample is obtained. For example, carbon isotope ratios are measured relative to the international standard for carbon. The carbon standard is produced from a fossil belemnite found in the Peedee Formation, which is a limestone formed in the Cretaceous period in South Carolina, U.S.A. The fossil is referred to as VPDB (Vienna Pee Dee Belemnite) and has a 13C:12C ratio of 0.0112372. Oxygen isotope ratios are measured relative to the standard V-SMOW (Vienna Standard Mean Ocean Water).
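Such comparisons are conventionally reported in delta notation, the per-mil deviation of the sample ratio from the standard ratio. A minimal Python sketch follows; the function name is an assumption, and the standard value is the VPDB ratio quoted above.

R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB standard quoted above

def delta13C_permil(r_sample, r_standard=R_VPDB):
    # per-mil deviation of a sample 13C/12C ratio from the standard
    return (r_sample / r_standard - 1.0) * 1000.0

print(delta13C_permil(0.0109))  # about -30 permil, a 13C-depleted sample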
It is critical that the sample be processed before entering the mass spectrometer so that only a single chemical species enters at a given time. Generally, samples are combusted or pyrolyzed and the desired gas species (usually hydrogen (H2), nitrogen (N2), carbon dioxide (CO2), or sulfur dioxide (SO2)) is purified by means of traps, filters, catalysts and/or chromatography.
The two most common types of IRMS instruments are continuous flow and dual inlet. In dual inlet IRMS, purified gas obtained from a sample is alternated rapidly with a standard gas (of known isotopic composition) by means of a system of valves, so that a number of comparison measurements are made of both gases. In continuous flow IRMS, sample preparation occurs immediately before introduction to the IRMS, and the purified gas produced from the sample is measured just once. The standard gas may be measured before and after the sample or after a series of sample measurements. While continuous-flow IRMS instruments can achieve higher sample throughput and are more convenient to use than dual inlet instruments, the yielded data is of approximately 10-fold lower precision.
Static gas mass spectrometry
A static gas mass spectrometer is one in which a gaseous sample for analysis is fed into the source of the instrument and then left in the source without further supply or pumping throughout the analysis. This method can be used for 'stable isotope' analysis of light gases (as above), but it is particularly used in the isotopic analysis of noble gases (rare or inert gases) for radiometric dating or isotope geochemistry. Important examples are argon–argon dating and helium isotope analysis.
Thermal ionization mass spectrometry
Several of the isotope systems involved in radiometric dating depend on IRMS using thermal ionization of a solid sample loaded into the source of the mass spectrometer (hence thermal ionization mass spectrometry, TIMS). These methods include rubidium–strontium dating, uranium–lead dating, lead–lead dating, potassium-calcium dating, and samarium–neodymium dating.
When these isotope ratios are measured by TIMS, mass-dependent fractionation occurs as species are emitted by the hot filament. Fractionation occurs due to the excitation of the sample and therefore must be corrected for accurate measurement of the isotope ratio.
There are several advantages of the TIMS method. It has a simple design, is less expensive than other mass spectrometers, and produces stable ion emissions. It requires a stable power supply, and is suitable for species with a low ionization potential, such as strontium (Sr), and lead (Pb).
The disadvantages of this method stem from the maximum temperature achieved in thermal ionization. The hot filament reaches a temperature of less than 2500°C, leading to the inability to create atomic ions of species with a high ionization potential, such as osmium (Os) and tungsten (W). Though the TIMS method can create molecular ions instead in this case, species with high ionization potential can be analyzed more effectively with MC-ICP-MS.
Secondary-ion mass spectrometry
An alternative approach used to measure the relative abundance of radiogenic isotopes when working with a solid surface is secondary-ion mass spectrometry (SIMS). This type of ion-microprobe analysis normally works by focusing a primary (oxygen) ion beam on a sample in order to generate a series of secondary positive ions that can be focused and measured based on their mass/charge ratios.
SIMS is a common method used in U-Pb analysis, as the primary ion beam is used to bombard the surface of a single zircon grain in order to yield a secondary beam of Pb ions. The Pb ions are analyzed using a double focusing mass spectrometer that comprises both an electrostatic and magnetic analyzer. This assembly allows the secondary ions to be focused based on their kinetic energy and mass-charge ratio in order to be accurately collected using a series of Faraday cups.
A major issue that arises in SIMS analysis is the generation of isobaric interference between sputtered molecular ions and the ions of interest. This issue occurs with U–Pb dating as Pb ions have essentially the same mass as HfO2+. In order to overcome this problem, a sensitive high-resolution ion microprobe (SHRIMP) can be used. A SHRIMP is a double-focusing mass spectrometer that allows for a large spatial separation between different ion masses based on its relatively large size. For U-Pb analysis, the SHRIMP allows for the separation of Pb from other interfering molecular ions, such as HfO2+.
Multiple collector inductively coupled plasma mass spectrometry
An MC-ICP-MS instrument is a multiple collector mass spectrometer with a plasma source. MC-ICP-MS was developed to improve the precision achievable by ICP-MS during isotope-ratio measurements. Conventional ICP-MS analysis uses a quadrupole analyser, which only allows single-collector analysis. Due to the inherent instability of the plasma, this limits the precision of ICP-MS with a quadrupole analyzer to around 1%, which is insufficient for most radiogenic isotope systems.
Isotope-ratio analysis for radiometric dating has normally been determined by TIMS. However, some systems (e.g. Hf-W and Lu-Hf) are difficult or impossible to analyse by TIMS, due to the high ionization potential of the elements involved. Therefore, these methods can now be analysed using MC-ICP-MS.
The Ar-ICP produces an ion beam with a large inherent kinetic energy distribution, which makes the design of the mass spectrometer somewhat more complex than is the case for conventional TIMS instruments. First, unlike quadrupole ICP-MS systems, magnetic sector instruments have to operate with a higher acceleration potential (several thousand volts) in order to minimize the energy distribution of the ion beam. Modern instruments operate at 6–10 kV.
The radius of deflection of an ion within a magnetic field depends on the kinetic energy and the mass/charge ratio of the ion (strictly, the magnet is a momentum analyzer, not just a mass analyzer). Because of the large energy distribution, ions with similar mass/charge ratio can have very different kinetic energies and will thus experience different deflection for the same magnetic field. In practical terms, ions with the same mass/charge ratio focus at different points in space. However, in a mass spectrometer one wants ions with the same mass/charge ratio to focus at the same point, e.g. where the detector is located. In order to overcome these limitations, commercial MC-ICP-MS are double-focusing instruments. In a double-focusing mass spectrometer, ions are focused according to kinetic energy by the ESA (electrostatic analyzer) and according to kinetic energy plus mass/charge (momentum) in the magnetic field. Magnet and ESA are carefully chosen to match the energy-focusing properties of one another and are arranged so that the directions of energy focusing are opposite. To simplify: both components have an energy-focusing term; when arranged properly, the energy terms cancel out and ions with the same mass/charge ratio focus at the same point in space. It is important to note that double focusing does not reduce the kinetic energy distribution; different kinetic energies are not filtered or homogenized. Double focusing works for single- as well as multi-collector instruments. In single-collector instruments, ESA and magnet can be arranged in either forward geometry (first ESA, then magnet) or reversed geometry (magnet first, then ESA), as only point-to-point focusing is required. In multi-collector instruments, only forward geometry (ESA, then magnet) is possible due to the array of detectors and the requirement of a focal plane rather than a focal point.
Accelerator mass spectrometry
For isotopes occurring at extremely low levels, accelerator mass spectrometry (AMS) can be used. For example, the decay rate of the radioisotope 14C is widely used to date organic materials, but this approach was once limited to relatively large samples no more than a few thousand years old. AMS extended the range of 14C dating to about 60,000 years BP, and is about 106 times more sensitive than conventional IRMS.
AMS works by accelerating negative ions through a large (mega-volt) potential, followed by charge exchange and acceleration back to ground. During charge exchange, interfering species can be effectively removed. In addition, the high energy of the beam allows the use of energy-loss detectors, that can distinguish between species with the same mass/charge ratio. Together, these processes allow the analysis of extreme isotope ratios above 1012.
Moving wire IRMS
Moving wire IRMS is useful for analyzing carbon-13 ratios of compounds in a solution, such as after purification by liquid chromatography. The solution (or outflow from the chromatography) is dried onto a nickel or stainless steel wire. After the residue is deposited on the wire, it enters a furnace where the sample is converted to CO2 and water by combustion. The gas stream finally enters a capillary, is dried, ionized, and analyzed. This process allows a mixture of compounds to be purified and analyzed continuously, which can decrease the analysis time by a factor of four. Moving wire IRMS is quite sensitive, and samples containing as little as 1 nanomole of carbon can yield precise (within 1‰) results.
See also
Bainbridge mass spectrometer
Isoscape
Isotopomer
Table of nuclides
References
Bibliography
Geochemistry
Mass spectrometry | Isotope-ratio mass spectrometry | [
"Physics",
"Chemistry"
] | 2,840 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"nan",
"Matter"
] |
28,041,875 | https://en.wikipedia.org/wiki/Cellulose%20electrode | A cellulose electrode is a welding electrode that has a coating containing organic materials. About 30% of the coating weight is cellulose.
In some countries, paper pulp and wood powder are added to the coating in certain ratios to reduce the amount of pure cellulose.
The organic compounds in the coating decompose in the arc to form carbon monoxide, carbon dioxide and hydrogen, which increase the arc voltage and thus make the welding arc stronger and harder. Compared with other types of electrodes, at the same current values a penetration up to 70% deeper can be obtained with cellulose electrodes.
This type of electrode is generally produced with thin or medium coating thicknesses. When the coating is thin, a light amount of slag is formed on the welding bead and the spatter loss is high. On the other hand, the gap-filling and vertical-down welding capability, as well as the penetration of the weld obtained with this electrode, are good.
Since this electrode can be used in every position (particularly in vertical down), it has a wide range of applications in the ship building industry and in the welding of pipelines with a wall thickness of less than 12.5 mm. The cellulose that burns during welding forms a very good protective gaseous atmosphere.
Application
The main features of cellulose electrodes are as follows:
Deep penetrating welding in every position
Vertical down welding capability
Weld metal with good mechanical properties
A less amount of weld pool is developed
The titanium compounds in the coating provide arc stability as well as help clean the slag easily. Adding a certain amount of ferromanganese to the coating makes it possible to compensate for the manganese that is lost through oxidation during welding and to deoxidize the weld pool. Since these electrodes are generally manufactured using a sodium silicate binder, they can best be used with DC(+) polarity.
References
Electrodes
Cellulose | Cellulose electrode | [
"Chemistry"
] | 394 | [
"Electrochemistry",
"Electrodes"
] |
28,044,471 | https://en.wikipedia.org/wiki/Eu%27Vend | Eu'Vend (official subtitle: the international trade fair for the vending industry) is organised every two years by Cologne Trade Fair (Koelnmesse) at the fairground in Cologne, Germany. The conceptual sponsor of the fair is the German Vending Association (bdv); its international partner is the European Vending Association, Brussels. The show is open to trade visitors only. The last Eu'Vend took place from 19 to 21 September 2013.
The next show will take place from 24 to 26 September 2015. Since 2011, Eu'Vend has taken place in conjunction with coffeena (official subtitle "international coffee fair"). In 2013 the Specialty Coffee Association of Europe was the official education partner of coffeena for the first time.
Product segments
The trade fair focuses on
Vending machines (including hot drink vending machines, water coolers, snack vending machines, reverse vending machines, change machines)
Filling products (coffee, tea und cold drinks, food, other filling products)
vending machine cups
Payment systems (cash dispenser, bill validator, cashless payment systems).
Machine accessories/Components and spare parts in and for vending machines (for example water filter).
Services for operators (including money counting and sorting machines).
Kiosk Systems.
Operators.
Visitor target groups
Eu'Vend is aimed at operators, but also at people looking for employee or customer catering solutions, including:
Canteens, caterer
Transport authorities, airlines
Bakeries
Schools, universities
Hospitals, homes
Hotels, youth hostels
Tobacco product wholesale
Casinos, gambling halls
Doctors, tax consultants, lawyers
History
Until 2001, interested companies exhibited their products at the Anuga Food Fair. Then the vending industry decided to launch its own platform. At the first Eu'Vend in 2003, 178 companies from 13 countries exhibited. Within these three days, 3,968 trade visitors from 49 countries came to Cologne, Germany. In the following years Eu'Vend grew further. The participants in Eu'Vend 2013 comprised 217 suppliers from 23 countries, consisting of 126 exhibitors from Germany and 91 exhibitors (42%) from abroad. With approximately 5,000 visitors from 60 countries (34 percent from abroad), Eu'Vend & coffeena is deemed the most international vending and coffee fair.
Innovation Award "Vending Star"
Koelnmesse has been granting the innovation award 'Vending Star' since 2007, in conjunction with the German Vending Association (BDV), at the Eu'Vend & coffeena Night. The competition aims to optimize the application possibilities and services in the vending field and thus also acts as a stimulus for the industry.
The members of the jury are: Helmut J. Düvel (CA Vending Krugmann GmbH & Co KG), Dr. Aris Kaschefi (BDV), Hans-Jürgen Krone (Lebensmittel Praxis Verlag), Ralf Lang (JAM Verlag), Asim Loncaric (Forum Zeitschriften), Michal Piotrowiak (Mastercup Vending, Poland), Matthias Schlüter (Koelnmesse), Eric Schwaab (Vending Report), Wolfgang Schwarzenberger (Dallmayr Automaten-Service serviPlus), Gerald Steger (café+co International, Austria), Gerold Stüwer (Stüwer GmbH) and Jan Marck Vrijlandt (Selecta Group).
References
Vending machines
Trade fairs in Germany
Economy of Cologne
Recurring events established in 2003
2003 establishments in Germany
Biennial events | Eu'Vend | [
"Engineering"
] | 753 | [
"Vending machines",
"Automation"
] |
21,794,040 | https://en.wikipedia.org/wiki/GNet | GNet is a simple network library. It is written in C, object-oriented, and built upon GLib. It is intended to be small, fast, easy-to-use,
and easy to port. The interface is similar to the interface for Java's network library.
GNet has been ported to Linux, BSD, macOS, Solaris, HP-UX, and Windows. It may work on other flavors of Unix too.
According to the GNet reference below, GNet will soon (with the release of GLib 2.22.0) be deprecated and replaced by the newly added platform-independent network and socket abstraction layer in GLib/GIO.
GNet Features
TCP "client" and "server" sockets.
UDP and IP Multicast sockets.
High-level TCP connection and server objects.
GConnHttp - HTTP connection object.
Asynchronous socket IO.
Internet address abstraction.
Asynchronous DNS lookup.
IPv4 and IPv6 support.
Byte packing and unpacking.
URI parsing.
SHA-1 and MD5 hashes.
Base64 encoding and decoding.
SOCKS support.
Applications that use GNet
eDonkey2000 - eDonkey2000 GTK GUI (DFS) frontend
Gnome Chinese Checkers - board game
Gnome Jabber - instant messaging and chat
gtermix - telnet client for BBSes
Jungle Monkey - distributed file sharing program
Mail Notify - mail notification applet
MSI - multi-simulation interface
Pan - Gnome Newsreader
PreViking - telephony middleware
Sussen - network scanner (GNetLibrary)
Workrave - rest break reminder
References
External links
GNet Official site
GNetWorld
GIO Official site
Free computer libraries | GNet | [
"Technology"
] | 374 | [
"Computing stubs"
] |
21,795,029 | https://en.wikipedia.org/wiki/Retrobright | Retrobright (stylized as retr0bright or Retrobrite) is a hydrogen peroxide-based process for removing yellowing from ABS plastics.
Yellowing in ABS plastic occurs when it is exposed to UV light or excessive heat, which causes photo-oxidation of polymers that breaks polymer chains and causes the plastic to yellow and become brittle.
History
One method of reversing the yellowed discoloration was first discovered in 2007 in a German retrocomputing forum, before spreading to an English blog where it was further detailed. The process has been continually refined since.
Composition
Retrobright consists of hydrogen peroxide, a small amount of the "active oxygen" laundry booster TAED as a catalyst, and a source of UV.
The optimum mixture and conditions for reversing yellowing of plastics:
A hydrogen peroxide solution. Hydrogen peroxide-based hair bleaching creams available at beauty supply stores can also be used, and are viscous, allowing them to be applied with less waste (especially to large pieces such as computer panels or monitors). The cream must be carefully applied and wrapped evenly with plastic wrap to avoid streaks in the final product.
Approximately 1 ml per 3 liters (1 part in 3000 by volume, alternatively teaspoonful per US gallon) of tetraacetylethylenediamine (TAED)-based laundry booster (concentrations of TAED vary).
A source of ultraviolet light, from sunlight or a UV lamp.
Xanthan gum or arrowroot can be added to the solution, creating an easier-to-apply gel.
Alternatives
Sodium percarbonate may also be used by dissolving it in water and following the usual steps for hydrogen peroxide, as it is sodium carbonate and hydrogen peroxide in a crystalline form.
Ozone gas can also be used for retrobrighting, as long as an ozone generator, a suitable container of sufficient size and a source of UV are available, but can take longer than other methods.
A simpler but slower process involving merely exposure of the yellowed plastic to bright sunlight has been described, variously called 'Sunbrighting' or 'Lightbrighting'. This has both empirical evidence of effectiveness and the theoretical backing of some published scientific literature, which emphasises exposure to strong visible light while minimising ultraviolet exposure.
Effectiveness
The long-term effectiveness of these techniques is unclear. Some have found that the yellowing reappears, and there are concerns that the process weakens the plastic and only bleaches, rather than repairs, the already damaged material.
Similar processes
The usage has also expanded to other retro restoration applications, such as classic and collectible sneaker restoration.
References
Cleaning products
Plastics
Hacker culture | Retrobright | [
"Physics",
"Chemistry"
] | 539 | [
"Products of chemical industry",
"Unsolved problems in physics",
"Cleaning products",
"Amorphous solids",
"Plastics"
] |
21,797,771 | https://en.wikipedia.org/wiki/Plumbide | Plumbide is an anion of lead atoms. There are three plumbide anions, written as Pb−, Pb2− and Pb4− with 3 oxidation states, -1, -2 and -4, respectively.
A plumbide can refer to one of two things: an intermetallic compound that contains lead, or a Zintl phase compound with lead as the anion.
Zintl phase
Plumbides can be formed when lead forms a Zintl phase compound with a more metallic element. One such salt is formed when cryptand reacts with sodium and lead in ethylenediamine (en), producing [Pb5]2−, which is red in solution.
Lead can also create anions with tin, in a series of anions with the formula [Sn9−xPbx]4−.
Lead can also form the [Pb9]4− anion, which is emerald green in solution.
Examples
An example of a plumbide is CeRhPb. The lead atom has a coordination number of 12 in the crystal structure of this compound. It is bound to four rhodiums, six ceriums, and two other lead atoms in the crystal structure of the chemical.
Several other plumbides are the M2Pd2Pb plumbides, where M is a rare-earth element, and the intermetallic additionally contains a palladium. These plumbides tend to exhibit antiferromagnetism, and all of them are conductors.
A third plumbide is Ti6Pb4.8. Like the above plumbides, it is an intermetallic, but it only contains titanium as the other metal, and not any rare earths.
Plumbides can also be Zintl phase compounds, such as [K(18-crown-6)]2K2Pb9·(en)1.5. This is not a simple Zintl compound, but rather contains the organic molecules 18-crown-6 and ethylenediamine (en) in order to stabilize the crystal structure.
References
Lead compounds
Intermetallics
Anions
Cluster chemistry | Plumbide | [
"Physics",
"Chemistry",
"Materials_science"
] | 459 | [
"Matter",
"Inorganic compounds",
"Anions",
"Cluster chemistry",
"Metallurgy",
"Intermetallics",
"Condensed matter physics",
"Alloys",
"Organometallic chemistry",
"Ions"
] |
21,799,732 | https://en.wikipedia.org/wiki/Widom%20insertion%20method | The Widom insertion method is a statistical thermodynamic approach to the calculation of material and mixture properties. It is named for Benjamin Widom, who derived it in 1963. In general, there are two theoretical approaches to determining the statistical mechanical properties of materials. The first is the direct calculation of the overall partition function of the system, which directly yields the system free energy. The second approach, known as the Widom insertion method, instead derives from calculations centering on one molecule. The Widom insertion method directly yields the chemical potential of one component rather than the system free energy. This approach is most widely applied in molecular computer simulations but has also been applied in the development of analytical statistical mechanical models. The Widom insertion method can be understood as an application of the Jarzynski equality, since it measures the excess free energy difference via the average work needed to change the system from a state with N molecules to a state with N+1 molecules. It therefore measures the excess chemical potential, since μ_ex = F_ex(N+1) − F_ex(N), where F_ex = F − F_id is the excess (non-ideal) part of the free energy.
Overview
As originally formulated by Benjamin Widom in 1963, the approach can be summarized by the equation:

B_i = ρ_i / a_i = ⟨exp(−ψ_i / (k_B T))⟩

where B_i is called the insertion parameter, ρ_i is the number density of species i, a_i is the activity of species i, k_B is the Boltzmann constant, T is temperature, and ψ_i is the interaction energy of an inserted particle with all other particles in the system. The average is over all possible insertions. This can be understood conceptually as fixing the location of all molecules in the system and then inserting a particle of species i at all locations through the system, averaging over a Boltzmann factor in its interaction energy over all of those locations.
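As a concrete illustration of this average over trial insertions, a minimal Monte Carlo sketch in Python is given below. It estimates B for one frozen configuration of a toy Lennard-Jones fluid in a cubic periodic box; the helper names, parameter values and configuration are illustrative assumptions, not a reference implementation (a production code would average over many configurations and apply cutoff and tail corrections).

import math, random

def widom_insertion(coords, box, n_trials, beta, pair_energy):
    # Estimate B = <exp(-beta * psi)> by random test insertions into
    # one frozen configuration (sketch only: no cutoff, no tail correction).
    acc = 0.0
    for _ in range(n_trials):
        test = [random.uniform(0.0, box) for _ in range(3)]
        psi = 0.0
        for p in coords:
            # minimum-image distance in a cubic periodic box
            d2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                     for a, b in zip(test, p))
            psi += pair_energy(math.sqrt(d2))
        acc += math.exp(-beta * psi)
    return acc / n_trials

def lj(r, eps=1.0, sigma=1.0):
    # Lennard-Jones pair potential; the guard avoids division by zero
    sr6 = (sigma / max(r, 1e-12)) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

random.seed(0)
solvent = [[random.uniform(0.0, 10.0) for _ in range(3)] for _ in range(20)]
B = widom_insertion(solvent, 10.0, 5000, beta=1.0, pair_energy=lj)
print(B, -math.log(B))  # insertion parameter and beta times mu_excess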
Note that in other ensembles like for example in the semi-grand canonical ensemble the Widom insertion method works with modified formulas.
Relation to other thermodynamic quantities
Chemical potential
From the above equation and from the definition of activity, the insertion parameter may be related to the chemical potential by

μ_i = −k_B T ln( B_i / (ρ_i Λ_i^3) )

where Λ_i is the thermal de Broglie wavelength of species i.
Equation of state
The pressure-temperature-density relation, or equation of state of a mixture, is related to the insertion parameter via

Z = P / (ρ k_B T) = 1 − ln B̄ + (1/ρ) ∫_0^ρ ln B̄ dρ'

where Z is the compressibility factor, ρ is the overall number density of the mixture, and ln B̄ is a mole-fraction weighted average over all mixture components:

ln B̄ = Σ_i x_i ln B_i
Hard core model
In the case of a 'hard core' repulsive model in which each molecule or atom consists of a hard core with an infinite repulsive potential, insertions in which two molecules occupy the same space will not contribute to the average. In this case the insertion parameter becomes

B_i = P_i ⟨exp(−ψ_i / (k_B T))⟩_(no overlap)

where P_i is the probability that the randomly inserted molecule of species i will experience an attractive or zero net interaction; in other words, it is the probability that the inserted molecule does not 'overlap' with any other molecules, and the remaining average runs only over such non-overlapping insertions.
Mean field approximation
The above is simplified further via the application of the mean field approximation, which essentially ignores fluctuations and treats all quantities by their average value. Within this framework the insertion factor is given as

B_i = P_i exp(−⟨ψ_i⟩ / (k_B T))

where ⟨ψ_i⟩ is the mean interaction energy of a non-overlapping inserted molecule.
Citations
Statistical mechanics | Widom insertion method | [
"Physics"
] | 587 | [
"Statistical mechanics"
] |
44,891,558 | https://en.wikipedia.org/wiki/Phillips%20catalyst | The Phillips catalyst, or the Phillips supported chromium catalyst, is the catalyst used to produce approximately half of the world's polyethylene. A heterogeneous catalyst, it consists of a chromium oxide supported on silica gel. Polyethylene, the most-produced synthetic polymer, is produced industrially by the polymerization of ethylene:
n C2H4 → (C2H4)n
Although exergonic (i.e., thermodynamically favorable), the reaction requires catalysts. Three main catalysts are employed commercially: the Phillips catalyst, Ziegler–Natta catalysts (based on titanium trichloride), and, for specialty polymers, metallocene-based catalysts.
Preparation and mechanism of action
The Phillips catalyst is prepared by impregnating high surface area silica gel with chromium trioxide or related chromium compounds. The solid precatalyst is then calcined in air to give the active catalyst. Only a fraction of the chromium is catalytically active, a fact that interferes with elucidation of the catalytic mechanism. The active catalyst is often depicted as a chromate ester bound to the silica surface. The mechanism for the polymerization process is the subject of much research, the central question being the structure of the active species, which is assumed to be an organochromium compound. Robert L. Banks and J. Paul Hogan, both at Phillips Petroleum, filed the first patents on the Phillips catalyst in 1953. Four years later, the process was commercialized.
References
Industrial processes
Polymer chemistry
Catalysts
Coordination complexes
Chromium(VI) compounds
Chromium–oxygen compounds | Phillips catalyst | [
"Chemistry",
"Materials_science",
"Engineering"
] | 356 | [
"Catalysis",
"Catalysts",
"Coordination complexes",
"Coordination chemistry",
"Materials science",
"Polymer chemistry",
"Chemical kinetics"
] |
36,037,779 | https://en.wikipedia.org/wiki/Drag%20curve | The drag curve or drag polar is the relationship between the drag on an aircraft and other variables, such as lift, the coefficient of lift, angle-of-attack or speed. It may be described by an equation or displayed as a graph (sometimes called a "polar plot"). Drag may be expressed as actual drag or the coefficient of drag.
Drag curves are closely related to other curves which do not show drag, such as the power required/speed curve, or the sink rate/speed curve.
The drag curve
The significant aerodynamic properties of aircraft wings are summarised by two dimensionless quantities, the lift and drag coefficients C_L and C_D. Like other such aerodynamic quantities, they are functions only of the angle of attack α, the Reynolds number Re and the Mach number M. C_L and C_D can be plotted against α, or can be plotted against each other.
The lift and the drag forces, L and D, are scaled by the same factor to get C_L and C_D, so L/D = C_L/C_D. L and D are at right angles, with D parallel to the free stream velocity (the relative velocity of the surrounding distant air), so the resultant force R lies at the same angle to D as the line from the origin of the graph to the corresponding (C_D, C_L) point does to the C_D axis.
If an aerodynamic surface is held at a fixed angle of attack in a wind tunnel, and the magnitude and direction of the resulting force are measured, they can be plotted using polar coordinates. When this measurement is repeated at different angles of attack the drag curve is obtained. Lift and drag data was gathered in this way in the 1880s by Otto Lilienthal and around 1910 by Gustav Eiffel, though not presented in terms of the more recent coefficients. Eiffel was the first to use the name "drag polar", however drag curves are rarely plotted today using polar coordinates.
Depending on the aircraft type, it may be necessary to plot drag curves at different Reynolds and Mach numbers. The design of a fighter will require drag curves for different Mach numbers, whereas gliders, which spend their time either flying slowly in thermals or rapidly between them, may require curves at different Reynolds numbers but are unaffected by compressibility effects. During the evolution of the design the drag curve will be refined. A particular aircraft may have different curves even at the same and values, depending for example on whether undercarriage and flaps are deployed.
The accompanying diagram shows C_D against C_L for a typical light aircraft. The minimum C_D point is at the left-most point on the plot. One component of drag is induced drag (an inevitable side-effect of producing lift, which can be reduced by increasing the indicated airspeed). This is proportional to C_L^2. The other drag mechanisms, parasitic and wave drag, have both constant components, totalling C_D0, and lift-dependent contributions that increase in proportion to C_L^2. In total, then

C_D = C_D0 + K (C_L − C_L0)^2

The effect of C_L0 is to shift the curve up the graph; physically this is caused by some vertical asymmetry, such as a cambered wing or a finite angle of incidence, which ensures the minimum drag attitude produces lift and increases the maximum lift-to-drag ratio.
Power required curves
One example of the way the curve is used in the design process is the calculation of the power required (P_R) curve, which plots the power needed for steady, level flight over the operating speed range. The forces involved are obtained from the coefficients by multiplication with (1/2) ρ V^2 S, where ρ is the density of the atmosphere at the flight altitude, S is the wing area and V is the speed. In level flight, lift equals weight and thrust equals drag, so

W = (1/2) ρ V^2 S C_L

and

P_R = (1/2) ρ V^3 S C_D / η.
The extra factor of V/η, with η the propeller efficiency, in the second equation enters because P_R = (required thrust) × V / η. Power rather than thrust is appropriate for a propeller driven aircraft, since it is roughly independent of speed; jet engines produce constant thrust. Since the weight is constant, the first of these equations determines how C_L falls with increasing speed. Putting these C_L values into the second equation with C_D from the drag curve produces the power curve. The low speed region shows a fall in lift induced drag, through a minimum followed by an increase in profile drag at higher speeds. The minimum power required, at a speed of 195 km/h (121 mph), is about 86 kW (115 hp); 135 kW (181 hp) is required for a maximum speed of 300 km/h (186 mph). Flight at the power minimum will provide maximum endurance; the speed for greatest range is where the tangent to the power curve passes through the origin, about 240 km/h (150 mph).
If an analytical expression for the curve is available, useful relationships can be developed by differentiation. For example the form above, simplified slightly by putting C_L0 = 0, has a maximum of C_L^(3/2)/C_D at C_L^2 = 3 C_D0 / K. For a propeller aircraft this is the maximum endurance condition and gives a speed of 185 km/h (115 mph). The corresponding maximum range condition is the maximum of C_L/C_D, at C_L^2 = C_D0 / K, and so the optimum speed is 244 km/h (152 mph). The effects of the approximation C_L0 = 0 are less than 5%; of course, with a finite C_L0 = 0.1, the analytic and graphical methods give the same results.
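A short numerical sketch of the same calculation is given below in Python; the polar coefficients, weight, wing area and propeller efficiency are illustrative assumptions, not the data behind the figures above.

import math

C_D0, K = 0.03, 0.06                         # assumed parabolic polar, C_L0 = 0
W, S, rho, eta = 12000.0, 16.0, 1.225, 0.8   # weight N, area m^2, kg/m^3, prop eff.

def speed_at(C_L):
    # level flight: W = 0.5 * rho * V^2 * S * C_L
    return math.sqrt(2.0 * W / (rho * S * C_L))

def power_required(V):
    C_L = 2.0 * W / (rho * V ** 2 * S)
    C_D = C_D0 + K * C_L ** 2
    return 0.5 * rho * V ** 3 * S * C_D / eta   # watts

CL_endurance = math.sqrt(3.0 * C_D0 / K)     # minimum power (max endurance)
CL_range = math.sqrt(C_D0 / K)               # maximum C_L/C_D (best range)
print(speed_at(CL_endurance), speed_at(CL_range), power_required(speed_at(CL_endurance)))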
The low speed region of flight is known as the "back of the power curve" or "behind the power curve" (sometimes "back of the drag curve") where more thrust is required to sustain flight at lower speeds. It is an inefficient region of flight because a decrease in speed requires increased thrust and a resultant increase in fuel consumption. It is regarded as a "speed unstable" region of flight, because unlike normal circumstances, a decrease in speed due to an increased angle of attack from a nose-up pitch control input will not correct itself when the control input ceases. Instead, speed will remain low and drag will progressively accumulate as speed continues to decay, causing the descent rate to increase or climb rate to decrease, and this condition will persist until thrust is increased, angle of attack is reduced (which will shed altitude), or drag is otherwise reduced (such as by retracting the landing gear). Sustained flight behind the power curve requires alert piloting because inadequate thrust will cause a steady decrease in speed and a corresponding steady increase in descent rate, which may go unnoticed and can be difficult to correct at low altitude. A not-infrequent result is the aircraft "mushing" and crashing short of the intended landing site because the pilot did not decrease angle of attack or increase thrust in time, or because adequate thrust is not available; the latter is a particular hazard during a forced landing after an engine failure.
Failure to control airspeed and descent rate while flying behind the power curve has been implicated in a number of prominent aviation accidents, such as Asiana Airlines Flight 214.
Rate of climb
For an aircraft to climb at an angle θ and at speed V its engine must be developing more power P, in excess of the power P_R required to balance the drag experienced at that speed in level flight and shown on the power required plot. In level flight P/V = D but in the climb there is the additional weight component to include, that is

P/V = D + W sin θ = P_R/V + W sin θ.

Hence the climb rate V sin θ = (P − P_R)/W. Supposing the 135 kW engine required for a maximum speed of 300 km/h is fitted, the maximum excess power is 135 − 87 = 48 kW at the minimum of P_R, and the rate of climb is 2.4 m/s.
Fuel efficiency
For propeller aircraft (including turboprops), maximum range and therefore maximum fuel efficiency is achieved by flying at the speed for maximum lift-to-drag ratio. This is the speed which covers the greatest distance for a given amount of fuel. Maximum endurance (time in the air) is achieved at a lower speed, when drag is minimised.
For jet aircraft, maximum endurance occurs when the lift-to-drag ratio is maximised. Maximum range occurs at a higher speed. This is because jet engines are thrust-producing, not power-producing. Turboprop aircraft do produce some thrust through the turbine exhaust gases, however most of their output is as power through the propeller.
"Long-range cruise" speed (LRC) is typically chosen to give 1% less fuel efficiency than maximum range speed, because this results in a 3-5% increase in speed. However, fuel is not the only marginal cost in airline operations, so the speed for most economical operation (ECON) is chosen based on the cost index (CI), which is the ratio of time cost to fuel cost.
Gliders
Without power, a gliding aircraft has only gravity to propel it. At a glide angle of θ, the weight has two components, W cos θ at right angles to the flight line and W sin θ parallel to it. These are balanced by the lift and drag force components respectively, so

W cos θ = (1/2) ρ V^2 S C_L

and

W sin θ = (1/2) ρ V^2 S C_D.
Dividing one equation by the other shows that the glide angle is given by tan θ = C_D/C_L. The performance characteristics of most interest in unpowered flight are the speed across the ground, V_g say, and the sink speed V_s; these are displayed by plotting V sin θ = V_s against V cos θ = V_g. Such plots are generally termed polars, and to produce them the glide angle as a function of V is required.
One way of finding solutions to the two force equations is to square them both then add together; this shows the possible C_L, C_D values lie on a circle of radius 2 W / (ρ S V^2). When this is plotted on the drag polar, the intersection of the two curves locates the solution and its θ value can be read off. Alternatively, bearing in mind that glides are usually shallow, the approximation cos θ ≃ 1, good for θ less than 10°, can be used in the lift equation; the value of C_L for a chosen V is calculated, C_D found from the drag polar, and θ then calculated.
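Using the shallow-glide approximation, the polar points (V cos θ, V sin θ) can be generated directly from the drag curve, as in the short Python sketch below; the polar coefficients, weight and wing area are illustrative assumptions.

import math

C_D0, K = 0.01, 0.02               # assumed parabolic drag polar
W, S, rho = 3500.0, 10.0, 1.225    # weight N, wing area m^2, air density

def polar_point(V):
    # (ground speed, sink speed) at airspeed V, using cos(theta) ~ 1
    C_L = 2.0 * W / (rho * V ** 2 * S)
    C_D = C_D0 + K * C_L ** 2
    theta = math.atan2(C_D, C_L)   # tan(theta) = C_D / C_L
    return V * math.cos(theta), V * math.sin(theta)

points = [polar_point(float(v)) for v in range(20, 61, 5)]   # airspeeds in m/s
best = min(points, key=lambda p: p[1] / p[0])                # flattest line from origin
print(best)   # (V_g, V_s) at the best glide angle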
The example polar here shows the gliding performance of the aircraft analysed above, assuming its drag polar is not much altered by the stationary propeller. A straight line from the origin to some point on the curve has a gradient equal to the glide angle at that speed, so the corresponding tangent shows the best glide angle ≃ 3.3°. This is not the lowest rate of sink but provides the greatest range, requiring a speed of 240 km/h (149 mph); the minimum sink rate of about 3.5 m/s is at 180 km/h (112 mph), speeds seen in the previous, powered plots.
Sink rate
A graph showing the sink rate of an aircraft (typically a glider) against its airspeed is known as a polar curve. Polar curves are used to compute the glider's minimum sink speed, best lift over drag (L/D), and speed to fly.
The polar curve of a glider is derived from theoretical calculations, or by measuring the rate of sink at various airspeeds. These data points are then connected by a line to form the curve. Each type of glider has a unique polar curve, and individual gliders vary somewhat depending on the smoothness of the wing, control surface drag, or the presence of bugs, dirt, and rain on the wing. Different glider configurations will have different polar curves, for example, solo versus dual flight, with and without water ballast, different flap settings, or with and without wing-tip extensions.
Knowing the best speed to fly is important in exploiting the performance of a glider. Two of the key measures of a glider’s performance are its minimum sink rate and its best glide ratio, also known as the best "glide angle". These occur at different speeds. Knowing these speeds is important for efficient cross-country flying. In still air the polar curve shows that flying at the minimum sink speed enables the pilot to stay airborne for as long as possible and to climb as quickly as possible, but at this speed the glider will not travel as far as if it flew at the speed for the best glide.
Effect of wind, lift/sink and weight on best glide speed
The best speed to fly in a head wind is determined from the graph by shifting the origin to the right along the horizontal axis by the speed of the headwind, and drawing a new tangent line. This new airspeed will be faster as the headwind increases, but will result in the greatest distance covered. A general rule of thumb is to add half the headwind component to the best L/D for the maximum distance. For a tailwind, the origin is shifted to the left by the speed of the tailwind, and drawing a new tangent line. The tailwind speed to fly will lie between minimum sink and best L/D.
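Numerically, shifting the origin is equivalent to maximising the ground distance covered per unit of height lost, (V_g − headwind)/V_s. The sketch below reuses the illustrative polar_point helper from the previous example; the search grid and function name are assumptions.

def speed_to_fly(headwind, airspeeds):
    # pick the airspeed whose polar point maximises glide range over ground
    def range_per_height(V):
        vg, vs = polar_point(V)
        return (vg - headwind) / vs
    return max(airspeeds, key=range_per_height)

grid = [20.0 + 0.5 * k for k in range(81)]   # 20-60 m/s search grid
print(speed_to_fly(0.0, grid), speed_to_fly(10.0, grid))
# the optimum airspeed rises as the headwind component grows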
In subsiding air, the polar curve is shifted lower according the airmass sink rate, and a new tangent line drawn. This will show the need to fly faster in subsiding air, which gives the subsiding air less time to lower the glider's altitude. Correspondingly, the polar curve is displaced upwards according to the lift rate, and a new tangent line drawn.
Increased weight does not affect the maximum range of a gliding aircraft. Glide angle is only determined by the lift/drag ratio. Increased weight will require an increased airspeed to maintain the optimum glide angle, so a heavier gliding aircraft will have reduced endurance, because it is descending along the optimum glide path at a faster rate.
For racing, glider pilots will often use water ballast to increase the weight of their glider. This increases the optimum speed, at a cost of low speed performance and a reduced climb rate in thermals. Ballast can also be used to adjust the centre of gravity of the glider, which can improve performance.
See also
Drag coefficient
Lift coefficient
Angle of attack
Lift (force)
Lifting-line theory
External links
Glider Performance Airspeeds – an animated explanation of the basic polar curve, with modifications for sinking or rising air and for head- or tailwinds.
References
Aerodynamics
Aircraft performance
Airspeed
Drag (physics)
de:Polardiagramm (Strömungslehre) | Drag curve | [
"Physics",
"Chemistry",
"Engineering"
] | 2,812 | [
"Drag (physics)",
"Physical quantities",
"Aerodynamics",
"Airspeed",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
36,043,280 | https://en.wikipedia.org/wiki/Cyanogen%20fluoride | Cyanogen fluoride (molecular formula: FCN; IUPAC name: carbononitridic fluoride) is an inorganic linear compound which consists of a fluorine in a single bond with carbon, and a nitrogen in a triple bond with carbon. It is a toxic and explosive gas at room temperature. It is used in organic synthesis and can be produced by pyrolysis of cyanuric fluoride or by fluorination of cyanogen.
Synthesis
Cyanogen fluoride (FCN) is synthesized by the pyrolysis of cyanuric fluoride (C3N3F3) at 1300 °C and 50 mm pressure; this process gives a maximum of 50% yield. Other products observed are cyanogen and CF3CN. For the pyrolysis, an induction-heated carbon tube with an internal diameter of 0.75 inches is packed with 4 to 8 mesh carbon granules and is surrounded by graphite powder insulation and a water-jacketed shell. The cyanuric fluoride is pyrolyzed at a rate of 50 g/hr, and the pyrolysate appears as a fluffy white solid collected in liquid nitrogen traps. These liquid nitrogen traps are filled to atmospheric pressure with nitrogen or helium. This process yields crude cyanogen fluoride, which is then distilled in a glass column at atmospheric pressure to give pure cyanogen fluoride.
Another method of synthesizing cyanogen fluoride is by the fluorination of cyanogen. Nitrogen trifluoride can fluoridate cyanogen to cyanogen fluoride when both the reactants are injected downstream into the nitrogen arc plasma. With carbonyl fluoride and carbon tetrafluoride, FCN was obtained by passing these fluorides through the arc flame and injecting the cyanogen downstream into the arc plasma.
Properties
Cyanogen fluoride (FCN) is a toxic, colorless gas. The linear molecule has a molecular mass of 45.015 g mol−1. Cyanogen fluoride has a boiling point of −46.2 °C and a melting point of −82 °C. The stretching constant for the CN bond was measured as 17.5 mdyn/Å and for the CF bond as 8.07 mdyn/Å, but these values can vary depending on the interaction constant. At room temperature, the condensed phase converts rapidly to polymeric materials. Liquid FCN explodes at −41 °C when initiated by a squib.
Spectroscopy
The fluorine NMR spectrum of FCN shows a triplet centered at 80 ppm (3180 cps), with a 32–34 cps splitting between adjacent peaks caused by the 14N nucleus. Near the freezing point this splitting is absent and the signal collapses to a singlet peak.
The IR spectrum of FCN shows two doublet bands at around 2290 cm−1 (for the C ≡ N)
and 1078 cm−1 (for the C-F). The C-F doublet band has a 24 cm−1 separation between the two branches. A triplet band is observed at around 451 cm−1.
Chemical reactions
Cyanogen fluoride reacts with benzene in the presence of aluminum chloride to form benzonitrile in 20% conversion. It also reacts with olefins to yield β-fluoronitriles. FCN also adds to olefins which have internal double bonds in the presence of a strong acid catalyst.
Storage
FCN can be stored in a stainless steel cylinders for over a year when the temperature is -78.5 °C (solid carbon dioxide temperature).
Safety
Cyanogen fluoride undergoes a violent reaction in the presence of boron trifluoride or hydrogen fluoride. Pure gaseous FCN at atmospheric pressure and room temperature is not ignited by a spark or hot wire. FCN–air mixtures, however, are more susceptible to ignition and explosion than pure FCN.
Uses
FCN is useful in synthesis of important compounds such as dyes, fluorescent brighteners and photographic sensitizers. It is also very useful as a fluorinating and nitrilating agent. Beta-fluoronitriles, which are produced when FCN is reacted with olefins, are useful intermediates for preparing polymers, beta-fluorocarboxylic acids and other fluorine containing products. Useful amines can be obtained. Cyanogen fluoride is a very volatile fumigant, disinfectant and animal pest killer.
References
Nonmetal halides
Fluorides
Triatomic molecules
Cyano compounds
Pseudohalogens | Cyanogen fluoride | [
"Physics",
"Chemistry"
] | 969 | [
"Pseudohalogens",
"Inorganic compounds",
"Molecules",
"Salts",
"Triatomic molecules",
"Fluorides",
"Matter"
] |
36,044,328 | https://en.wikipedia.org/wiki/Metal%20hydroxide | In chemistry, metal hydroxides are a family of compounds of the form where M is a metal. They consist of hydroxide () anions and metallic cations, and are often strong bases. Some metal hydroxides, such as alkali metal hydroxides, ionize completely when dissolved. Certain metal hydroxides are weak electrolytes and dissolve only partially in aqueous solution.
Examples
Aluminium hydroxide
Beryllium hydroxide
Cobalt(II) hydroxide
Copper(II) hydroxide
Curium hydroxide
Gold(III) hydroxide
Iron(II) hydroxide
Mercury(II) hydroxide
Nickel(II) hydroxide
Tin(II) hydroxide
Uranyl hydroxide
Zinc hydroxide
Zirconium(IV) hydroxide
Alkali metal hydroxides
Lithium hydroxide
Rubidium hydroxide
Cesium hydroxide
Sodium hydroxide
Potassium hydroxide
Other metal hydroxides
Gallium(III) hydroxide
Lead(II) hydroxide
Thallium(I) hydroxide
Thallium(III) hydroxide
Molecular metal hydroxides
Many metal hydroxides are in fact complexes, i.e. molecules or ions. The transition metal hydroxide complexes are a well developed area in coordination chemistry.
Role in soils
In soils, it is assumed that larger amounts of natural phenols are released from decomposing plant litter rather than from throughfall in any natural plant community. Decomposition of dead plant material causes complex organic compounds to be slowly oxidized (lignin-like humus) or to break down into simpler forms (sugars and amino sugars, aliphatic and phenolic organic acids), which are further transformed into microbial biomass (microbial humus) or are reorganized, and further oxidized, into humic assemblages (fulvic and humic acids), which bind to clay minerals and metal hydroxides.
References
Metals
Hydroxides | Metal hydroxide | [
"Chemistry"
] | 395 | [
"Metals",
"Bases (chemistry)",
"Hydroxides"
] |
36,044,548 | https://en.wikipedia.org/wiki/Tompkins%E2%80%93Paige%20algorithm | The Tompkins–Paige algorithm is a computer algorithm for generating all permutations of a finite set of objects.
The method
Let P and c be arrays of length n with 1-based indexing (i.e. the first entry of an array has index 1). The algorithm for generating all n! permutations of the set {1, 2, ..., n} is given by the following pseudocode:
P ← [1, 2, ..., n];
yield P;
c ← [*, 1, ..., 1]; (the first entry of c is not used)
i ← 2;
while i ≤ n do
left-rotate the first i entries of P;
(e.g. left-rotating the first 4 entries of
[4, 2, 5, 3, 1] would give [2, 5, 3, 4, 1])
if c[i] < i then
c[i] ← c[i] + 1;
i ← 2;
yield P;
else
c[i] ← 1;
i ← i+1;
In the above pseudocode, the statement "yield P" means to output or record the set of permuted indices P. If the algorithm is implemented correctly, P will be yielded exactly n! times, each with a different set of permuted indices.
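A direct transcription of the pseudocode into runnable Python may help; this is an illustrative sketch (the generator name tompkins_paige is ours, not from the original description):

def tompkins_paige(n):
    # Generate all n! permutations of [1..n] by repeated left-rotation.
    P = list(range(1, n + 1))
    yield tuple(P)
    c = [None] + [1] * n          # c[0] is unused, keeping the 1-based indexing
    i = 2
    while i <= n:
        P[:i] = P[1:i] + P[:1]    # left-rotate the first i entries of P
        if c[i] < i:
            c[i] += 1
            i = 2
            yield tuple(P)
        else:
            c[i] = 1
            i += 1

# list(tompkins_paige(3)) yields all 6 permutations, in this order:
# (1,2,3), (2,1,3), (2,3,1), (3,2,1), (3,1,2), (1,3,2)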
This algorithm is not the most efficient one among all existing permutation generation methods. Not only does it have to keep track of an auxiliary counting array (c), redundant permutations are also produced and ignored (because P is not yielded after left-rotation if c[i] ≥ i) in the course of generation. For instance, when n = 4, the algorithm will first yield P = [1,2,3,4] and then generate the other 23 permutations in 40 iterations (i.e. in 17 iterations, there are redundant permutations and P is not yielded). The following lists, in the order of generation, all 41 values of P, where the parenthesized ones are redundant:
P = 1234 c = *111 i=2
P = 2134 c = *211 i=2
P = (1234) c = *111 i=3
P = 2314 c = *121 i=2
P = 3214 c = *221 i=2
P = (2314) c = *121 i=3
P = 3124 c = *131 i=2
P = 1324 c = *231 i=2
P = (3124) c = *131 i=3
P = (1234) c = *111 i=4
P = 2341 c = *112 i=2
P = 3241 c = *212 i=2
P = (2341) c = *112 i=3
P = 3421 c = *122 i=2
P = 4321 c = *222 i=2
P = (3421) c = *122 i=3
P = 4231 c = *132 i=2
P = 2431 c = *232 i=2
P = (4231) c = *132 i=3
P = (2341) c = *112 i=4
P = 3412 c = *113 i=2
P = 4312 c = *213 i=2
P = (3412) c = *113 i=3
P = 4132 c = *123 i=2
P = 1432 c = *223 i=2
P = (4132) c = *123 i=3
P = 1342 c = *133 i=2
P = 3142 c = *233 i=2
P = (1342) c = *133 i=3
P = (3412) c = *113 i=4
P = 4123 c = *114 i=2
P = 1423 c = *214 i=2
P = (4123) c = *114 i=3
P = 1243 c = *124 i=2
P = 2143 c = *224 i=2
P = (1243) c = *124 i=3
P = 2413 c = *134 i=2
P = 4213 c = *234 i=2
P = (2413) c = *134 i=3
P = (4123) c = *114 i=4
P = (1234) c = *111 i=5
References
Combinatorial algorithms
Permutations | Tompkins–Paige algorithm | [
"Mathematics"
] | 927 | [
"Combinatorial algorithms",
"Functions and mappings",
"Permutations",
"Mathematical objects",
"Computational mathematics",
"Combinatorics",
"Mathematical relations"
] |
36,045,116 | https://en.wikipedia.org/wiki/Diphosphorus%20trisulfide | Diphosphorus trisulfide (sometimes called phosphorus trisulfide) is a phosphorus sulfide with the formula P2S3. The substance is highly unstable and difficult to study. In contrast, the formal dimer P4S6 is well-known.
History
Early reports that diphosphorus trisulfide could be formed by heating red phosphorus and sulfur were shown to be incorrect by Helff in 1893. Its existence was again reported by Ralston and Wilkinson in 1928. In 1959, Pitochelli and Audrieth showed that the substance existed by X-ray diffraction but did not succeed in fully isolating it. In 1997, Lohr and Sundholm published a theoretical analysis of the potential structures of this molecular substance.
In 2017, Xiao proposed that a 2D crystallisation of P2S3 was possible based on computer simulations. Xiao suggested that nanoribbons and nanotubes of the material may have applications in semiconductor electronics.
Properties
P2S3 is highly flammable. The solid may spontaneously ignite in moist air or on contact with water, producing phosphoric acid and hydrogen sulfide, a toxic flammable gas. P2S3 is a strong reducing agent and reacts vigorously with oxidizing agents, including inorganic oxoacids, organic peroxides and epoxides. When burned, it produces acidic and corrosive phosphorus pentoxide and sulfur dioxide.
References
Bibliography
Lohr, Lawrence L.; Sundholm, Dage, "An ab initio characterization of diphosphorus trisulfide, ", Journal of Molecular Structure, vol. 413–414, pp. 495–500, 30 September 1997.
Pitochelli, A.R.; Audrieth, L.F., "Concerning the existence of diphosphorus trisulfide", Journal of the American Chemical Society, vol. 81, iss. 17, pp. 4458–4460, 1 September 1959.
Ralston, A.W.; Wilkinson, J.A., "Reactions in liquid hydrogen sulfide. III thiohydrolysis of chlorides", Journal of the American Chemical Society, vol. 50, iss. 2, pp. 258–264, 1 February 1928.
Xiao, Hang, Low-Dimensional Material: Structure-Property Relationship and Applications in Energy and Environmental Engineering (PhD Dissertation), Columbia University ProQuest Dissertations Publishing, no. 10615524, 2017.
Phosphorus compounds
Sulfides | Diphosphorus trisulfide | [
"Chemistry"
] | 508 | [
"Inorganic phosphorus compounds",
"Inorganic compounds"
] |
36,045,138 | https://en.wikipedia.org/wiki/Chamber%20of%20Computer%20Engineers%20of%20Turkey | Chamber of Computer Engineers of Turkey (, abbreviated BMO) was founded on 2 June 2012.
Formerly, computer engineers in Turkey were members of the Chamber of Electrical Engineers of Turkey, but on 9 March 2011 they decided to form their own chamber. The regulatory board announced that each year about 6,500 new computer engineers (including graduates of related undergraduate programmes) graduate from the universities. During the general assembly of the Union of Chambers of Turkish Engineers and Architects (UCTEA) on 2 June 2012, the request was approved, and the chamber became the 24th member of the union.
References
Engineering societies based in Turkey
2012 establishments in Turkey
Organizations established in 2012
Computer engineering | Chamber of Computer Engineers of Turkey | [
"Technology",
"Engineering"
] | 137 | [
"Electrical engineering",
"Computer engineering"
] |
40,214,539 | https://en.wikipedia.org/wiki/Hydroalkoxylation | Hydroalkoxylation is a chemical reaction that combines alcohols with alkenes or alkynes. The process affords ethers.
The reaction converts alkenes to dialkyl or aryl-alkyl ethers:
R'OH + RCH=CH2 → R'OCH(R)-CH3
Similarly, alkynes are converted to vinyl ethers:
R'OH + RC≡CH → R'OC(R)=CH2
As shown, the reaction follows the Markovnikov rule. The process exhibits good atom-economy in the sense that no byproducts are produced. The reaction is catalyzed by bases and also by transition metal complexes. Usually symmetrical ethers are prepared by dehydration of alcohols and unsymmetrical ethers by the Williamson ether synthesis from alkyl halides and alkali metal alkoxides.
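For instance, taking R = R' = CH3 in the first equation gives an illustrative case (our example, not one from the literature cited here): the Markovnikov addition of methanol to propene, in which the methoxy group ends up on the more substituted carbon, yielding 2-methoxypropane (methyl isopropyl ether) rather than linear 1-methoxypropane:

CH3OH + CH3CH=CH2 → CH3OCH(CH3)CH3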
See also
Hydroamination
Hydrofunctionalization
References
Addition reactions
Homogeneous catalysis
Catalysis | Hydroalkoxylation | [
"Chemistry"
] | 209 | [
"Catalysis",
"Homogeneous catalysis",
"Chemical reaction stubs",
"Chemical kinetics",
"Chemical process stubs"
] |
40,217,748 | https://en.wikipedia.org/wiki/Alliance%20for%20Bangladesh%20Worker%20Safety | The Alliance for Bangladesh Worker Safety, also known as "the Alliance" or AFBWS, is a group of 28 major global retailers formed to develop and launch the Bangladesh Worker Safety Initiative, a binding, five-year undertaking with the intent of improving safety in Bangladeshi ready-made garment (RMG) factories after the 2013 Rana Plaza building collapse. Collectively, Alliance members represent the majority of North American imports of ready-made garments from Bangladesh, produced in more than 700 factories.
Background
After the 2013 Savar building collapse, which was caused by structural failure, Walmart became a founding member of the Alliance for Bangladesh Worker Safety. Monsoon had been a member of the Ethical Trading Initiative (ETI) since before the collapse.
The building had housed a number of separate garment factories employing around 5,000 people, several shops, and a bank. The factories manufactured apparel for brands including Benetton, Monsoon Accessorize, Bonmarché, the Children's Place, El Corte Inglés, Joe Fresh, Mango, Matalan, Primark, and Walmart.
Formation of the Alliance
The Alliance was organized through the U.S. Bipartisan Policy Center (BPC) with discussions convened and chaired by former U.S. Senate Majority Leader George J. Mitchell (D-ME) and former U.S. Senator Olympia Snowe (R-ME). The collaborative formation process involved apparel industry companies and stakeholders, including the U.S. and Bangladeshi governments, policymakers, international NGOs, and members of civil society and organized labor in Bangladesh. On July 10, 2013, the group announced the Bangladesh Worker Safety Initiative. The Initiative is a binding, five-year plan focused on fire and building safety inspections, worker training, and worker empowerment.
The Alliance for Bangladesh Worker Safety has elected former US Representative Ellen Tauscher as independent chairperson.
The company responsible for managing the Alliance is ELEVATE, under CEO Ian Spaulding (http://www.elevatelimited.com).
Key Alliance documents
Alliance Action Plan
Alliance Timeline
Member Agreement for companies
Alliance Bylaws
Alliance Statistics - Provides key statistics regarding the status of factory inspections, remediation, training and other worker empowerment initiatives.
Alliance work in Bangladesh
Factory inspections and public reporting
The Alliance helped develop a common Fire Safety and Structural Integrity Standard – founded on the requirements of the 2006 Bangladesh National Building Code (BNBC), though the Standard exceeds those requirements in some cases. The Standard was developed and is being implemented to ensure that all Member factories are held to the same safety requirements. The Standard was developed by technical experts from both the Alliance and the Accord on Fire and Building Safety in Bangladesh, and finalized in December 2013. The Standard has been harmonized with the guidelines developed by the Bangladesh University of Engineering and Technology (BUET) for the National Tripartite Plan of Action (NTPA).
The Alliance has retained a committee of independent fire and structural safety experts from Bangladesh, Europe and North America who are credentialed authorities in fire or building structural safety. The Committee of Experts (COE) is responsible for overseeing the implementation of the Alliance Standard, which includes approving qualified inspectors, conducting spot audits of remediation efforts and validating inspection reports. The Alliance has publicly announced that it aims to inspect all factories producing for its members by July 10, 2014.
The Alliance is working with the Fair Factories Clearinghouse, a non-profit organization that provides software to facilitate information-sharing. Alliance Members use this public platform to provide and exchange information about factories they use, fire and building safety training programs and curricula, and submit monthly reports on safety inspections and progress update on remediation plans being undertaken.
The Standard is used to evaluate all Alliance factories. The Board of Directors issues semi-annual public reports detailing its work and progress toward meeting in-country fire and building safety objectives, as well as training and worker empowerment goals.
In early September 2015, the Alliance recognized six Bangladeshi RMG factories as meeting international standards after they completed all remediation work. These factories are: Green Textile, Kwun Tong Apparel, Laundry Industries, Lenny Apparel, Optimum Fashions, and Univogue.
Worker participation
In November and December 2013, the Alliance conducted a Worker Baseline Survey among more than 3,200 workers in 28 garment factories in Bangladesh. 10 focus groups were conducted off-site with 101 participants in three Bangladeshi regions to obtain more information on fire and other health and safety issues.
The purpose of the survey and the off-site interviews was to better understand the current level of awareness of health and safety risks and what workers believe needs to be done to improve safety and reduce risk in the factories. It is also a tool that will inform the necessary detail of our training programs.
The Alliance's worker helpline and education program will be implemented by three worker empowerment-focused, organizations, working in partnership. Clear Voice, the organization that provides tools for communication with workers, was founded by an early worker rights and human rights pioneer, Doug Cahn. Clear Voice will partner with Phulki, one of Bangladesh's leading worker rights non-governmental organizations (NGO), and Good World Solutions, whose focus will be on applying its Labor Link technology to train workers on their rights and survey them on their wellbeing.
Beginning March 2014, the helpline program will be piloted in 50 factories in Dhaka and Chittagong, with in-factory orientations to accompany the launch at each location. Helplines will roll out to 100 factories by March 2015, with the goal of becoming functional in all Alliance factories by 2017.
Local operations
On December 9, 2013, the Alliance opened an office in Dhaka, Bangladesh, where the Alliance is focused on inspection implementation, development of a worker training curriculum, establishing a worker empowerment helpline, and building local capacity for completing factory improvements. As of February 2014, the Dhaka office serves as the primary hub for staff and organizational activities. All staff members in the Dhaka office are Bangladeshi nationals who bring decades of combined experience in Bangladesh's garment industry. As of March 2014, the team includes a managing director, managers for fire and structural safety, assessments, training, worker outreach and empowerment, factory liaison and remediation, as well as other support staff.
Alliance governance
The board of directors is entrusted with oversight responsibility for Alliance Members' compliance with Initiative requirements, such as meeting financial obligations and self-imposed deadlines for achieving inspections, information-sharing and worker training agreements.
The Board has the authority to investigate possible non-compliance, and take appropriate action against delinquent companies, by a two-thirds majority vote, including termination of membership in the Alliance.
Board of directors
Ellen O'Kane Tauscher - Chairperson
Ambassador James Moriarty, Former U.S. Ambassador to Bangladesh
Wilma Wallace, Vice President Global Responsibility, Business & Human Rights, Gap Inc.
Irene Quarshie, Vice President of Product Safety Quality Assurance & Social Compliance, Target Corporation
Jan Saumweber, Vice President of Responsible Sourcing, Wal-Mart Stores, Inc.
Randy Tucker, Principal, Tucker Consulting Associates
Tom Nelson, Vice President for Global Product Procurement, VF Corporation
Tapan Chowdhury, Founder, Square Textiles Limited
Simone Sultana, chair, BRAC UK
References
2013 in Bangladesh
Fire protection
Clothing industry
Human rights organisations based in Bangladesh
Textile industry of Bangladesh
Working conditions
Occupational safety and health | Alliance for Bangladesh Worker Safety | [
"Engineering"
] | 1,492 | [
"Building engineering",
"Fire protection"
] |
30,864,060 | https://en.wikipedia.org/wiki/Pom-pom | A pom-pom – also spelled pom-pon, pompom or pompon – is a decorative ball or tuft of fibrous material.
The term may refer to large tufts used by cheerleaders, or a small, tighter ball attached to the top of a hat, also known as a bobble or toorie.
Pom-poms may come in many colours, sizes, and varieties and are made from a wide array of materials, including wool, cotton, paper, plastic, thread, glitter and occasionally feathers. Pom-poms are shaken by cheerleaders, pom or dance teams, and sports fans during spectator sports.
Etymology and spelling
Pom-pom, also called a pom or cheerleading pom, is derived from the French word pompon, which refers to a small decorative ball made of fabric or feathers. It also means an "ornamental round tuft" and originally refers to its use on a hat, or an "ornamental tuft; tuft-like flower head."
Webster's Third New International Dictionary (1961) gives the spelling as "pompon."
The New Oxford American Dictionary (third edition, 2010) gives the spelling as "pom-pom."
The American Heritage Dictionary of the English Language (5th edition, 2011) gives the spelling as "pompom" or "pompon."
Webster's New World College Dictionary (fourth edition) gives the spelling as "pompom."
Sports and cheerleading
Cheerleading innovator Lawrence Herkimer received a patent for the pom-pom; his original patent application, in which he called the invention a pom-pon, mentioned that they were made out of crepe paper or similar material. Since then pom-poms have been made of plastic, but mylar (also called BoPET) has become increasingly popular in recent years.
Cheerleading pom-poms come in a variety of shapes, styles, colors, color combinations, and sizes. The most common size works for most age groups and performance types: it can be used by dance teams, pom squads, cheerleaders, and majorettes, easily making it the most versatile strand length on the market. The second most common size is adequate for any age group or performance type, but its marginally shorter strands provide the necessary flash while acting more as an accent to the uniform.
Pom-poms are also waved by sports fans, primarily at college and high school sports events in the United States. These inexpensive, light-weight faux pom-poms, or rooter poms, typically come in team colors, are sometimes given away or sold to spectators at such events.
Clothing
Toorie
In reference to Scottish Highland dress and Scottish military uniforms, the small pom-pom on the crown of such hats as the Balmoral, the Glengarry, and the Tam o' Shanter is called a "toorie."
The toorie is generally made of yarn and is traditionally red on both Balmorals and Glengarries (although specific units have used other colours). It has evolved into the smaller pom-pom found on older-style golf caps and the button atop baseball caps.
The word toorie is used for any such hat decoration in the Scots language, irrespective of the headgear.
Toys and bicycles
Pom-poms are sometimes used as children's toys. They are a common feature at the ends of the handlebars of children's tricycles and bicycles. They are also used in children's artistic crafts to add texture and color.
Gallery
References
Cheerleading
Parts of clothing
Dance props
Textile arts | Pom-pom | [
"Technology"
] | 766 | [
"Components",
"Parts of clothing"
] |
30,864,235 | https://en.wikipedia.org/wiki/TEMPO | (2,2,6,6-Tetramethylpiperidin-1-yl)oxyl or (2,2,6,6-tetramethylpiperidin-1-yl)oxidanyl, commonly known as TEMPO, is a chemical compound with the formula (CH2)3(CMe2)2NO. This heterocyclic compound is a red-orange, sublimable solid. As a stable aminoxyl radical, it has applications in chemistry and biochemistry. TEMPO is used as a radical marker, as a structural probe for biological systems in conjunction with electron spin resonance spectroscopy, as a reagent in organic synthesis, and as a mediator in controlled radical polymerization.
Preparation
TEMPO was discovered by Lebedev and Kazarnowskii in 1960. It is prepared by oxidation of 2,2,6,6-tetramethylpiperidine.
Structure and bonding
The structure has been confirmed by X-ray crystallography. The reactive radical is well shielded by the four methyl groups.
The stability of this radical can be attributed to the delocalization of the radical to form a two-center three-electron N–O bond. The stability is reminiscent of the stability of nitric oxide and nitrogen dioxide. Additional stability is attributed to the steric protection provided by the four methyl groups adjacent to the aminoxyl group. These methyl groups serve as inert substituents, whereas any CH center adjacent to the aminoxyl would be subject to abstraction by the aminoxyl.
Regardless of the reasons for the stability of the radical, the O–H bond in the hydrogenated derivative (the hydroxylamine 1-hydroxy-2,2,6,6-tetramethylpiperidine) TEMPO–H is weak. With an O–H bond dissociation energy of about , this bond is about 30% weaker than a typical O–H bond.
Application in organic synthesis
TEMPO is employed in organic synthesis as a catalyst for the oxidation of primary alcohols to aldehydes. The actual oxidant is the N-oxoammonium salt. In a catalytic cycle with sodium hypochlorite as the stoichiometric oxidant, hypochlorous acid generates the N-oxoammonium salt from TEMPO.
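One common way to sketch this catalytic cycle is as three steps (a simplified scheme; counter-ions, protons and acid-base equilibria are omitted for clarity):

TEMPO + HOCl → N-oxoammonium salt (the active oxidant)
N-oxoammonium salt + RCH2OH → RCHO + TEMPO-H (hydroxylamine)
TEMPO-H + HOCl → TEMPO (catalyst regenerated)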
One typical example is the oxidation of (S)-(−)-2-methyl-1-butanol to (S)-(+)-2-methylbutanal. In another example, 4-methoxyphenethyl alcohol is oxidized to the corresponding carboxylic acid using catalytic TEMPO and sodium hypochlorite together with a stoichiometric amount of sodium chlorite. TEMPO oxidations also exhibit chemoselectivity, being inert towards secondary alcohols, but the reagent will convert aldehydes to carboxylic acids.
TEMPO-mediated oxidation can be highly selective. It has been shown that secondary alcohols are more likely to be oxidized by TEMPO in an acidic environment, because under these conditions they more easily provide an H− ion.
In cases where secondary oxidizing agents cause side reactions, it is possible to stoichiometrically convert TEMPO to the oxoammonium salt in a separate step. For example, in the oxidation of geraniol to geranial, 4-acetamido-TEMPO is first oxidized to the oxoammonium tetrafluoroborate.
TEMPO can also be employed in nitroxide-mediated radical polymerization (NMP), a controlled free radical polymerization technique that allows better control over the final molecular weight distribution. The TEMPO free radical can be added to the end of a growing polymer chain, creating a "dormant" chain that stops polymerizing. However, the linkage between the polymer chain and TEMPO is weak, and can be broken upon heating, which then allows the polymerization to continue. Thus, the chemist can control the extent of polymerization and also synthesize narrowly distributed polymer chains.
Industrial applications and analogues
TEMPO is sufficiently inexpensive for use on a laboratory scale, and industrial-scale manufacturers can supply it in large quantities at a reasonable price. Structurally related analogues exist, largely based on 4-hydroxy-TEMPO (TEMPOL), which is produced from acetone and ammonia via triacetone amine, making it much less expensive. Other alternatives include polymer-supported TEMPO catalysts, which are economical due to their recyclability.
Industrial-scale examples of TEMPO-like compounds include hindered amine light stabilizers and polymerisation inhibitors.
See also
1-Hydroxy-2,2,6,6-tetramethylpiperidine, the reduced derivative of TEMPO
TEMPOL
Bobbitt's salt
N-Hydroxyphthalimide
References
External links
TEMPO
Free radicals
Amine oxides
Piperidines | TEMPO | [
"Chemistry",
"Biology"
] | 1,037 | [
"Free radicals",
"Functional groups",
"Senescence",
"Amine oxides",
"Biomolecules"
] |
30,864,591 | https://en.wikipedia.org/wiki/Networked%20control%20system | A networked control system (NCS) is a control system wherein the control loops are closed through a communication network. The defining feature of an NCS is that control and feedback signals are exchanged among the system's components in the form of information packages through a network.
Overview
The functionality of a typical NCS is established by the use of four basic elements:
Sensors, to acquire information,
Controllers, to provide decision and commands,
Actuators, to perform the control commands and
Communication network, to enable exchange of information.
The most important feature of an NCS is that it connects cyberspace to physical space, enabling tasks to be executed from a distance. In addition, NCSs eliminate unnecessary wiring, reducing the complexity and the overall cost of designing and implementing control systems. They can also be easily modified or upgraded by adding sensors, actuators, and controllers with relatively low cost and no major change in their structure. Furthermore, by sharing data efficiently between their controllers, NCSs can fuse global information to make intelligent decisions over large physical spaces.
Their potential applications are numerous and cover a wide range of industries, such as space and terrestrial exploration, access in hazardous environments, factory automation, remote diagnostics and troubleshooting, experimental facilities, domestic robots, aircraft, automobiles, manufacturing plant monitoring, nursing homes and tele-operations. While the potential applications of NCSs are numerous, the proven applications are few, and the real opportunity in the area of NCSs is in developing real-world applications that realize the area's potential.
Types of communication networks
Fieldbuses, e.g. CAN, LON etc.
IP/Ethernet
Wireless networks, e.g. Bluetooth, Zigbee, and Z-Wave. The term wireless networked control system (WNCS) is often used in this connection.
Problems and solutions
Advent and development of the Internet combined with the advantages provided by NCS attracted the interest of researchers around the globe. Along with the advantages, several challenges also emerged giving rise to many important research topics. New control strategies, kinematics of the actuators in the systems, reliability and security of communications, bandwidth allocation, development of data communication protocols, corresponding fault detection and fault tolerant control strategies, real-time information collection and efficient processing of sensors data are some of the relative topics studied in depth.
The insertion of the communication network in the feedback control loop makes the analysis and design of an NCS complex, since it imposes additional time delays in control loops or possibility of packages loss. Depending on the application, time-delays could impose severe degradation on the system performance.
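The destabilizing effect of such a delay can be illustrated with a minimal discrete-time simulation of a first-order plant under proportional control (an illustrative sketch only; the plant, gain, and delay values are arbitrary and not taken from the studies cited below):

from collections import deque

def simulate(delay_steps, K=2.0, a=1.0, dt=0.05, steps=200):
    # Plant x' = -a*x + u with proportional control u = K*(r - x_measured),
    # where the sensor reading reaches the controller delay_steps samples late.
    x, r = 0.0, 1.0
    buffer = deque([x] * (delay_steps + 1))  # models the network delay line
    for _ in range(steps):
        x_measured = buffer.popleft()        # stale measurement
        u = K * (r - x_measured)
        x += dt * (-a * x + u)               # forward-Euler plant update
        buffer.append(x)
    return x

for d in (0, 10, 30):                        # 0 s, 0.5 s and 1.5 s of delay
    print(d, simulate(d))
# With no delay the loop settles near K*r/(a + K) ≈ 0.67; with 1.5 s of
# delay the same gain produces a growing oscillation, i.e. instability.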
To alleviate the time-delay effect, Y. Tipsuwan and M-Y. Chow, in ADAC Lab at North Carolina State University, proposed the gain scheduler middleware (GSM) methodology and applied it in iSpace. S. Munir and W.J. Book (Georgia Institute of Technology) used a Smith predictor, a Kalman filter and an energy regulator to perform teleoperation through the Internet.
K.C. Lee, S. Lee and H.H. Lee used a genetic algorithm to design a controller used in a NCS. Many other researchers provided solutions using concepts from several control areas such as robust control, optimal stochastic control, model predictive control, fuzzy logic etc.
A most critical and important issue surrounding the design of distributed NCSs with the successively increasing complexity is to meet the requirements on system reliability and dependability, while guaranteeing a high system performance over a wide operating range. This makes network based fault detection and diagnosis techniques, which are essential to monitor the system performance, receive more and more attention.
References
Further reading
D. Hristu-Varsakelis and W. S. Levine (Ed.): Handbook of Networked and Embedded Control Systems, 2005. .
S. Tatikonda, Control under communication constraints, MIT Ph.D dissertation, 2000. http://dspace.mit.edu/bitstream/1721.1/16755/1/48245028.pdf
O. Imer, Optimal estimation and control under communication network constraints, UIUC Ph.D. dissertation, 2005. http://decision.csl.uiuc.edu/~imer/phdsmallfont.pdf
Y. Q. Wang, H. Ye and G. Z. Wang. Fault detection of NCS based on eigendecomposition, adaptive evaluation and adaptive threshold. International Journal of Control, vol. 80, no. 12, pp. 1903–1911, 2007.
M. Mesbahi and M. Egerstedt. Graph Theoretic Methods in Multiagent Networks, Princeton University Press, 2010. . https://sites.google.com/site/mesbahiegerstedt/home
External links
Advanced Diagnosis Automation and Control Lab (NCSU)
Co-design Framework to Integrate Communication, Control, Computation and Energy Management in Networked Control Systems (FeedNetback Project)
Control engineering | Networked control system | [
"Mathematics",
"Engineering"
] | 1,036 | [
"Applied mathematics",
"Control theory",
"Control engineering",
"Dynamical systems"
] |
30,865,207 | https://en.wikipedia.org/wiki/Water%20quality%20modelling | Water quality modeling is the simulation of water quality conditions using mathematical techniques. Water quality modeling helps people understand the significance of water quality issues, and models provide evidence for policy makers to make decisions that properly mitigate water pollution. Water quality modeling also helps determine correlations between constituent sources and water quality, and helps identify information gaps. Because of increasing freshwater use, water quality modeling is relevant at both the local and the global level. In order to understand and predict changes over time in water scarcity, climate change, and the economics of water resources, water quality models need sufficient data, covering water bodies at both local and global levels.
A typical water quality model consists of a collection of formulations representing physical mechanisms that determine position and momentum of pollutants in a water body. Models are available for individual components of the hydrological system such as surface runoff; there also exist basin wide models addressing hydrologic transport and for ocean and estuarine applications. Often finite difference methods are used to analyze these phenomena, and, almost always, large complex computer models are required.
Building A Model
Water quality models contain different information, but generally serve the same purpose: to provide evidentiary support on water issues. Models can be either deterministic or statistical, and the choice of base model depends on whether the study area is local, regional, or global in scale. Another consideration is what needs to be understood or predicted about the research area, along with the parameters that define the research. A further aspect of building a water quality model is knowing the audience and the exact purpose of presenting the data, for example helping water quality law makers improve water quality management for the best possible outcomes.
Formulations and associated Constants
Water quality is modeled by one or more of the following formulations (a worked sketch combining two of them follows the list)
Advective Transport formulation
Dispersive Transport formulation
Surface Heat Budget formulation
Dissolved Oxygen Saturation formulation
Reaeration formulation
Carbonaceous Deoxygenation formulation
Nitrogenous Biochemical Oxygen Demand formulation
Sediment oxygen demand formulation (SOD)
Photosynthesis and Respiration formulation
pH and Alkalinity formulation
Nutrients formulation (fertilizers)
Algae formulation
Zooplankton formulation
Coliform bacteria formulation (e.g. Escherichia coli)
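As the worked sketch promised above, the classical Streeter-Phelps model combines the carbonaceous deoxygenation and reaeration formulations to predict the dissolved-oxygen deficit downstream of a pollution source (illustrative Python; the rate constants and loads are arbitrary examples, not values from any agency model):

import math

def streeter_phelps(L0, D0, kd, kr, t):
    # Dissolved-oxygen deficit D(t) [mg/L] after travel time t [days],
    # given initial BOD L0, initial deficit D0, deoxygenation rate kd
    # and reaeration rate kr (kd != kr), all in 1/day.
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
           + D0 * math.exp(-kr * t)

for t in range(11):   # deficit over the first 10 days of travel time
    print(t, round(streeter_phelps(L0=10.0, D0=1.0, kd=0.3, kr=0.5, t=t), 2))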
SPARROW Models
A SPARROW model is a SPAtially-Referenced Regression on Watershed attributes, which helps integrate water quality data with landscape information. More specifically the USGS used this model to display long-term changes within watersheds to further explain in-stream water measurement in relation to upstream sources, water quality, and watershed properties. These models predict data for various spatial scales and integrate streamflow data with water quality at numerous locations across the US. A SPARROW model used by the USGS focused on the nutrients in the Nation's major rivers and estuaries; this model helped create a better understanding of where nutrients come from, where they are transported to while in the water bodies, and where they end up (reservoirs, other estuaries, etc.).
See also
Hydrological transport models
Stochastic Empirical Loading and Dilution Model
Storm Water Management Model
Volumes of water on earth
Water resources
Water quality
Wastewater quality indicators
Streeter-Phelps equation
PCLake
References
U.S. Environmental Protection Agency (EPA). Environmental Research Laboratory, Athens, GA (1985). "Rates, Constants and Kinetics Formulations in Surface Water Quality Modeling." 2nd ed. Document no. EPA/600/3-85/040.
External links
SPARROW Water-Quality Modeling - US Geological Survey- US Geological Survey
BASINS - EPA environmental analysis system integrating GIS, national watershed data, environmental assessment and modeling tools
Water Quality Models and Tools - EPA
Models for Total Maximum Daily Load Studies - Washington State Department of Ecology
Catchment Modelling Toolkit - eWater Cooperative Research Centre, Australia
Water Evaluation And Planning (WEAP), an integrated water resources planning model, including water quality - Stockholm Environmental Institute (US)
Stochastic Empirical Loading and Dilution Model (SELDM) - US Geological Survey stormwater quality model
U.S. Army Corps of Engineers Water Quality - New water quality modeling software developed by the U.S. Army Corps of Engineers
Environmental science
Ecological experiments
Aquatic ecology
Chemical oceanography
Environmental engineering
Quality | Water quality modelling | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 861 | [
"Chemical engineering",
"Chemical oceanography",
"Civil engineering",
"Ecosystems",
"nan",
"Environmental engineering",
"Aquatic ecology"
] |
30,865,488 | https://en.wikipedia.org/wiki/Complementarity%20%28molecular%20biology%29 | In molecular biology, complementarity describes a relationship between two structures each following the lock-and-key principle. In nature complementarity is the base principle of DNA replication and transcription as it is a property shared between two DNA or RNA sequences, such that when they are aligned antiparallel to each other, the nucleotide bases at each position in the sequences will be complementary, much like looking in the mirror and seeing the reverse of things. This complementary base pairing allows cells to copy information from one generation to another and even find and repair damage to the information stored in the sequences.
The degree of complementarity between two nucleic acid strands may vary, from complete complementarity (each nucleotide is across from its opposite) to no complementarity (each nucleotide is not across from its opposite) and determines the stability of the sequences to be together. Furthermore, various DNA repair functions as well as regulatory functions are based on base pair complementarity. In biotechnology, the principle of base pair complementarity allows the generation of DNA hybrids between RNA and DNA, and opens the door to modern tools such as cDNA libraries.
While most complementarity is seen between two separate strings of DNA or RNA, it is also possible for a sequence to have internal complementarity resulting in the sequence binding to itself in a folded configuration.
DNA and RNA base pair complementarity
Complementarity is achieved by distinct interactions between nucleobases: adenine, thymine (uracil in RNA), guanine and cytosine. Adenine and guanine are purines, while thymine, cytosine and uracil are pyrimidines. Purines are larger than pyrimidines. Both types of molecules complement each other and can only base pair with the opposing type of nucleobase. In nucleic acid, nucleobases are held together by hydrogen bonding, which only works efficiently between adenine and thymine and between guanine and cytosine. The base complement A = T shares two hydrogen bonds, while the base pair G ≡ C has three hydrogen bonds. All other configurations between nucleobases would hinder double helix formation. DNA strands are oriented in opposite directions; they are said to be antiparallel.
A complementary strand of DNA or RNA may be constructed based on nucleobase complementarity. Each base pair, A = T vs. G ≡ C, takes up roughly the same space, thereby enabling a twisted DNA double helix formation without any spatial distortions. Hydrogen bonding between the nucleobases also stabilizes the DNA double helix.
Complementarity of DNA strands in a double helix make it possible to use one strand as a template to construct the other. This principle plays an important role in DNA replication, setting the foundation of heredity by explaining how genetic information can be passed down to the next generation. Complementarity is also utilized in DNA transcription, which generates an RNA strand from a DNA template. In addition, human immunodeficiency virus, a single-stranded RNA virus, encodes an RNA-dependent DNA polymerase (reverse transcriptase) that uses complementarity to catalyze genome replication. The reverse transcriptase can switch between two parental RNA genomes by copy-choice recombination during replication.
DNA repair mechanisms such as proof reading are complementarity based and allow for error correction during DNA replication by removing mismatched nucleobases. In general, damages in one strand of DNA can be repaired by removal of the damaged section and its replacement by using complementarity to copy information from the other strand, as occurs in the processes of mismatch repair, nucleotide excision repair and base excision repair.
Nucleic acids strands may also form hybrids in which single stranded DNA may readily anneal with complementary DNA or RNA. This principle is the basis of commonly performed laboratory techniques such as the polymerase chain reaction, PCR.
Two strands of complementary sequence are referred to as sense and anti-sense. The sense strand is, generally, the transcribed sequence of DNA or the RNA that was generated in transcription, while the anti-sense strand is the strand that is complementary to the sense sequence.
Self-complementarity and hairpin loops
Self-complementarity refers to the fact that a sequence of DNA or RNA may fold back on itself, creating a double-strand like structure. Depending on how close together the parts of the sequence are that are self-complementary, the strand may form hairpin loops, junctions, bulges or internal loops. RNA is more likely to form these kinds of structures due to base pair binding not seen in DNA, such as guanine binding with uracil.
Regulatory functions
Complementarity can be found between short nucleic acid stretches and a coding region or a transcribed gene, and results in base pairing. These short nucleic acid sequences are commonly found in nature and have regulatory functions such as gene silencing.
Antisense transcripts
Antisense transcripts are stretches of non-coding mRNA that are complementary to the coding sequence. Genome-wide studies have shown that RNA antisense transcripts occur commonly in nature. They are generally believed to increase the coding potential of the genetic code and add an overall layer of complexity to gene regulation. So far, it is known that 40% of the human genome is transcribed in both directions, underlining the potential significance of antisense transcription.
It has been suggested that complementary regions between sense and antisense transcripts would allow generation of double stranded RNA hybrids, which may play an important role in gene regulation. For example, hypoxia-induced factor 1α mRNA and β-secretase mRNA are transcribed bidirectionally, and it has been shown that the antisense transcript acts as a stabilizer to the sense script.
miRNAs and siRNAs
miRNAs, microRNAs, are short RNA sequences that are complementary to regions of a transcribed gene and have regulatory functions. Current research indicates that circulating miRNAs may be usable as novel biomarkers, showing promise for disease diagnostics. MiRNAs are formed from longer RNA sequences that are cut free by the Dicer enzyme from an RNA transcript of a regulatory gene. These short strands bind to the RISC complex. They match up with sequences in the upstream region of a transcribed gene due to their complementarity and act as silencers of the gene in three ways. One is by preventing a ribosome from binding and initiating translation. Two is by degrading the mRNA that the complex has bound to. And three is by providing a new double-stranded RNA (dsRNA) sequence that Dicer can act upon to create more miRNA to find and degrade more copies of the gene. Small interfering RNAs (siRNAs) are similar in function to miRNAs; they come from other sources of RNA, but serve a similar purpose to miRNAs.
Given their short length, the rules for complementarity mean that they can still be very discriminating in their choice of targets. Given that there are four choices for each base in the strand and a 20bp - 22bp length for a mi/siRNA, there are more than 4^20 ≈ 10^12 possible combinations. Given that the human genome is ~3.1 billion bases in length, each miRNA should find a match at most about once in the entire human genome by accident.
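The back-of-the-envelope estimate can be reproduced in a few lines (an illustrative calculation; the factor of two counts both strands, and all figures are order-of-magnitude only):

n_sequences = 4 ** 20                  # distinct 20-base sequences
genome_sites = 2 * 3_100_000_000       # approximate positions on both strands
print(n_sequences)                     # 1099511627776, i.e. about 1.1e12
print(genome_sites / n_sequences)      # about 0.006 expected chance matches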
Kissing hairpins
Kissing hairpins are formed when a single strand of nucleic acid complements with itself, creating loops of RNA in the form of a hairpin. When two hairpins come into contact with each other in vivo, the complementary bases of the two strands pair up and begin to unwind the hairpins until a double-stranded RNA (dsRNA) complex is formed or the complex unwinds back to two separate strands due to mismatches in the hairpins. The secondary structure of the hairpin prior to kissing allows for a stable structure with a relatively fixed change in energy. The purpose of these structures is a balancing of stability of the hairpin loop versus binding strength with a complementary strand. Too strong an initial binding to a bad location and the strands will not unwind quickly enough; too weak an initial binding and the strands will never fully form the desired complex. These hairpin structures allow for the exposure of enough bases to provide a strong enough check on the initial binding and a weak enough internal binding to allow the unfolding once a favorable match has been found.
---C G---
C G ---C G---
U A C G
G C U A
C G G C
A G C G
A A A G
C U A A
U CUU ---CCUGCAACUUAGGCAGG---
A GAA ---GGACGUUGAAUCCGUCC---
G A U U
U U U C
U C G C
G C C G
C G A U
A U G C
G C ---G C---
---G C---
Kissing hairpins meeting up at the top of the loops. The complementarity
of the two heads encourages the hairpin to unfold and straighten out to
become one flat sequence of two strands rather than two hairpins.
Bioinformatics
Complementarity allows information found in DNA or RNA to be stored in a single strand. The complementing strand can be determined from the template and vice versa as in cDNA libraries. This also allows for analysis, like comparing the sequences of two different species. Shorthands have been developed for writing down sequences when there are mismatches (ambiguity codes) or to speed up how to read the opposite sequence in the complement (ambigrams).
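A minimal sketch of deriving the complementary strand from a template, as routinely done in sequence-analysis software (illustrative code, not taken from any particular tool):

def reverse_complement(seq):
    # Return the reverse complement of a DNA sequence, read 5' to 3'.
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq.upper()))

print(reverse_complement("GTCA"))   # prints TGAC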
cDNA Library
A cDNA library is a collection of expressed DNA genes that are seen as a useful reference tool in gene identification and cloning processes. cDNA libraries are constructed from mRNA using RNA-dependent DNA polymerase reverse transcriptase (RT), which transcribes an mRNA template into DNA. Therefore, a cDNA library can only contain inserts that are meant to be transcribed into mRNA. This process relies on the principle of DNA/RNA complementarity. The end product of the libraries is double stranded DNA, which may be inserted into plasmids. Hence, cDNA libraries are a powerful tool in modern research.
Ambiguity codes
When writing sequences for systematic biology it may be necessary to have IUPAC codes that mean "any of the two" or "any of the three". The IUPAC code R (any purine) is complementary to Y (any pyrimidine) and M (amino) to K (keto). W (weak) and S (strong) are usually not swapped, although they have been swapped in the past by some tools. W and S denote "weak" and "strong", respectively, and indicate the number of hydrogen bonds that a nucleotide uses to pair with its complementary partner; since the partner uses the same number of bonds, W is complementary to W and S to S.
An IUPAC code that specifically excludes one of the three nucleotides can be complementary to an IUPAC code that excludes the complementary nucleotide. For instance, V (A, C or G - "not T") can be complementary to B (C, G or T - "not A").
Ambigrams
Specific characters may be used to create a suitable (ambigraphic) nucleic acid notation for complementary bases (i.e. guanine = b, cytosine = q, adenine = n, and thymine = u), which makes it possible to complement entire DNA sequences by simply rotating the text "upside down". For instance, with this alphabet GTCA is written "buqn", which read upside down gives "ubnq", i.e. TGAC, the reverse complement.
Ambigraphic notations readily visualize complementary nucleic acid stretches such as palindromic sequences. This feature is enhanced when utilizing custom fonts or symbols rather than ordinary ASCII or even Unicode characters.
See also
Base pair
References
External links
Reverse complement tool
Reverse Complement Tool @ DNA.UTAH.EDU
Molecular biology | Complementarity (molecular biology) | [
"Chemistry",
"Biology"
] | 2,517 | [
"Biochemistry",
"Molecular biology"
] |
30,865,670 | https://en.wikipedia.org/wiki/Wolff%27s%20law | Wolff's law, developed by the German anatomist and surgeon Julius Wolff (1836–1902) in the 19th century, states that bone in a healthy animal will adapt to the loads under which it is placed. If loading on a particular bone increases, the bone will remodel itself over time to become stronger to resist that sort of loading. The internal architecture of the trabeculae undergoes adaptive changes, followed by secondary changes to the external cortical portion of the bone, perhaps becoming thicker as a result. The inverse is true as well: if the loading on a bone decreases, the bone will become less dense and weaker due to the lack of the stimulus required for continued remodeling. This reduction in bone density (osteopenia) is known as stress shielding and can occur as a result of a hip replacement (or other prosthesis). The normal stress on a bone is shielded from that bone by being placed on a prosthetic implant.
Mechanotransduction
The remodeling of bone in response to loading is achieved via mechanotransduction, a process through which forces or other mechanical signals are converted to biochemical signals in cellular signaling. Mechanotransduction leading to bone remodeling involves the steps of mechanocoupling, biochemical coupling, signal transmission, and cell response. The specific effects on bone structure depend on the duration, magnitude, and rate of loading, and it has been found that only cyclic loading can induce bone formation. When loaded, fluid flows away from areas of high compressive loading in the bone matrix. Osteocytes are the most abundant cells in bone and are also the most sensitive to such fluid flow caused by mechanical loading. Upon sensing a load, osteocytes regulate bone remodeling by signaling to other cells with signaling molecules or direct contact. Additionally, osteoprogenitor cells, which may differentiate into osteoblasts or osteoclasts, are also mechanosensors and will differentiate depending on the loading condition.
Computational models suggest that mechanical feedback loops can stably regulate bone remodeling by reorienting trabeculae in the direction of the mechanical loads.
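A toy version of such a feedback loop can be written in a few lines (purely illustrative; the setpoint, gain and units are arbitrary and are not taken from the modelling literature):

def remodel(density, load, setpoint=1.0, gain=0.05, steps=200):
    # Density drifts up when the mechanical stimulus (load per unit density)
    # exceeds a setpoint, and down when it falls below it.
    for _ in range(steps):
        stimulus = load / density
        density += gain * (stimulus - setpoint)
        density = max(density, 0.1)   # bone cannot vanish entirely
    return density

print(remodel(1.0, load=1.5))   # higher habitual load: density approaches 1.5
print(remodel(1.0, load=0.5))   # disuse: density falls toward 0.5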
Associated laws
In relation to soft tissue, Davis' law explains how soft tissue remodels itself according to imposed demands.
Refinement of Wolff's Law: Utah-Paradigm of Bone physiology (Mechanostat Theorem) by Harold Frost.
Examples
The racquet-holding arm bones of tennis players become stronger than those of the other arm. Their bodies have strengthened the bones in their racquet-holding arm, since it is routinely placed under higher than normal stresses. The most critical loads on a tennis player's arms occur during the serve. There are four main phases of a tennis serve, and the highest loads occur during external shoulder rotation and ball impact. The combination of high load and arm rotation results in a twisted bone density profile.
Weightlifters often display increases in bone density in response to their training.
Astronauts often suffer from the reverse: being in a microgravity environment, they tend to lose bone density.
The deforming effects of torticollis on craniofacial development in children.
See also
Functional matrix hypothesis
Iron Shirt, Wushu/Kungfu bone conditioning
Osteogenic loading
References
Das Gesetz der Transformation der Knochen - 1892. Reprint: Pro Business, Berlin 2010, .
External links
Julius Wolff Institut, Charité - Universitätsmedizin Berlin, main research areas are the regeneration and biomechanics of the musculoskeletal system and the improvement of joint replacement.
Musculoskeletal system
Biological defense mechanisms | Wolff's law | [
"Biology"
] | 765 | [
"Behavior",
"Biological interactions",
"Biological defense mechanisms",
"Organ systems",
"Musculoskeletal system"
] |
30,865,852 | https://en.wikipedia.org/wiki/O-minimal%20theory | In mathematical logic, and more specifically in model theory, an infinite structure (M,<,...) that is totally ordered by < is called an o-minimal structure if and only if every definable subset X ⊆ M (with parameters taken from M) is a finite union of intervals and points.
O-minimality can be regarded as a weak form of quantifier elimination. A structure M is o-minimal if and only if every formula with one free variable and parameters in M is equivalent to a quantifier-free formula involving only the ordering, also with parameters in M. This is analogous to minimal structures, which satisfy the analogous property with respect to equality.
A theory T is an o-minimal theory if every model of T is o-minimal. It is known that the complete theory T of an o-minimal structure is an o-minimal theory. This result is remarkable because, in contrast, the complete theory of a minimal structure need not be a strongly minimal theory, that is, there may be an elementarily equivalent structure that is not minimal.
Set-theoretic definition
O-minimal structures can be defined without recourse to model theory. Here we define a structure on a nonempty set M in a set-theoretic manner, as a sequence S = (Sn), n = 0,1,2,... such that
Sn is a boolean algebra of subsets of Mn
if D ∈ Sn then M × D and D ×M are in Sn+1
the set {(x1,...,xn) ∈ Mn : x1 = xn} is in Sn
if D ∈ Sn+1 and π : Mn+1 → Mn is the projection map on the first n coordinates, then π(D) ∈ Sn.
For a subset A of M, we consider the smallest structure S(A) containing S such that every finite subset of A is contained in S1. A subset D of Mn is called A-definable if it is contained in Sn(A); in that case A is called a set of parameters for D. A subset is called definable if it is A-definable for some A.
If M has a dense linear order without endpoints on it, say <, then a structure S on M is called o-minimal (with respect to <) if it satisfies the extra axioms
the set < (= {(x,y) ∈ M2 : x < y}) is in S2
the definable subsets of M are precisely the finite unions of intervals and points.
The "o" stands for "order", since any o-minimal structure requires an ordering on the underlying set.
Model theoretic definition
O-minimal structures originated in model theory and so have a simpler — but equivalent — definition using the language of model theory. Specifically if L is a language including a binary relation <, and (M,<,...) is an L-structure where < is interpreted to satisfy the axioms of a dense linear order, then (M,<,...) is called an o-minimal structure if for any definable set X ⊆ M there are finitely many open intervals I1,..., Ir in M ∪ {±∞} and a finite set X0 such that X = X0 ∪ I1 ∪ ... ∪ Ir.
Examples
Examples of o-minimal theories are:
The complete theory of dense linear orders in the language with just the ordering.
RCF, the theory of real closed fields.
The complete theory of the real field with restricted analytic functions added (i.e. analytic functions on a neighborhood of [0,1]n, restricted to [0,1]n; note that the unrestricted sine function has infinitely many roots, and so cannot be definable in an o-minimal structure.)
The complete theory of the real field with a symbol for the exponential function by Wilkie's theorem. More generally, the complete theory of the real numbers with Pfaffian functions added.
The last two examples can be combined: given any o-minimal expansion of the real field (such as the real field with restricted analytic functions), one can define its Pfaffian closure, which is again an o-minimal structure. (The Pfaffian closure of a structure is, in particular, closed under Pfaffian chains where arbitrary definable functions are used in place of polynomials.)
In the case of RCF, the definable sets are the semialgebraic sets. Thus the study of o-minimal structures and theories generalises real algebraic geometry. A major line of current research is based on discovering expansions of the real ordered field that are o-minimal. Despite the generality of application, one can show a great deal about the geometry of set definable in o-minimal structures. There is a cell decomposition theorem, Whitney and Verdier stratification theorems and a good notion of dimension and Euler characteristic.
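A concrete one-variable instance in RCF (a standard illustrative example added here): the formula x^3 − x > 0 defines the set

{x ∈ R : x^3 − x > 0} = (−1, 0) ∪ (1, ∞),

since x^3 − x = x(x − 1)(x + 1); the definable set is exactly the finite union of intervals that o-minimality promises.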
Moreover, continuously differentiable definable functions in an o-minimal structure satisfy a generalization of the Łojasiewicz inequality, a property that has been used to guarantee the convergence of some non-smooth optimization methods, such as the stochastic subgradient method (under some mild assumptions).
See also
Semialgebraic set
Real algebraic geometry
Strongly minimal theory
Weakly o-minimal structure
C-minimal theory
Tame topology
Notes
References
External links
Model Theory preprint server
Real Algebraic and Analytic Geometry Preprint Server
Mathematical logic
Model theory
Real algebraic geometry
Topology | O-minimal theory | [
"Physics",
"Mathematics"
] | 1,162 | [
"Mathematical logic",
"Topology",
"Space",
"Model theory",
"Geometry",
"Spacetime"
] |
43,156,192 | https://en.wikipedia.org/wiki/Helion%20Energy | Helion Energy, Inc. is an American fusion research company, located in Everett, Washington. They are developing a magneto-inertial fusion technology to produce helium-3 and fusion power via aneutronic fusion, which could produce low-cost clean electric energy using a fuel that can be derived exclusively from water.
History
The company was founded in 2013 by David Kirtley, John Slough, Chris Pihl, and George Votroubek. The management team won the 2013 National Cleantech Open Energy Generation competition and awards at the 2014 ARPA-E Future Energy Startup competition, were members of the 2014 Y Combinator program, and were awarded a 2015 ARPA-E ALPHA contract, "Staged Magnetic Compression of FRC Targets to Fusion Conditions".
In 2022, the company was one of five finalists for the 2022 GeekWire Awards for innovation of the year, specifically for fusion energy start up category.
In 2023, the company was one of five finalists for the 2023 GeekWire Best workplaces of the year.
On May 10, 2023, Helion Energy announced that Microsoft will become the first customer of Helion Energy, and Helion Energy will provide fusion power to Microsoft starting in 2028.
Technology
This system is intended to operate at 1 Hz, injecting plasma, compressing it to fusion conditions, expanding it, and recovering the energy to produce electricity. The pulsed-fusion system that is used is theoretically able to run 24/7 for electricity production. Due to its compact size, the systems may be able to replace current fossil fuel infrastructure without major needs for investment.
Fuel
Helion uses a combination of deuterium and helium-3 (3He) as fuel. Deuterium and 3He allow mostly aneutronic fusion, releasing only 5% of the energy in the form of fast neutrons. Commercial 3He is rare and expensive. Instead Helion produces 3He by deuteron-deuteron (D-D) side reactions to the deuterium - 3He reactions. D-D fusion has an equal chance of producing a 3He atom plus a neutron and of producing a tritium atom plus a proton. Tritium beta decays into more 3He with a half-life of 12.32 years. Helion plans to capture the 3He produced this way and reuse it as fuel. Helion has a patent on this process.
Confinement
This fusion approach uses the magnetic field of a field-reversed configuration (FRC) plasmoid (operated with solid state electronics derived from power switching electronics in wind turbines) to prevent plasma energy losses. An FRC is a magnetized plasma configuration notable for its closed field lines, high beta and lack of internal penetrations.
Compression
Two FRC plasmoids are accelerated to velocities exceeding 300 km/s with pulsed magnetic fields which then merge into a single plasmoid at high pressure. Published plans target compressing fusion plasmas to 12 tesla (T).
Energy generation
Energy is captured by direct energy conversion, which uses the expansion of the plasma to induce a current in the magnetic compression and acceleration coils. Separately, it translates high-energy fusion products, such as alpha particles, directly into a voltage. 3He produced by D-D fusion carries 0.82 MeV of energy. Tritium byproducts carry 1.01 MeV, while the proton produces 3.02 MeV.
This approach eliminates the need for steam turbines, cooling towers, and their associated energy losses. According to the company, this process also allows the recovery of a significant part of the input energy at a round-trip efficiency of over 95%.
Development history
The company's Fusion Engine is based on the Inductive Plasmoid Accelerator (IPA) experiments performed from 2005 through 2012. These experiments used deuterium-deuterium fusion, which produced a 2.45 MeV neutron in half of the reactions. The IPA experiments claimed 300 km/s velocities, deuterium neutron production, and 2 keV deuterium ion temperatures. Helion and MSNW published articles describing a deuterium-tritium implementation that is the easiest to achieve but generates 14 MeV neutrons. The Helion team published peer-reviewed research demonstrating D-D neutron production in 2011.
4th prototype, 'Grande'
In 2014, according to the timeline on the company website, Grande, Helion's 4th fusion prototype, was developed to test high-field operation. Grande achieved magnetic field compression of 4 tesla, formed cm-scale FRCs, and reached plasma temperatures of 5 keV. According to the company, Grande outperformed the devices of other private fusion companies at the time.
In 2015, Helion demonstrated the first direct magnetic energy recovery from a subscale pulsed magnetic system, utilizing modern high-voltage insulated gate bipolar transistors to recover energy at over 95% round-trip efficiency for over 1 million pulses. In a smaller system, the team demonstrated the formation of more than 1 billion FRCs.
5th prototype, 'Venti'
In 2018, the 5th prototype, "Venti" had magnetic fields of 7T and at high density, an ion temperature of 2 keV. Helion detailed D-D fusion experiments producing neutrons in an October 2018 report at the United States Department of Energy's ARPA-E's annual ALPHA program meeting. Experiments that year achieved plasmas with multi-keV temperatures and a triple product of .
6th prototype, 'Trenta'
In 2021, the firm announced that after a 16-month test cycle with more than 10,000 pulses, its sixth prototype, Trenta, had reached 100 million degrees C, the temperature they would run a commercial reactor at. Magnetic compression fields exceeded 10 T, ion temperatures surpassed 8 keV, and electron temperatures exceeded 1 keV. The company further reported ion densities up to and confinement times of up to 0.5 ms.
7th prototype, 'Polaris'
Helion's seventh-generation prototype, Project Polaris, has been in development since 2021, with completion expected in 2024. The device is expected to increase the pulse rate from one pulse every 10 minutes to one pulse per second for short periods. This prototype is expected to be able to heat fusion plasma to temperatures greater than 100 million degrees C. Polaris is planned to be 25% larger than Trenta to ensure that ions do not damage the vessel walls.
8th prototype
An eighth iteration was reported to be in the design stage.
Overview
Funding
Helion Energy received $7 million in funding from NASA, the United States Department of Energy and the Department of Defense, followed by $1.5 million from the private sector in August 2014, through the seed accelerators Y Combinator and Mithril Capital Management.
In 2021, the company was valued at three billion dollars. As of late 2021, investment totaled $77.8M. In November 2021, Helion received $500 million in Series E funding, with an additional $1.7 billion of commitments tied to specific milestones. The funding was mainly led by Sam Altman, CEO of OpenAI, who is also the executive chairman of Helion.
Criticism
Retired Princeton Plasma Physics Laboratory researcher Daniel Jassby mentioned Helion Energy in a letter included in the American Physical Society newsletter Physics & Society (April 2019) as being among fusion start-ups allegedly practicing "voodoo fusion" rather than legitimate science. He noted that the company is one of several that has continually claimed "power in 5 to 10 years, but almost all have apparently never produced a single D-D fusion reaction". However, Helion published peer-reviewed research demonstrating D-D neutron production as early as 2011 and according to the independent JASON review team, VENTI, a sub-scale prototype Helion developed partially for the ALPHA program, achieved initial results of seconds energy confinement time and a temperature of 2 keV in 2018. In 2020 Helion was the first private company to successfully demonstrate thermonuclear fusion plasmas exceeding 9 keV with expected D-D fusion reactions and neutrons and a triple product greater than Lawson criterion.
The same 2018 MITRE/JASON report, commissioned by the US Department of Energy's ARPA-E, said that Helion project leads or literature stated that they need a 40 tesla magnetic field for commercial viability, had the capability for an 8 Tesla field in their prototype, and projected they would achieve breakeven in 2023. The report stated that the primary challenge with Helion's approach is "whether they can simultaneously achieve sufficiently high compression while maintaining plasma stability". As of 2023, their prototype has a 10 tesla field and they project breakeven in 2024.
See also
Fusion Industry Association
General Fusion
TAE Technologies
References
External links
Accelerator physics
Fusion power companies
Engineering companies of the United States
Technology companies of the United States | Helion Energy | [
"Physics"
] | 1,795 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
43,156,657 | https://en.wikipedia.org/wiki/Interstitial%20site | In crystallography, interstitial sites, holes or voids are the empty space that exists between the packing of atoms (spheres) in the crystal structure.
The holes are easy to see if you try to pack circles together; no matter how close you get them or how you arrange them, you will have empty space in between. The same is true in a unit cell; no matter how the atoms are arranged, there will be interstitial sites present between the atoms. These sites or holes can be filled with other atoms (interstitial defect). The picture with packed circles is only a 2D representation. In a crystal lattice, the atoms (spheres) would be packed in a 3D arrangement. This results in different shaped interstitial sites depending on the arrangement of the atoms in the lattice.
Close packed
A close-packed unit cell, whether face-centered cubic or hexagonal close packed, can form two different shaped holes. Looking at the three green spheres in the hexagonal packing illustration at the top of the page, they form a triangle-shaped hole. If an atom is arranged on top of this triangular hole, it forms a tetrahedral interstitial hole. If the three atoms in the layer above are rotated so that their triangular hole sits on top of this one, it forms an octahedral interstitial hole. In a close-packed structure there are 4 atoms per unit cell, and it will have 4 octahedral voids (1:1 ratio) and 8 tetrahedral voids (1:2 ratio) per unit cell. The tetrahedral void is smaller and could fit an atom with a radius 0.225 times the size of the atoms making up the lattice. An octahedral void could fit an atom with a radius 0.414 times the size of the atoms making up the lattice. An atom that fills this empty space could be larger than this ideal radius ratio, which would lead to a distorted lattice by pushing out the surrounding atoms, but it cannot be smaller than this ratio.
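The ideal ratios quoted above follow from hard-sphere geometry. For an FCC cell of edge a, the face diagonal is 4R, so a = 2\sqrt{2}\,R; a brief derivation under the usual rigid-sphere assumption gives:

\text{octahedral:}\quad R + r = \frac{a}{2} = \sqrt{2}\,R \;\Rightarrow\; \frac{r}{R} = \sqrt{2} - 1 \approx 0.414

\text{tetrahedral:}\quad R + r = \frac{\sqrt{3}}{4}\,a = \sqrt{\tfrac{3}{2}}\,R \;\Rightarrow\; \frac{r}{R} = \sqrt{\tfrac{3}{2}} - 1 \approx 0.225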
Face-centered cubic (FCC)
If half of the tetrahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the zincblende crystal structure. If all the tetrahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the fluorite structure or antifluorite structure. If all the octahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the rock-salt structure.
Hexagonal close packed (HCP)
If half of the tetrahedral sites of the parent HCP lattice are filled by ions of opposite charge, the structure formed is the wurtzite crystal structure. If all the octahedral sites of the anion HCP lattice are filled by cations, the structure formed is the nickel arsenide structure.
Simple cubic
A simple cubic unit cell, with stacks of atoms arranged as if at the eight corners of a cube would form a single cubic hole or void in the center. If these voids are occupied by ions of opposite charge from the parent lattice, the cesium chloride structure is formed.
Body-centered cubic (BCC)
A body-centered cubic unit cell has six octahedral voids located at the center of each face of the unit cell, and twelve further ones located at the midpoint of each edge of the same cell, for a total of six net octahedral voids. Additionally, there are 24 tetrahedral voids located in a square spacing around each octahedral void, for a total of twelve net tetrahedral voids. These tetrahedral voids are not local maxima and are not technically voids, but they do occasionally appear in multi-atom unit cells.
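Under the same rigid-sphere assumption (BCC cell edge a = 4R/\sqrt{3}), both BCC interstices are smaller than their close-packed counterparts and geometrically distorted, and the distorted octahedral hole is in fact smaller than the tetrahedral one; the commonly quoted size ratios are:

\text{octahedral:}\quad \frac{r}{R} = \frac{2}{\sqrt{3}} - 1 \approx 0.155, \qquad \text{tetrahedral:}\quad \frac{r}{R} = \sqrt{\tfrac{5}{3}} - 1 \approx 0.291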
Interstitial defect
An interstitial defect refers to additional atoms occupying some interstitial sites at random as crystallographic defects in a crystal which normally has empty interstitial sites by default.
References
Crystallography
Crystals
Materials science | Interstitial site | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 839 | [
"Applied and interdisciplinary physics",
"Materials science",
"Crystallography",
"Crystals",
"Condensed matter physics",
"nan"
] |
43,161,284 | https://en.wikipedia.org/wiki/List%20of%20common%20EMC%20test%20standards | The following list outlines a number of electromagnetic compatibility (EMC) standards which are known at the time of writing to be either available or have been made available for public comment. These standards attempt to standardize product EMC performance, with respect to conducted or radiated radio interference from electrical or electronic equipment, imposition of other types of disturbance on the mains supply by such equipment, and the sensitivity of such equipment to received interference.
The legal status of these standards varies according to the jurisdiction. Standards called up by the European Union's EMC Directive effectively have the force of law in the EU.
IEC standards
The IEC standards on Electromagnetic compatibility (EMC) are mostly part of the IEC 61000 family. Below are some examples.
IEC/TR EN 61000-1-1, Electromagnetic compatibility (EMC) - Part 1: General - Section 1: Application and interpretation of fundamental definitions and terms
IEC/TR EN 61000-2-1, Electromagnetic compatibility (EMC) - Part 2: Environment - Section 1: Description of the environment - Electromagnetic environment for low-frequency conducted disturbances and signaling in public power supply systems
IEC/TR EN 61000-2-3, Electromagnetic compatibility (EMC) - Part 2: Environment - Section 3: Description of the environment - Radiated and non-network-frequency-related conducted phenomena
IEC EN 61000-3-2, Electromagnetic compatibility (EMC) - Part 3-2 - Limits - Limits for harmonic current emissions (equipment input current ≤ 16 A per phase)
IEC EN 61000-3-3, Electromagnetic compatibility (EMC) - Part 3-3 - Limits - Limitation of voltage changes, voltage fluctuations and flicker in public low-voltage supply systems, for equipment with rated current ≤ 16 A per phase and not subject to conditional connection
IEC EN 61000-3-4, Electromagnetic compatibility (EMC) - Part 3-4: Limits - Limitation of emission of harmonic currents in low-voltage power supply systems for equipment with rated current greater than 16 A (note: for currents > 16 A and ≤ 75 A per phase this standard should be replaced with IEC EN 61000-3-12)
IEC/TS EN 61000-3-5, Electromagnetic compatibility (EMC) - Part 3-5: Limits - Limitation of voltage fluctuations and flicker in low-voltage power supply systems for equipment with rated current greater than 75 A
IEC EN 61000-3-11, Electromagnetic compatibility (EMC) - Part 3-11: Limits - Limitation of voltage changes, voltage fluctuations and flicker in public low-voltage supply systems - Equipment with rated current ≤ 75 A and subject to conditional connection
IEC EN 61000-3-12, Electromagnetic compatibility (EMC) - Part 3-12: Limits - Limits for harmonic currents produced by equipment connected to public low-voltage systems with input current > 16 A and ≤ 75 A per phase
IEC EN 61000-4-2, Electromagnetic compatibility (EMC) - Part 4-2: Testing and measurement techniques - Electrostatic discharge immunity test
IEC EN 61000-4-3, Electromagnetic compatibility (EMC) - Part 4-3: Testing and measurement techniques - Radiated, radio-frequency, electromagnetic field immunity test
IEC EN 61000-4-4, Electromagnetic compatibility (EMC) - Part 4-4: Testing and measurement techniques - Electrical fast transient/burst immunity test
IEC EN 61000-4-5, Electromagnetic compatibility (EMC) - Part 4-5: Testing and measurement techniques - Surge immunity test
IEC EN 61000-4-6, Electromagnetic compatibility (EMC) - Part 4-6: Testing and measurement techniques - Immunity to conducted disturbances, induced by radio-frequency fields
IEC EN 61000-4-7, Electromagnetic compatibility (EMC) - Part 4-7: Testing and measurement techniques - General guide on harmonics and interharmonics measurements and instrumentation, for power supply systems and equipment connected thereto
IEC EN 61000-4-8, Electromagnetic compatibility (EMC) - Part 4-8: Testing and measurement techniques - Power frequency magnetic field immunity test
IEC EN 61000-4-9, Electromagnetic compatibility (EMC) - Part 4-9: Testing and measurement techniques - Pulse magnetic field immunity test
IEC EN 61000-4-11, Electromagnetic compatibility (EMC) - Part 4-11: Testing and measurement techniques - Voltage dips, short interruptions and voltage variations immunity tests
IEC EN 61000-4-13, Electromagnetic compatibility (EMC) - Part 4-13: Testing and measurement techniques - Harmonics and interharmonics including mains signalling at a.c. power port, low frequency immunity tests
IEC EN 61000-4-30, Electromagnetic compatibility (EMC) - Part 4-30: Testing and measurement techniques - Power Quality measurement methods
IEC EN 61000-4-34, Electromagnetic compatibility (EMC) - Part 4-34: Testing and measurement techniques - Voltage dips, short interruptions and voltage variations immunity tests for equipment with mains current more than 16 A per phase
IEC EN 61000-6-1, Electromagnetic compatibility (EMC) - Part 6-1: Generic standards - Immunity for residential, commercial and light-industrial environments
IEC EN 61000-6-2, Electromagnetic compatibility (EMC) - Part 6-2: Generic standards - Immunity for industrial environments
IEC EN 61000-6-3, Electromagnetic compatibility (EMC) - Part 6-3: Generic standards - Emission standard for residential, commercial and light-industrial environments
IEC EN 61000-6-4, Electromagnetic compatibility (EMC) - Part 6-4: Generic standards - Emission standard for industrial environments
IEC EN 61000-6-5, Electromagnetic compatibility (EMC) - Part 6-5: Generic standards - Immunity for equipment used in power station and substation environment
IEC EN 61000-6-6, Electromagnetic compatibility (EMC) - Part 6-6: Generic standards - HEMP immunity for indoor equipment
IEC EN 61000-6-7, Electromagnetic compatibility (EMC) - Part 6-7: Generic standards - Immunity requirements for equipment intended to perform functions in a safety-related system (functional safety) in industrial locations
IEC EN 61000-6-8, Electromagnetic compatibility (EMC) - Part 6-8: Generic standards - Emission standard for professional equipment in commercial and light-industrial locations
CISPR standards
CISPR is the acronym of Comité International Spécial des Perturbations Radioélectriques, the International Special Committee on Radio Interference of the IEC. CISPR standards aim to protect radio reception in the range 9 kHz to 400 GHz from interference caused by the operation of electrical or electronic appliances and systems in the electromagnetic environment. CISPR standards cover product emission and immunity requirements as well as defining test methods and equipment.
CISPR standards are divided into the following categories:
Basic Standards
They give the general and fundamental conditions or rules for the assessment of EMC and related performance of all products, systems or installations, and serve as reference documents for CISPR Generic and Product (Family) Standards. Basic Standards are general and hence are not dedicated to specific product families or products; they relate to general information, to the disturbing phenomena and to the measurement or testing techniques. They do not contain any prescribed limits or any product/system related performance specifications. However, methods and guidance on how to generate appropriate limits for the protection of radio reception are given.
The following are CISPR Basic EMC Standards:
CISPR 16-1-1, Specification for radio disturbance and immunity measuring apparatus and methods - Part 1-1: Radio disturbance and immunity measuring apparatus - Measuring apparatus.
CISPR 16-1-2, Specification for radio disturbance and immunity measuring apparatus and methods - Part 1-2: Radio disturbance and immunity measuring apparatus - Coupling devices for conducted disturbance measurements.
CISPR 16-1-3, Specification for radio disturbance and immunity measuring apparatus and methods - Part 1-3: Ancillary equipment – Disturbance power.
CISPR 16-1-4, Specification for radio disturbance and immunity measuring apparatus and methods - Part 1-4: Antennas and test sites for radiated disturbance measurements.
CISPR 16-1-5, Specification for radio disturbance and immunity measuring apparatus and methods - Part 1-5: Antenna calibration sites & reference test sites for 5 MHz to 18 GHz.
CISPR 16-1-6, Specification for radio disturbance and immunity measuring apparatus and methods - Part 1-6: EMC antenna calibration.
CISPR 16-2-1, Specification for radio disturbance and immunity measuring apparatus and methods - Part 2-1: Conducted disturbance measurements.
CISPR 16-2-2, Specification for radio disturbance and immunity measuring apparatus and methods - Part 2-2: Measurement of disturbance power.
CISPR 16-2-3, Specification for radio disturbance and immunity measuring apparatus and methods - Part 2-3: Radiated disturbance measurements.
CISPR 16-2-4, Specification for radio disturbance and immunity measuring apparatus and methods - Part 2-4: Immunity measurements.
CISPR 16-4-2, Specification for radio disturbance and immunity measurement apparatus and methods - Part 4-2: Measurement instrumentation uncertainty.
CISPR 17, Methods of measurement of the suppression characteristics of passive EMC filtering devices.
IEC 61000-4-20, Testing and measurement techniques - Emission and immunity testing in transverse electromagnetic (TEM) waveguides.
IEC 61000-4-21, Testing and measurement techniques - Reverberation chamber test methods.
IEC 61000-4-22, Testing and measurement techniques - Radiated emissions and immunity measurements in fully anechoic rooms (FARs).
Generic Standards
Generic EMC Standards are standards related to a particular environment, which specify the set of essential EMC requirements and test procedures, applicable to all the products or systems intended for operation in this environment, provided that no specific EMC Standards for a particular product family, product, system or installation exist. Limits are included, and reference is made to the test procedures given in the relevant Basic Standards.
The following are CISPR Generic EMC Standards:
IEC 61000-6-1, Electromagnetic compatibility (EMC) - Part 6-1: Generic standards - Immunity for residential, commercial and light-industrial environments
IEC 61000-6-2, Electromagnetic compatibility (EMC) - Part 6-2: Generic standards - Immunity for industrial environments
IEC 61000-6-3, Electromagnetic compatibility (EMC) - Part 6-3: Generic standards - Emission standard for equipment in residential environments.
IEC 61000-6-4, Electromagnetic compatibility (EMC) - Part 6-4: Generic standards - Emission standard for industrial environments.
IEC 61000-6-5, Electromagnetic compatibility (EMC) - Part 6-5: Generic standards - Immunity for equipment used in power station and substation environment
IEC 61000-6-6, Electromagnetic compatibility (EMC) - Part 6-6: Generic standards - HEMP immunity for indoor equipment.
IEC 61000-6-7, Electromagnetic compatibility (EMC) - Part 6-7: Generic standards - Immunity requirements for equipment intended to perform functions in a safety-related system (functional safety) in industrial locations.
IEC 61000-6-8, Electromagnetic compatibility (EMC) - Part 6-8: Generic standards - Emission standard for professional equipment in commercial and light-industrial locations.
Product (Family) Standards
Product (Family) Standards define specific EMC requirements, test procedures and limits dedicated to particular products, systems or installations for which specific conditions must be considered.
The following are CISPR Product (Family) Standards:
CISPR 11, Industrial, scientific and medical (ISM) radio-frequency equipment - Radio-frequency disturbance characteristics - Limits and methods of measurement.
CISPR 12, Vehicles, boats and internal combustion engine driven devices - Radio disturbance characteristics - Limits and methods of measurement for the protection of off-board receivers.
CISPR 13, Sound and television broadcast receivers and associated equipment - Radio disturbance characteristics - Limits and methods of measurement
(Note: CISPR 13 has been replaced by CISPR 32)
CISPR 14-1, Electromagnetic compatibility - Requirements for household appliances, electric tools and similar apparatus - Part 1: Emission.
CISPR 14-2, Electromagnetic compatibility - Requirements for household appliances, electric tools and similar apparatus - Part 2: Immunity - Product family standard.
CISPR 15, Limits and methods of measurement of radio disturbance characteristics of electrical lighting and similar equipment.
CISPR 20, Sound and television broadcast receivers and associated equipment - Immunity characteristics - Limits and methods of measurement
(Note: CISPR 20 has been replaced by CISPR 35)
CISPR 22, Information technology equipment - Radio disturbance characteristics - Limits and methods of measurement
(Note: CISPR 22 has been replaced by CISPR 32)
CISPR 24, Information technology equipment - Immunity characteristics - Limits and methods of measurement
(Note: CISPR 24 has been replaced by CISPR 35)
CISPR 25, Vehicles, boats and internal combustion engine driven devices - Radio disturbance characteristics - Limits and methods of measurement for the protection of on-board receivers.
CISPR 32, Electromagnetic Compatibility of multimedia equipment – Emission requirements.
CISPR 35, Electromagnetic Compatibility of multimedia equipment – Immunity requirements.
CISPR 36, Electric and hybrid electric road vehicles - Radio disturbance characteristics - Limits and methods of measurement for the protection of off-board receivers below 30 MHz.
The CISPR Guide (March 2021) contains a non-exhaustive selection list of products and the appropriate CISPR standards to be applied to them.
ISO standards
The following are ISO standards on automotive EMC issues.
ISO 7637, Road vehicles - Electrical disturbances from conduction and coupling
ISO 11452-1, Road vehicles - Vehicle test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 1: General and definitions
ISO 11452-2, Road vehicles - Vehicle test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 2: Off-vehicle radiation source
ISO 11452-3, Road vehicles - Vehicle test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 3: On-board transmitter simulation
ISO 11452-4, Road vehicles - Vehicle test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 4: Bulk current injection (BCI)
ISO 11452-5, Road Vehicles - Component test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 5: Stripline
ISO 11452-6, Road Vehicles - Component test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 6: Parallel plate antenna
ISO 11452-7, Road Vehicles - Component test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 7: Direct radio frequency (RF) power injection
ISO 11452-8, Road Vehicles - Component test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 8: Immunity to magnetic fields
ISO 11452-9, Road Vehicles - Component test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 9: Portable transmitters
ISO 11452-10, Road Vehicles - Component test methods for electrical disturbances from narrowband radiated electromagnetic energy - Part 10: Immunity to conducted disturbances in the extended audio frequency range
ISO 11452, Road vehicles - Electrical disturbances by narrowband radiated electromagnetic energy - Component test methods
ISO 13766, Earthmoving Machinery - Electromagnetic Compatibility
ISO 14982, Agricultural and forestry machinery—Electromagnetic compatibility—Test methods and acceptance criteria
SAE Electromagnetic Compatibility (EMC) Standards committee
J1113/1, Electromagnetic Compatibility Measurement Procedures and Limits for Components of Vehicles, Boats (up to 15 m), and Machines (Except Aircraft) (16.6 Hz to 18 GHz)
J1113/11, Immunity to Conducted Transients on Power Leads
J1113/12, Electrical Interference by Conduction and Coupling—Capacitive and Inductive Coupling via Lines Other than Supply Lines
J1113/13, Electromagnetic Compatibility Measurement Procedure for Vehicle Components—Part 13: Immunity to Electrostatic Discharge
J1113/21, Electromagnetic Compatibility Measurement Procedure for Vehicle Components—Part 21: Immunity to Electromagnetic Fields, 30 MHz to 18 GHz, Absorber-Lined Chamber
J1113/26, Electromagnetic Compatibility Measurement Procedure for Vehicle Components—Immunity to AC Power Line Electric Fields
J1113/27, Electromagnetic Compatibility Measurements Procedure for Vehicle Components—Part 27: Immunity to Radiated Electromagnetic Fields—Mode Stir Reverberation Method
J1113/4, Immunity to Radiated Electromagnetic Fields—Bulk Current Injection (BCI) Method
J1752/1, Electromagnetic Compatibility Measurement Procedures for Integrated Circuits - Integrated Circuit EMC Measurement Procedures—General and Definitions
J1752/2, Measurement of Radiated Emissions from Integrated Circuits—Surface Scan Method (Loop Probe Method) 10 MHz to 3 GHz
J1752/3, Measurement of Radiated Emissions from Integrated Circuits—TEM/Wideband TEM (GTEM) Cell Method; TEM Cell (150 kHz to 1 GHz), Wideband TEM Cell (150 kHz to 8 GHz)
J1812, Function Performance Status Classification for EMC Immunity Testing
J2556, Radiated Emissions (RE) Narrowband Data Analysis—Power Spectral Density (PSD)
J2628, Characterization—Conducted Immunity
J551/1, Performance Levels and Methods of Measurement of Electromagnetic Compatibility of Vehicles, Boats (up to 15 m), & Machines (16.6 Hz to 18 GHz)
J551/15, Vehicle Electromagnetic Immunity—Electrostatic Discharge (ESD)
J551/16, Electromagnetic Immunity—Off-Vehicle Source (Reverberation Chamber Method)—Part 16: Immunity to Radiated Electromagnetic Fields
J551/17, Vehicle Electromagnetic Immunity—Power Line Magnetic Fields
J551/5, Performance Levels and Methods of Measurement of Magnetic and Electric Field Strength from Electric Vehicles, Broadband, 9 kHz To 30 MHz
European standards concerning unwanted electrical emissions
EN 50 081 part1 European Generic emission standard, part1: Domestic, commercial and light industry environment, replaced by EN61000-6-3
EN 50 081 part2 European Generic emission standard, part2: industrial environment, replaced by EN61000-6-4
EN 55 011 European limits and methods of measurement of radio disturbance characteristics for scientific and medical equipment
EN 55 013 European limits and methods of measurement of radio disturbance characteristics of broadcast receivers
EN 55 014 European limits and methods of measurement of radio disturbance characteristics of household appliances and power tools, replaced by EN55014-1, and immunity part is covered by EN55014-2
EN 55 015 European limits and methods of measurement of radio disturbance characteristics of fluorescent lamps
EN 55 022 European limits and methods of measurement of radio disturbance characteristics of information technology equipment
EN 55 032 Electromagnetic compatibility of multimedia equipment - Emission requirements
EN 60 555 part 2 and 3 Disturbances of power supply network (part 2) and power fluctuations (part 3) caused by of household appliances and power tools, replaced by EN61000-3-2 and EN61000-3-3
EN 13309 Construction Machinery - Electromagnetic compatibility of machines with internal electrical power supplies
VDE 0875 German EMC directive for broadband interference generated by household appliances
VDE 0871 German EMC directive for broadband and narrowband interference generated by information technology equipment
European standards concerning immunity to electrical emissions
EN 50 082 part1 European immunity standard, part1: Domestic, commercial and light industry environment, replaced by EN61000-6-1
EN 50 082 part2 European immunity standard, part2: industrial environment, replaced by EN61000-6-2
EN 50 093 European, immunity to short dips in the power supply (brownouts)
EN 55 020 European, immunity from radio interference of broadcast receivers
EN 55 024 European immunity requirements for information technology equipment
EN 55 101 older draft of immunity requirements for information technology equipment, replaced by EN 55 024
American standards
FCC Part 15 regulates unlicensed radio-frequency transmissions, both intentional and unintentional.
FCC Part 15 Subpart A contains a general provision that "devices may not cause interference and must accept interference from other sources."
FCC Part 15 Subpart B US limits and methods of measurement of radio disturbance, measuring radio waves accidentally emitted from devices not specifically designed to emit radio waves ("unintentional"), both directly ("radiated") and indirectly ("conducted")
The rest of FCC Part 15 (subparts C through H) deal with unlicensed devices specifically designed to emit radio waves ("intentional"), such as wireless LAN, cordless telephones, low-power broadcasting, walkie-talkies, etc.
Conducted emissions are regulated from 150 kHz to 30 MHz, and radiated emissions are regulated from 30 MHz and up.
MIL-STD 461 is a US Military Standard addressing EMC for subsystem and components. Currently in revision G, it covers Conducted and Radiated Emissions and Susceptibility.
MIL-STD 464 is a US Military Standard addressing EMC for systems. Currently in revision D, it covers E3 interface requirements and verification criteria of military platforms.
MIL-STD-469 is a US Military Standard that establishes the engineering interface requirements to control the electromagnetic emission and susceptibility characteristics of all new military radar equipment and systems operating between 100 megahertz (MHz) and 100 gigahertz (GHz).
References
External links
GR-1089-CORE. Electromagnetic Compatibility and Electrical Safety - Generic Criteria for Network Telecommunications Equipment. Part of the NEBS standards.
Note
Aerospace equipment is commonly tested to RTCA DO-160, Environmental Conditions and Test Procedures for Airborne Equipment, which includes EMC-related sections covering conducted and radiated emission and susceptibility.
EMC
Electromagnetic compatibility
EMC test standards
EMC directives | List of common EMC test standards | [
"Engineering"
] | 4,454 | [
"Electrical engineering",
"Electrical-engineering-related lists",
"Electromagnetic compatibility",
"Radio electronics"
] |
43,162,897 | https://en.wikipedia.org/wiki/Advanced%20Optical%20Materials | Advanced Optical Materials is a monthly peer-reviewed scientific journal published by Wiley-VCH. It was established in 2013, after a section with the same name had been published since March 2012 in Advanced Materials. It covers all aspects of light-matter interactions. The founding editor-in-chief is Peter Gregory.
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 10.050, ranking it 52nd out of 345 journals in the category "Materials Science, Multidisciplinary" and 9th out of 101 journals in the category "Optics".
References
External links
Materials science journals
Optics journals
Academic journals established in 2013
English-language journals
Monthly journals
Wiley-VCH academic journals | Advanced Optical Materials | [
"Materials_science",
"Engineering"
] | 155 | [
"Materials science journals",
"Materials science"
] |
43,163,561 | https://en.wikipedia.org/wiki/Tanaproget | Tanaproget (INN; developmental code names NSP-989, WAY-166989) is an investigational nonsteroidal progestin. It is a high affinity, high efficacy, and very selective agonist of the progesterone receptor (PR). Due to its much more selective binding profile relative to most conventional, steroidal progestins, tanaproget may prove to produce fewer side effects in comparison. As of December 2010, it is in phase II clinical trials in the process of being developed for clinical use as a contraceptive by Ligand Pharmaceuticals.
An analog of tanaproget, 4-fluoropropyltanaproget (18F), has been developed as a radiotracer for imaging of the PR in positron emission tomography.
See also
Finerenone
Mapracorat
Prinaberel
References
Progestogens
Nitriles
Pyrroles
Thiocarbamates | Tanaproget | [
"Chemistry"
] | 200 | [
"Nitriles",
"Functional groups"
] |
23,286,244 | https://en.wikipedia.org/wiki/Yttrium-90 | Yttrium-90 () is a radioactive isotope of yttrium. Yttrium-90 has found a wide range of uses in radiation therapy to treat some forms of cancer. Along with other isotopes of yttrium, it is sometimes called radioyttrium.
Decay
undergoes beta (β−) decay to zirconium-90 with a half-life of 64.1 hours and a decay energy of 2.28 MeV, with an average beta energy of 0.9336 MeV. In about 0.01% of decays it populates the 0+ excited state of 90Zr at about 1.7 MeV, which de-excites by internal pair production rather than single-photon emission (a 0+ → 0+ transition). The interaction between emitted electrons and matter can also lead to the emission of Bremsstrahlung radiation.
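The dominant decay channel, and a worked application of the decay law using the 64.1 h half-life quoted above, are:

{}^{90}\mathrm{Y} \rightarrow {}^{90}\mathrm{Zr} + e^{-} + \bar{\nu}_{e}

\frac{N(t)}{N_0} = 2^{-t/t_{1/2}}, \qquad \frac{N(7\ \mathrm{d})}{N_0} = 2^{-168/64.1} \approx 0.16

so roughly 84% of an administered quantity has decayed after one week.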
Production
Yttrium-90 is produced by the nuclear decay of strontium-90, which has a half-life of nearly 29 years and is a fission product of the uranium used in nuclear reactors. As the strontium-90 decays, high-purity chemical separation is used to isolate the yttrium-90 before precipitation. Yttrium-90 is also produced directly by neutron activation of natural yttrium targets (natural yttrium is mononuclidic, consisting entirely of 89Y) in a nuclear research reactor.
Medical application
90Y plays a significant role in the treatment of hepatocellular carcinoma (HCC), leukemia, and lymphoma, although it has the potential to treat a range of tumors. Trans-arterial radioembolization is a procedure performed by interventional radiologists, in which 90Y microspheres are injected into the arteries supplying the tumor. The microspheres come in two forms: resin, in which 90Y is bound to the surface, and glass, in which 90Y is directly incorporated into the microsphere during production. Once injected, the microspheres become lodged in blood vessels surrounding the tumor, and the resulting radiation damages the nearby tissue. The distribution of the microspheres depends on several factors, including catheter tip positioning, distance to branching vessels, rate of injection, particle properties such as size and density, and variability in tumor perfusion. Radioembolization with 90Y significantly prolongs time-to-progression (TTP) of HCC, has a tolerable adverse event profile, and improves patient quality of life more than similar therapies do. 90Y has also found uses in tumor diagnosis by imaging the Bremsstrahlung radiation released by the microspheres. Positron emission tomography after radioembolization is also possible.
Post-treatment imaging
Following treatment with 90Y, imaging is performed to evaluate 90Y delivery and absorption, assessing coverage of target regions and involvement of normal tissue. This is typically performed using Bremsstrahlung imaging with single-photon emission computed tomography/CT (SPECT/CT), or using 90Y positron imaging with positron emission tomography/CT (PET/CT).
Bremsstrahlung imaging after 90Y therapy
As 90Y undergoes beta decay, broad spectrum bremsstrahlung radiation is emitted and is detectable with standard gamma cameras or SPECT. These modalities provide information about radioactive uptake of 90Y, however, there is poor spatial information. Consequently, it is challenging to delineate anatomy and thereby evaluate tumor and normal tissue uptake. This led to the development of SPECT/CT, which combines the functional information of SPECT with the spatial information of CT to allow for more accurate 90Y localization.
Positron imaging after 90Y therapy
PET/CT and PET/MRI have superior spatial resolution compared to SPECT/CT because PET detects the coincident photon pairs produced when emitted positrons annihilate, negating the requirement for a physical collimator. This allows for better assessment of microsphere distribution and dose absorption. However, both PET/CT and PET/MRI are less widely available and more costly.
See also
Radionuclide therapy
Selective internal radiation therapy
References
External links
Isotopes of yttrium
Medical isotopes | Yttrium-90 | [
"Chemistry"
] | 850 | [
"Chemicals in medicine",
"Isotopes of yttrium",
"Isotopes",
"Medical isotopes"
] |
23,287,982 | https://en.wikipedia.org/wiki/Samarium-147 | Samarium-147 (147Sm or Sm-147) is an isotope of samarium, making up 15% of natural samarium. It is an extremely long-lived radioisotope, with a half-life of years, although measurements have ranged from to years. It is mainly used in radiometric dating.
Uses
Samarium-147 is used in samarium–neodymium dating. The method of isochron dating is used to find the date at which a rock (or group of rocks) was formed. The Sm-Nd isochron plots the ratio of radiogenic 143Nd to non-radiogenic 144Nd against the ratio of the parent isotope 147Sm to the non-radiogenic isotope 144Nd. 144Nd is used to normalize the radiogenic isotope in the isochron because it is a slightly radioactive and relatively abundant neodymium isotope.
The Sm-Nd isochron is defined by the following equation:

\left( \frac{{}^{143}\mathrm{Nd}}{{}^{144}\mathrm{Nd}} \right) = \left( \frac{{}^{143}\mathrm{Nd}}{{}^{144}\mathrm{Nd}} \right)_{0} + \left( \frac{{}^{147}\mathrm{Sm}}{{}^{144}\mathrm{Nd}} \right) \left( e^{\lambda t} - 1 \right)
where:
t is the age of the sample,
λ is the decay constant of 147Sm,
(e^{\lambda t} − 1) is the slope of the isochron, which defines the age of the system.
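As a worked example, solving the isochron equation for t and using the commonly quoted decay constant \lambda \approx 6.54 \times 10^{-12}\ \mathrm{yr}^{-1} for 147Sm, a hypothetical measured slope m = 0.0066 gives:

t = \frac{\ln(1 + m)}{\lambda} = \frac{\ln(1.0066)}{6.54 \times 10^{-12}\ \mathrm{yr}^{-1}} \approx 1.0 \times 10^{9}\ \mathrm{yr}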
Alternatively, one can assume that the material formed from mantle material which was following the same path of evolution of these ratios as chondrites, and then again the time of formation can be calculated (see Samarium–neodymium dating#The CHUR model).
See also
Isotopes of samarium
References
Samarium-147
Radionuclides used in radiometric dating | Samarium-147 | [
"Chemistry"
] | 312 | [
"Radionuclides used in radiometric dating",
"Isotopes of samarium",
"Isotopes"
] |
23,288,439 | https://en.wikipedia.org/wiki/Gold-198 | Gold-198 (198Au) is a radioactive isotope of gold. It undergoes beta decay to stable 198Hg with a half-life of 2.69464 days.
The decay properties of 198Au have led to widespread interest in its potential use in radiotherapy for cancer treatments. This isotope has also found use in nuclear weapons research and as a radioactive tracer in hydrological research.
Discovery
198Au was possibly observed for the first time in 1935 by Enrico Fermi et al., though it was not correctly identified at the time. This isotope was conclusively identified in 1937 following neutron irradiation of stable 197Au and was ascribed a half-life of approximately 2.7 days.
Applications
Nuclear medicine
198Au is used for radiotherapy in some cancer treatments.
Its half-life and beta decay energy are favorable for use in medicine because its 4 mm penetration range in tissue allows it to destroy tumors without nearby non-cancerous tissue being affected by radiation. For this reason, 198Au nanoparticles are being investigated as an injectable treatment for prostate cancer.
Radioactive tracing
Sediment and water flow can be investigated using radioactive tracers such as 198Au. This has been used extensively since artificial radioisotopes became available in the 1950s, as a supplement to millennia of investigations using other tracing techniques.
Inside coker units at oil refineries, 198Au is used to study the hydrodynamic behavior of solids in fluidized beds and can also be used to quantify the degree of fouling of bed internals.
Nuclear weapons
Gold has been proposed as a material for creating a salted nuclear weapon (cobalt is another, better-known salting material). A jacket of natural gold, 197Au (the only stable gold isotope), irradiated by the intense high-energy neutron flux from an exploding thermonuclear weapon, would transmute into the radioactive isotope 198Au with a half-life of 2.697 days and produce approximately 0.411 MeV of gamma radiation, significantly increasing the radioactivity of the weapon's fallout for several days. Such a weapon is not known to have ever been built, tested, or used.
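The underlying transmutation and decay chain (standard nuclear data, matching the half-life and gamma energy given above) is:

{}^{197}\mathrm{Au} + \mathrm{n} \rightarrow {}^{198}\mathrm{Au} \quad \text{(neutron capture)}

{}^{198}\mathrm{Au} \rightarrow {}^{198}\mathrm{Hg}^{*} + e^{-} + \bar{\nu}_{e} \quad (t_{1/2} = 2.697\ \mathrm{d}), \qquad {}^{198}\mathrm{Hg}^{*} \rightarrow {}^{198}\mathrm{Hg} + \gamma\;(0.412\ \mathrm{MeV})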
The highest amount of 198Au detected in any United States nuclear test was in shot "Sedan" detonated at Nevada Test Site on July 6, 1962.
See also
Isotopes of gold
References
Gold-198
Medical isotopes | Gold-198 | [
"Chemistry"
] | 483 | [
"Isotopes of gold",
"Medical isotopes",
"Isotopes",
"Chemicals in medicine"
] |
23,289,685 | https://en.wikipedia.org/wiki/Moo%20box | The moo box or moo can is a toy or a souvenir, also used as a hearing test. When turned upside down, it produces a noise that resembles the mooing of a cow. The toy can be configured to create other animal sounds such as the meow of a cat, the chirp of a bird, or the bleat of a sheep.
Construction
The moo box consists of a block and a bellows. The bellows is sealed to the bottom of the box and to the block. The block is heavy and perforated, and used to actuate the bellows, producing the sound.
When the box is inverted, the block falls away from the bottom, filling the bellows with air. When the box is turned right side up, the air is expelled through a vibrating blade (which makes it a free reed instrument) producing the sound. After passing the blade, the air passes through a duct of variable length, which determines the pitch of the sound.
Moatti test
The toy can be used to perform the Moatti test, conceived by Dr. Lucien Moatti, which screens infants' hearing at different frequencies. It uses four boxes at different frequencies, all calibrated to generate a sound pressure level (loudness) of sixty decibels at two metres. The test can be used to screen the hearing of children aged from six to 24 months.
The tester inverts the boxes out of sight of the child. If the child hears the sound, they will turn their head towards it.
Notable appearances in pop culture
A moo box was used in the Beastie Boys' track "B-boys Makin' With The Freak Freak", from their 1994 album Ill Communication.
Michael Scott confuses a radon detector for a moo box in the cold open of The Office episode, "The Chump".
In the French film Les Couloirs du temps : Les Visiteurs II (1998), a man from the modern day is accidentally transported to the Middle Ages, bringing souvenirs from the future with him. While he is being interrogated by villagers, one of them grabs a moo box and believes the toy to be a work of witchcraft, whereupon they tie him to a stake to be burned.
The Kube brothers make moo boxes in the post-apocalyptic French film Delicatessen (1991), directed by Jean-Pierre Jeunet and Marc Caro.
In the THX Tex 2: Moo Can trailer (better known as just Tex 2), Tex, the robot mascot of the American audio company THX, uses a moo can to perform a rendition of the Deep Note mooed by cows. This trailer premiered with the original theatrical release of Alien Resurrection in November 1997 and was seen on Pixar and 20th Century Fox DVDs (1997–2005).
In Invader Zim, the secretary for the school nurse is seen using the toy in the controversial episode "Dark Harvest".
In the 2005 film Constantine the main character, John Constantine, trades a moo box designed to imitate the bleat of a sheep with his friend and ally Beeman in exchange for holy objects and weapons. Later, after Beeman's death, many moo boxes can be seen in Beeman's office in the bowling alley, indicating he was a collector.
A moo can appears in the Despicable Me trailer, in which two Minions play with the toy.
See also
Groan Tube
References
Mechanical toys
Toy instruments and noisemakers | Moo box | [
"Physics",
"Technology"
] | 712 | [
"Physical systems",
"Machines",
"Mechanical toys"
] |
23,290,197 | https://en.wikipedia.org/wiki/CSS | Cascading Style Sheets (CSS) is a style sheet language used for specifying the presentation and styling of a document written in a markup language such as HTML or XML (including XML dialects such as SVG, MathML or XHTML). CSS is a cornerstone technology of the World Wide Web, alongside HTML and JavaScript.
CSS is designed to enable the separation of content and presentation, including layout, colors, and fonts. This separation can improve content accessibility, since the content can be written without concern for its presentation; provide more flexibility and control in the specification of presentation characteristics; enable multiple web pages to share formatting by specifying the relevant CSS in a separate .css file, which reduces complexity and repetition in the structural content; and enable the .css file to be cached to improve the page load speed between the pages that share the file and its formatting.
Separation of formatting and content also makes it feasible to present the same markup page in different styles for different rendering methods, such as on-screen, in print, by voice (via speech-based browser or screen reader), and on Braille-based tactile devices. CSS also has rules for alternate formatting if the content is accessed on a mobile device.
The name cascading comes from the specified priority scheme used to determine which declaration applies if more than one rule setting a property matches a particular element. This cascading priority scheme is predictable.
The CSS specifications are maintained by the World Wide Web Consortium (W3C). Internet media type (MIME type) text/css is registered for use with CSS by RFC 2318 (March 1998). The W3C operates a free CSS validation service for CSS documents.
In addition to HTML, other markup languages support the use of CSS including XHTML, plain XML, SVG, and XUL. CSS is also used in the GTK widget toolkit.
Syntax
CSS has a simple syntax and uses a number of English keywords to specify the names of various style properties.
Style sheet
A style sheet consists of a list of rules. Each rule or rule-set consists of one or more selectors, and a declaration block.
Selector
In CSS, selectors declare which part of the markup a style applies to by matching tags and attributes in the markup itself.
Selector types
Selectors may apply to the following:
all elements of a specific type, e.g. the second-level headers h2
elements specified by attribute, in particular:
id: an identifier unique within the document, denoted in the selector language by a hash prefix, e.g. #id
class: an identifier that can annotate multiple elements in a document, denoted by a dot prefix, e.g. .classname (the phrase "CSS class", although sometimes used, is a misnomer, as element classes—specified with the HTML class attribute—are a markup feature distinct from browsers' CSS subsystem and the related W3C/WHATWG standards work on document styles; see RDF and microformats for the origins of the "class" system of the Web content model)
elements depending on how they are placed relative to others in the document tree.
Classes and IDs are case-sensitive, start with letters, and can include alphanumeric characters, hyphens, and underscores. A class may apply to any number of instances of any element. An ID may only be applied to a single element.
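To make the selector types above concrete, here is a short illustrative style sheet (the class and ID names are hypothetical, chosen only for this example):

/* type selector: all second-level headings */
h2 { color: navy; }

/* class selector: any element with class="note" */
.note { font-style: italic; }

/* ID selector: the single element with id="site-header" */
#site-header { background-color: silver; }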
Pseudo-classes
Pseudo-classes are used in CSS selectors to permit formatting based on information that is not contained in the document tree.
One example of a widely used pseudo-class is :hover, which identifies content only when the user "points to" the visible element, usually by holding the mouse cursor over it. It is appended to a selector as in a:hover or #elementid:hover.
A pseudo-class classifies document elements, such as :link or :visited, whereas a pseudo-element makes a selection that may consist of partial elements, such as ::first-line or ::first-letter. Note the distinction between the double-colon notation used for pseudo-elements and the single-colon notation used for pseudo-classes.
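For example (hypothetical rules, shown only to contrast the two notations):

/* pseudo-class: applies while the user points at a link */
a:hover { text-decoration: underline; }

/* pseudo-element: styles only the first line of each paragraph */
p::first-line { font-variant: small-caps; }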
Combinators
Multiple simple selectors may be joined using combinators to specify elements by location, element type, id, class, or any combination thereof. The order of the selectors is important. For example, div .myClass {color: red;} applies to all elements of class myClass that are inside div elements, whereas .myClass div {color: red;} applies to all div elements that are inside elements of class myClass. This is not to be confused with concatenated identifiers such as div.myClass {color: red;} which applies to div elements of class myClass.
Summary of selector syntax
Common selector forms, with the CSS version that introduced each, include: type selectors such as h1 (CSS 1); descendant combinations such as h1 p (CSS 1); class selectors such as .classname (CSS 1); ID selectors such as #id (CSS 1); attribute selectors such as a[href] (CSS 2); child combinators such as ul > li (CSS 2); adjacent sibling combinators such as h1 + p (CSS 2); general sibling combinators such as h1 ~ p (CSS 3); and the negation pseudo-class :not(s) (CSS 3).
Declaration block
A declaration block consists of a pair of braces ({}) enclosing a semicolon-separated list of declarations.
Declaration
Each declaration itself consists of a property, a colon (:), and a value. Optional white-space may be around the declaration block, declarations, colons, and semi-colons for readability.
Properties
Properties are specified in the CSS standard. Each property has a set of possible values. Some properties can affect any type of element, and others apply only to particular groups of elements.
Values
Values may be keywords, such as "center" or "inherit", or numerical values, such as 200px (200 pixels), 50vw (50 percent of the viewport width) or 80% (80 percent of the parent element's width).
Color values can be specified with keywords (e.g. "red"), hexadecimal values (e.g. #FF0000, also abbreviated as #F00), RGB values on a 0 to 255 scale (e.g. rgb(255, 0, 0)), RGBA values that specify both color and alpha transparency (e.g. rgba(255, 0, 0, 0.8)), or HSL or HSLA values (e.g. hsl(0, 100%, 50%), hsla(0, 100%, 50%, 0.8)).
Non-zero numeric values representing linear measures must include a length unit, which is either an alphabetic code or abbreviation, as in 200px or 50vw; or a percentage sign, as in 80%. Some units – cm (centimetre); in (inch); mm (millimetre); pc (pica); and pt (point) – are absolute, which means that the rendered dimension does not depend upon the structure of the page; others – em (em); ex (ex) and px (pixel) – are relative, which means that factors such as the font size of a parent element can affect the rendered measurement. These eight units were a feature of CSS 1 and retained in all subsequent revisions. The proposed CSS Values and Units Module Level 3 will, if adopted as a W3C Recommendation, provide seven further length units: ch; Q; rem; vh; vmax; vmin; and vw.
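A brief illustration of absolute versus relative units (the values are arbitrary):

h1 { margin-bottom: 12pt; }  /* absolute: points; independent of page structure */
p { font-size: 1.2em; }      /* relative: 1.2 times the parent element's font size */
aside { width: 50vw; }       /* relative: 50% of the viewport width (a proposed Level 3 unit) */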
Use
Before CSS, nearly all presentational attributes of HTML documents were contained within the HTML markup. All font colors, background styles, element alignments, borders, and sizes had to be explicitly described, often repeatedly, within the HTML. CSS lets authors move much of that information to another file, the style sheet, resulting in considerably simpler HTML. Additionally, as more and more devices with different screen sizes and layouts access the web, customizing a website for each device size becomes costly and increasingly difficult; the modular nature of CSS means that styles can be reused in different parts of a site or even across sites, promoting consistency and efficiency.
For example, headings (h1 elements), sub-headings (h2), sub-sub-headings (h3), etc., are defined structurally using HTML. In print and on the screen, choice of font, size, color and emphasis for these elements is presentational.
Before CSS, document authors who wanted to assign such typographic characteristics to, say, all h2 headings had to repeat HTML presentational markup for each occurrence of that heading type. This made documents more complex, larger, and more error-prone and difficult to maintain. CSS allows the separation of presentation from structure. CSS can define color, font, text alignment, size, borders, spacing, layout and many other typographic characteristics, and can do so independently for on-screen and printed views. CSS also defines non-visual styles, such as reading speed and emphasis for aural text readers. The W3C has now deprecated the use of all presentational HTML markup.
For example, under pre-CSS HTML, a heading element defined with red text would be written as:
<h1><font color="red">Chapter 1.</font></h1>
Using CSS, the same element can be coded using style properties instead of HTML presentational attributes:
<h1 style="color: red;">Chapter 1.</h1>
The advantages of this may not be immediately clear but the power of CSS becomes more apparent when the style properties are placed in an internal style element or, even better, an external CSS file. For example, suppose the document contains the style element:
<style>
h1 {
color: red;
}
</style>
All h1 elements in the document will then automatically become red without requiring any explicit code. If the author later wanted to make h1 elements blue instead, this could be done by changing the style element to:
<style>
h1 {
color: blue;
}
</style>
rather than by laboriously going through the document and changing the color for each individual h1 element.
The styles can also be placed in an external CSS file, as described below, and loaded using syntax similar to:
<link href="path/to/file.css" rel="stylesheet" type="text/css">
This further decouples the styling from the HTML document and makes it possible to restyle multiple documents by simply editing a shared external CSS file.
Sources
CSS, or Cascading Style Sheets, offers a flexible way to style web content, with styles originating from browser defaults, user preferences, or web designers. These styles can be applied inline, within an HTML document, or through external .css files for broader consistency. Not only does this simplify web development by promoting reusability and maintainability, it also improves site performance because styles can be offloaded into dedicated .css files that browsers can cache. Additionally, even if the styles cannot be loaded or are disabled, this separation maintains the accessibility and readability of the content, ensuring that the site is usable for all users, including those with disabilities. Its multi-faceted approach, including considerations for selector specificity, rule order, and media types, ensures that websites are visually coherent and adaptive across different devices and user needs, striking a balance between design intent and user accessibility.
Multiple style sheets
Multiple style sheets can be imported. Different styles can be applied depending on the output device being used; for example, the screen version can be quite different from the printed version, so authors can tailor the presentation appropriately for each medium.
Cascading
The style sheet with the highest priority controls the content display. Declarations not set in the highest priority source are passed on to a source of lower priority, such as the user agent style. The process is called cascading.
One of the goals of CSS is to allow users greater control over presentation. Someone who finds red italic headings difficult to read may apply a different style sheet. Depending on the browser and the website, a user may choose from various style sheets provided by the designers, or may remove all added styles, and view the site using the browser's default styling, or may override just the red italic heading style without altering other attributes. Browser extensions like Stylish and Stylus have been created to facilitate the management of such user style sheets. In the case of large projects, cascading can be used to determine which style has a higher priority when developers do integrate third-party styles that have conflicting priorities, and to further resolve those conflicts. Additionally, cascading can help create themed designs, which help designers fine-tune aspects of a design without compromising the overall layout.
CSS priority scheme
Specificity
Specificity refers to the relative weights of various rules. It determines which styles apply to an element when more than one rule could apply. Based on the specification, a simple selector (e.g. H1) has a specificity of 1, class selectors have a specificity of 1,0, and ID selectors have a specificity of 1,0,0. Because the specificity values do not carry over as in the decimal system, commas are used to separate the "digits" (a CSS rule having 11 elements and 11 classes would have a specificity of 11,11, not 121).
Thus, the selectors of a rule determine its specificity. For example, using the comma notation described above:
h1 has specificity 1 (one element)
p.note has specificity 1,1 (one class and one element)
#menu has specificity 1,0,0 (one ID)
#menu li.active has specificity 1,1,1 (one ID, one class, and one element)
Examples
Consider this HTML fragment:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<style>
#xyz { color: blue; }
</style>
</head>
<body>
<p id="xyz" style="color: green;">To demonstrate specificity</p>
</body>
</html>
In the above example, the declaration in the style attribute overrides the one in the <style> element because it has a higher specificity, and thus, the paragraph appears green:
To demonstrate specificity
Inheritance
Inheritance is a key feature in CSS; it relies on the ancestor-descendant relationship to operate. Inheritance is the mechanism by which properties are applied not only to a specified element but also to its descendants. Inheritance relies on the document tree, which is the hierarchy of XHTML elements in a page based on nesting. Descendant elements may inherit CSS property values from any ancestor element enclosing them.
In general, descendant elements inherit text-related properties, but their box-related properties are not inherited. Properties that can be inherited are color, font, letter spacing, line-height, list-style, text-align, text-indent, text-transform, visibility, white-space, and word-spacing. Properties that cannot be inherited are background, border, display, float and clear, height, and width, margin, min- and max-height and -width, outline, overflow, padding, position, text-decoration, vertical-align, and z-index.
Inheritance can be used to avoid declaring certain properties over and over again in a style sheet, allowing for shorter CSS.
Inheritance in CSS is not the same as inheritance in class-based programming languages, where it is possible to define class B as "like class A, but with modifications". With CSS, it is possible to style an element with "class A, but with modifications". However, it is not possible to define a CSS class B like that, which could then be used to style multiple elements without having to repeat the modifications.
Example
Given the following style sheet:
p {
color: pink;
}
Suppose there is a p element with an emphasis element (em) inside:
<p>
This is to <em>illustrate</em> inheritance
</p>
If no color is assigned to the em element, the emphasized word "illustrate" inherits the color of the parent element, p. The style sheet p has the color pink, hence, the em element is likewise pink:
This is to illustrate inheritance
Whitespace
The whitespace between properties and selectors is ignored. This code snippet:
body{overflow:hidden;background:#000000;background-image:url(images/bg.gif);background-repeat:no-repeat;background-position:left top;}
is functionally equivalent to this one:
body {
overflow: hidden;
background-color: #000000;
background-image: url(images/bg.gif);
background-repeat: no-repeat;
background-position: left top;
}
Indentation
One common way to format CSS for readability is to indent each property and give it its own line. In addition to formatting CSS for readability, shorthand properties can be used to write out the code faster, which also gets processed more quickly when being rendered:
body {
overflow: hidden;
background: #000 url(images/bg.gif) no-repeat left top;
}
Sometimes, multiple property values are indented onto their own line:
@font-face {
font-family: 'Comic Sans';
src: url('first.example.com'),
url('second.example.com'),
url('third.example.com'),
url('fourth.example.com');
}
Positioning
CSS 2.1 defines three positioning schemes:
Normal flow Inline items are laid out in the same way as the letters in words in the text, one after the other across the available space until there is no more room, then starting a new line below. Block items stack vertically, like paragraphs and like the items in a bulleted list. Normal flow also includes the relative positioning of block or inline items and run-in boxes.
Floats A floated item is taken out of the normal flow and shifted to the left or right as far as possible in the space available. Other content then flows alongside the floated item.
Absolute positioning An absolutely positioned item has no place in, and no effect on, the normal flow of other items. It occupies its assigned position in its container independently of other items.
Position property
There are five possible values of the position property: static, relative, absolute, fixed, and sticky. If an item is positioned in any way other than static, then the further properties top, bottom, left, and right are used to specify offsets and positions. An element with position static is not affected by the top, bottom, left, or right properties.
Static
The default value places the item in the normal flow.
Relative
The item is placed in the normal flow, and then shifted or offset from that position. Subsequent flow items are laid out as if the item had not been moved.
Absolute
Specifies absolute positioning. The element is positioned in relation to its nearest non-static ancestor.
Fixed
The item is absolutely positioned in a fixed position on the screen even as the rest of the document is scrolled
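Sticky
The item is treated as relatively positioned until its scrolling container passes a specified threshold (given with top, bottom, left, or right), after which it is treated as fixed within its containing block.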
Float and clear
The float property may have one of three values: left, right, or none. Absolutely positioned or fixed items cannot be floated. Other elements normally flow around floated items, unless they are prevented from doing so by their clear property (see the sketch after this list).
left The item floats to the left of the line that it would have appeared in; other items may flow around its right side.
right The item floats to the right of the line that it would have appeared in; other items may flow around its left side.
clear Forces the element to appear underneath ('clear') floated elements to the left (clear: left), right (clear: right) or both sides (clear: both).
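A minimal sketch of these properties (the class names are illustrative):
img.portrait {
  float: left;
}
p.caption {
  clear: left; /* forced below any left-floated elements */
}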
History
CSS was first proposed by Håkon Wium Lie on 10 October 1994. At the time, Lie was working with Tim Berners-Lee at CERN. Several other style sheet languages for the web were proposed around the same time, and discussions on public mailing lists and inside World Wide Web Consortium resulted in the first W3C CSS Recommendation (CSS1) being released in 1996. In particular, a proposal by Bert Bos was influential; he became co-author of CSS1, and is regarded as co-creator of CSS.
Style sheets have existed in one form or another since the beginnings of Standard Generalized Markup Language (SGML) in the 1980s, and CSS was developed to provide style sheets for the web. One requirement for a web style sheet language was for style sheets to come from different sources on the web. Therefore, existing style sheet languages like DSSSL and FOSI were not suitable. CSS, on the other hand, let a document's style be influenced by multiple style sheets by way of "cascading" styles.
As HTML grew, it came to encompass a wider variety of stylistic capabilities to meet the demands of web developers. This evolution gave the designer more control over site appearance, at the cost of more complex HTML. Variations in web browser implementations, such as ViolaWWW and WorldWideWeb, made consistent site appearance difficult, and users had less control over how web content was displayed. The browser/editor developed by Tim Berners-Lee had style sheets that were hard-coded into the program. The style sheets could therefore not be linked to documents on the web. Robert Cailliau, also of CERN, wanted to separate the structure from the presentation so that different style sheets could describe different presentation for printing, screen-based presentations, and editors.
Improving web presentation capabilities was a topic of interest to many in the web community and nine different style sheet languages were proposed on the www-style mailing list. Of these nine proposals, two were especially influential on what became CSS: Cascading HTML Style Sheets and Stream-based Style Sheet Proposal (SSP). Two browsers served as testbeds for the initial proposals; Lie worked with Yves Lafon to implement CSS in Dave Raggett's Arena browser. Bert Bos implemented his own SSP proposal in the Argo browser. Thereafter, Lie and Bos worked together to develop the CSS standard (the 'H' was removed from the name because these style sheets could also be applied to other markup languages besides HTML).
Lie's proposal was presented at the "Mosaic and the Web" conference (later called WWW2) in Chicago, Illinois in 1994, and again with Bert Bos in 1995. Around this time the W3C was already being established and took an interest in the development of CSS. It organized a workshop toward that end chaired by Steven Pemberton. This resulted in W3C adding work on CSS to the deliverables of the HTML editorial review board (ERB). Lie and Bos were the primary technical staff on this aspect of the project, with additional members, including Thomas Reardon of Microsoft, participating as well. In August 1996, Netscape Communications Corporation presented an alternative style sheet language called JavaScript Style Sheets (JSSS). The spec was never finished, and is deprecated. By the end of 1996, CSS was ready to become official, and the CSS level 1 Recommendation was published in December.
Development of HTML, CSS, and the DOM had all been taking place in one group, the HTML Editorial Review Board (ERB). Early in 1997, the ERB was split into three working groups: HTML Working Group, chaired by Dan Connolly of W3C; DOM Working group, chaired by Lauren Wood of SoftQuad; and CSS Working Group, chaired by Chris Lilley of W3C.
The CSS Working Group began tackling issues that had not been addressed with CSS level 1, resulting in the creation of CSS level 2 on November 4, 1997. It was published as a W3C Recommendation on May 12, 1998. CSS level 3, which was started in 1998, is still under development.
In 2005, the CSS Working Groups decided to enforce the requirements for standards more strictly. This meant that already published standards like CSS 2.1, CSS 3 Selectors, and CSS 3 Text were pulled back from Candidate Recommendation to Working Draft level.
Difficulty with adoption
The CSS 1 specification was completed in 1996. Microsoft's Internet Explorer 3 was released that year, featuring some limited support for CSS. IE 4 and Netscape 4.x added more support, but it was typically incomplete and had many bugs that prevented CSS from being usefully adopted. It was more than three years before any web browser achieved near-full implementation of the specification. Internet Explorer 5.0 for the Macintosh, shipped in March 2000, was the first browser to have full (better than 99 percent) CSS 1 support, surpassing Opera, which had been the leader since its introduction of CSS support fifteen months earlier. Other browsers followed soon afterward, and many of them additionally implemented parts of CSS 2.
However, even when later "version 5" web browsers began to offer a fairly full implementation of CSS, they were still incorrect in certain areas. They were fraught with inconsistencies, bugs, and other quirks. Microsoft Internet Explorer 5.x for Windows, as opposed to the very different IE for Macintosh, had a flawed implementation of the CSS box model, as compared with the CSS standards. Such inconsistencies and variation in feature support made it difficult for designers to achieve a consistent appearance across browsers and platforms without the use of workarounds termed CSS hacks and filters. The IE Windows box model bugs were so serious that, when Internet Explorer 6 was released, Microsoft introduced a backward-compatible mode of CSS interpretation ("quirks mode") alongside an alternative, corrected "standards mode". It therefore became necessary for authors of HTML files to include a special distinctive marker (a standards-mode DOCTYPE declaration) to show that they intended CSS to be interpreted correctly, in compliance with standards, as opposed to being intended for the now long-obsolete IE5/Windows browser. Other non-Microsoft browsers also provided mode-switch capabilities. Without this marker, web browsers with the "quirks mode"-switching capability will size objects in web pages as IE 5 on Windows would, rather than following CSS standards.
Problems with the patchy adoption of CSS and errata in the original specification led the W3C to revise the CSS 2 standards into CSS 2.1, which moved nearer to a working snapshot of current CSS support in HTML browsers. Some CSS 2 properties that no browser successfully implemented were dropped, and in a few cases, defined behaviors were changed to bring the standard into line with the predominant existing implementations. CSS 2.1 became a Candidate Recommendation on February 25, 2004, but CSS 2.1 was pulled back to Working Draft status on June 13, 2005, and only returned to Candidate Recommendation status on July 19, 2007.
In addition to these problems, the .css extension was used by a software product used to convert PowerPoint files into Compact Slide Show files, so some web servers served all .css as MIME type application/x-pointplus rather than text/css.
Vendor prefixes
Individual browser vendors occasionally introduced new parameters ahead of standardization and universalization. To prevent interfering with future implementations, vendors prepended unique names to the parameters, such as -moz- for Mozilla Firefox, -webkit- named after the browsing engine of Apple Safari, -o- for Opera Browser and -ms- for Microsoft Internet Explorer and early versions of Microsoft Edge that use EdgeHTML.
Occasionally, the parameters with vendor prefixes such as -moz-radial-gradient and -webkit-linear-gradient have slightly different syntax as compared to their non-vendor-prefix counterparts.
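For example, a rule targeting older WebKit-based browsers alongside the standardized form might be written as follows, with the legacy declaration listed first so that supporting browsers use the standard one:
.fade {
  background: -webkit-linear-gradient(top, #fff, #000); /* legacy WebKit syntax */
  background: linear-gradient(to bottom, #fff, #000); /* standardized syntax */
}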
Prefixed properties are rendered obsolete by the time of standardization. Programs are available to automatically add prefixes for older browsers and to point out standardized versions of prefixed parameters. Since prefixes are limited to a small subset of browsers, removing the prefix allows other browsers to see the functionality. An exception is certain obsolete -webkit- prefixed properties, which are so common and persistent on the web that other families of browsers have decided to support them for compatibility.
CSS has various levels and profiles. Each level of CSS builds upon the last, typically adding new features and typically denoted as CSS 1, CSS 2, CSS 3, and CSS 4. Profiles are typically a subset of one or more levels of CSS built for a particular device or user interface. Currently, there are profiles for mobile devices, printers, and television sets. Profiles should not be confused with media types, which were added in CSS 2.
CSS 1
The first CSS specification to become an official W3C Recommendation is CSS level 1, published on 17 December 1996. Håkon Wium Lie and Bert Bos are credited as the original developers. Among its capabilities are support for
Font properties such as typeface and emphasis
Color of text, backgrounds, and other elements
Text attributes such as spacing between words, letters, and lines of text
Alignment of text, images, tables and other elements
Margin, border, padding, and positioning for most elements
Unique identification and generic classification of groups of attributes
The W3C no longer maintains the CSS 1 Recommendation.
CSS 2
CSS level 2 specification was developed by the W3C and published as a recommendation in May 1998. A superset of CSS 1, CSS 2 includes a number of new capabilities like absolute, relative, and fixed positioning of elements and z-index, the concept of media types, support for aural style sheets (which were later replaced by the CSS 3 speech modules) and bidirectional text, and new font properties such as shadows.
The W3C no longer maintains the CSS 2 recommendation.
CSS 2.1
CSS level 2 revision 1, often referred to as "CSS 2.1", fixes errors in CSS 2, removes poorly supported or not fully interoperable features and adds already implemented browser extensions to the specification. To comply with the W3C Process for standardizing technical specifications, CSS 2.1 went back and forth between Working Draft status and Candidate Recommendation status for many years. CSS 2.1 first became a Candidate Recommendation on 25 February 2004, but it was reverted to a Working Draft on 13 June 2005 for further review. It returned to Candidate Recommendation on 19 July 2007 and then updated twice in 2009. However, because changes and clarifications were made, it again went back to Last Call Working Draft on 7 December 2010.
CSS 2.1 went to Proposed Recommendation on 12 April 2011. After being reviewed by the W3C Advisory Committee, it was finally published as a W3C Recommendation on 7 June 2011.
CSS 2.1 was planned as the first and final revision of level 2—but low-priority work on CSS 2.2 began in 2015.
CSS 3
Unlike CSS 2, which is a large single specification defining various features, CSS 3 is divided into several separate documents called "modules". Each module adds new capabilities or extends features defined in CSS 2, preserving backward compatibility. Work on CSS level 3 started around the time of publication of the original CSS 2 recommendation. The earliest CSS 3 drafts were published in June 1999.
Due to the modularization, different modules have different stability and statuses.
Some modules have Candidate Recommendation (CR) status and are considered moderately stable. At CR stage, implementations are advised to drop vendor prefixes.
CSS 4
There is no single, integrated CSS4 specification, because the specification has been split into many separate modules, each of which advances through levels independently.
Modules that build on things from CSS Level 2 started at Level 3. Some of them have already reached Level 4 or are already approaching Level 5. Other modules that define entirely new functionality, such as Flexbox, have been designated as Level 1 and some of them are approaching Level 2.
The CSS Working Group sometimes publishes "Snapshots", a collection of whole modules and parts of other drafts that are considered stable enough to be implemented by browser developers. So far, five such "best current practices" documents have been published as Notes, in 2007, 2010, 2015, 2017, and 2018.
Since these specification snapshots are primarily intended for developers, there has been a growing demand for a similar versioned reference document targeted at authors, which would present the state of interoperable implementations as documented by sites such as Can I Use... and the MDN Web Docs. A W3C Community Group was established in early 2020 in order to discuss and define such a resource. The actual kind of versioning is also up for debate, which means that the document, once produced, might not be called "CSS4".
Browser support
Each web browser uses a layout engine to render web pages, and support for CSS functionality is not consistent between them. Because browsers do not parse CSS perfectly, multiple coding techniques have been developed to target specific browsers with workarounds (commonly known as CSS hacks or CSS filters). The adoption of new functionality in CSS can be hindered by a lack of support in major browsers. For example, Internet Explorer was slow to add support for many CSS 3 features, which slowed the adoption of those features and damaged the browser's reputation among developers; some of its versions also used a proprietary syntax for the non-vendor-prefixed filter property. In order to ensure a consistent experience for their users, web developers often test their sites across multiple operating systems, browsers, and browser versions, increasing development time and complexity. Tools such as BrowserStack have been built to reduce the complexity of maintaining these environments.
In addition to these testing tools, many sites maintain lists of browser support for specific CSS properties, including CanIUse and the MDN Web Docs. Additionally, CSS 3 defines feature queries, which provide an @supports directive that will allow developers to target browsers with support for certain functionality directly within their CSS. CSS that is not supported by older browsers can also sometimes be patched in using JavaScript polyfills, which are pieces of JavaScript code designed to make browsers behave consistently. These workarounds—and the need to support fallback functionality—can add complexity to development projects, and consequently, companies frequently define a list of browser versions that they will and will not support.
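A feature query of the kind described above might look like the following; browsers that do not recognize @supports simply skip the block:
@supports (display: grid) {
  main {
    display: grid;
  }
}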
As websites adopt newer code standards that are incompatible with older browsers, these browsers can be cut off from accessing many of the resources on the web (sometimes intentionally). Many of the most popular sites on the internet are not just visually degraded on older browsers due to poor CSS support but do not work at all, in large part due to the evolution of JavaScript and other web technologies.
Limitations
Some noted limitations of the current capabilities of CSS include:
Cannot explicitly declare new scope independently of position
Scoping rules for properties such as z-index look for the closest parent element with a position: absolute or position: relative attribute. This odd coupling has undesired effects. For example, it is impossible to avoid declaring a new scope when one is forced to adjust an element's position, preventing one from using the desired scope of a parent element.
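A brief sketch of this coupling (the class names are illustrative):
.parent {
  position: relative;
  z-index: 1; /* positioning plus z-index creates a new stacking context */
}
.child {
  position: absolute;
  z-index: 999; /* interpreted within the parent's stacking context, however large the value */
}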
Pseudo-class dynamic behavior not controllable
CSS implements pseudo-classes that allow a degree of user feedback by conditional application of alternate styles. One CSS pseudo-class, ":hover", is dynamic (equivalent of JavaScript "onmouseover") and has potential for misuse (e.g., implementing cursor-proximity popups), but CSS has no ability for a client to disable it (no "disable"-like property) or limit its effects (no "nochange"-like values for each property).
Cannot name rules
There is no way to name a CSS rule, which would allow (for example) client-side scripts to refer to the rule even if its selector changes.
Cannot include styles from a rule into another rule
CSS styles often must be duplicated in several rules to achieve the desired effect, causing additional maintenance and requiring more thorough testing. Some new CSS features were proposed to solve this but were abandoned afterward. Instead, authors may gain this ability by using more sophisticated stylesheet languages which compile to CSS, such as Sass, Less, or Stylus.
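As a brief sketch of how such languages address this, an SCSS mixin allows one set of declarations to be included in multiple rules (the names are illustrative):
@mixin alert-text {
  color: red;
  font-weight: bold;
}
.warning {
  @include alert-text; /* the compiled CSS repeats these declarations */
}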
Cannot target specific text without altering markup
Besides pseudo-elements such as ::first-letter and ::first-line, one cannot target specific ranges of text without needing to utilize placeholder elements.
Advantages
Separation of content from presentation
CSS facilitates the publication of content in multiple presentation formats by adjusting styles based on various nominal parameters. These parameters include explicit user preferences (such as themes or font size), compatibility with different web browsers, the type of device used to view the content (e.g., desktop, tablet, or mobile device), screen resolutions, the geographic location of the user, and many other variables. CSS also enables responsive design, ensuring that content dynamically adapts to different screen sizes and orientations, enhancing accessibility and user experience across a wide range of environments.
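For instance, a responsive rule might adapt a layout to narrow screens (the selector is illustrative):
@media (max-width: 600px) {
  .sidebar {
    display: none; /* hide the sidebar on small viewports */
  }
}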
Site-wide consistency
When CSS is used effectively, in terms of inheritance and "cascading", a global style sheet can be used to affect and style elements site-wide. If the situation arises that the styling of the elements should be changed or adjusted, these changes can be made by editing rules in the global style sheet. Before CSS, this sort of maintenance was more difficult, expensive, and time-consuming.
Bandwidth
A stylesheet, internal or external, specifies the style once for a range of HTML elements selected by class, type or relationship to others. This is much more efficient than repeating style information inline for each occurrence of the element. An external stylesheet is usually stored in the browser cache, and can therefore be used on multiple pages without being reloaded, further reducing data transfer over a network.
Page reformatting
With a simple change of one line, a different style sheet can be used for the same page. This has advantages for accessibility, as well as providing the ability to tailor a page or site to different target devices. Furthermore, devices not able to understand the styling still display the content.
Accessibility
Without CSS, web designers must typically lay out their pages with techniques such as HTML tables that hinder accessibility for vision-impaired users (see tableless web design).
Standardization
Frameworks
CSS frameworks are prepared libraries that are meant to allow for easier, more standards-compliant styling of web pages using the Cascading Style Sheets language. CSS frameworks include Blueprint, Bootstrap, Foundation and Materialize. Like programming and scripting language libraries, CSS frameworks are usually incorporated as external .css sheets referenced in the HTML <head>. They provide a number of ready-made options for designing and laying out the web page. Although many of these frameworks have been published, some authors use them mostly for rapid prototyping, or for learning from, and prefer to 'handcraft' CSS that is appropriate to each published site without the design, maintenance and download overhead of having many unused features in the site's styling.
Design methodologies
As the size of CSS resources used in a project increases, a development team often needs to decide on a common design methodology to keep them organized. The goals are ease of development, ease of collaboration during development, and performance of the deployed stylesheets in the browser. Popular methodologies include OOCSS (object-oriented CSS), ACSS (atomic CSS), oCSS (organic Cascade Style Sheet), SMACSS (scalable and modular architecture for CSS), and BEM (block, element, modifier).
See also
Flash of unstyled content
References
Further reading
MDN CSS reference
MDN Getting Started with CSS
External links
Internet properties established in 1996
Stylesheet languages
Typesetting programming languages
Web design
World Wide Web Consortium standards
Open formats | CSS | [
"Engineering"
] | 8,243 | [
"Design",
"Web design"
] |
23,290,471 | https://en.wikipedia.org/wiki/Group%2011%20element |
Group 11, by modern IUPAC numbering, is a group of chemical elements in the periodic table, consisting of copper (Cu), silver (Ag), gold (Au), and roentgenium (Rg), although no chemical experiments have yet been carried out to confirm that roentgenium behaves like the heavier homologue to gold. Group 11 is also known as the coinage metals, due to their usage in minting coins—while the rise in metal prices means that silver and gold are no longer used for circulating currency, remaining in use for bullion, copper remains a common metal in coins to date, either in the form of copper-clad coinage or as part of the cupronickel alloy. They were most likely the first three elements discovered. Copper, silver, and gold all occur naturally in elemental form.
History
All three stable elements of the group have been known since prehistoric times, as all of them occur in metallic form in nature and no extraction metallurgy is necessary to produce them.
Copper was known and used from around 4000 BC, and many items, weapons, and materials were made from it.
The first evidence of silver mining dates back to 3000 BC, in Turkey and Greece, according to the RSC. Ancient people even figured out how to refine silver.
The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, c. 40,000 BC. Gold artifacts made their first appearance at the very beginning of the pre-dynastic period in Egypt, at the end of the fifth millennium BC and the start of the fourth, and smelting was developed during the course of the 4th millennium BC; gold artifacts appear in the archeology of Lower Mesopotamia during the early 4th millennium BC.
Roentgenium was first made in 1994 by bombarding a bismuth-209 target with nickel-64 ions to produce roentgenium-272.
Characteristics
Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior, although roentgenium is probably an exception:
All group 11 elements are relatively inert, corrosion-resistant metals. Copper and gold are colored, but silver is not. Roentgenium is expected to be silvery, though it has not been produced in large enough amounts to confirm this.
These elements have low electrical resistivity so they are used for wiring. Copper is the cheapest and most widely used. Bond wires for integrated circuits are usually gold. Silver and silver-plated copper wiring are found in some special applications.
Occurrence
Copper occurs in its native form in Chile, China, Mexico, Russia and the USA. Various natural ores of copper are: copper pyrites (CuFeS2), cuprite or ruby copper (Cu2O), copper glance (Cu2S), malachite (Cu(OH)2·CuCO3), and azurite (Cu(OH)2·2CuCO3).
Copper pyrite is the principal ore, and yields nearly 76% of the world production of copper.
Production
Silver is found in native form, as an alloy with gold (electrum), and in ores containing sulfur, arsenic, antimony or chlorine. Ores include argentite (Ag2S), chlorargyrite (AgCl) which includes horn silver, and pyrargyrite (Ag3SbS3). Silver is extracted using the Parkes process.
Applications
These metals, especially silver, have unusual properties that make them essential for industrial applications outside of their monetary or decorative value. They are all excellent conductors of electricity. The most conductive (by volume) of all metals are silver, copper and gold in that order. Silver is also the most thermally conductive element, and the most light reflecting element. Silver also has the unusual property that the tarnish that forms on silver is still highly electrically conductive.
Copper is used extensively in electrical wiring and circuitry. Gold contacts are sometimes found in precision equipment for their ability to remain corrosion-free. Silver is used widely in mission-critical applications as electrical contacts, and is also used in photography (because silver nitrate reverts to metal on exposure to light), agriculture, medicine, audiophile and scientific applications.
Gold, silver, and copper are quite soft metals and so are easily damaged in daily use as coins. Precious metal may also be easily abraded and worn away through use. In their numismatic functions these metals must be alloyed with other metals to afford coins greater durability. The alloying with other metals makes the resulting coins harder, less likely to become deformed and more resistant to wear.
Gold coins: Gold coins are typically produced as either 90% gold (e.g. with pre-1933 US coins), or 22 carat (91.66%) gold (e.g. current collectible coins and Krugerrands), with copper and silver making up the remaining weight in each case. Bullion gold coins are being produced with up to 99.999% gold (in the Canadian Gold Maple Leaf series).
Silver coins: Silver coins are typically produced as either 90% silver – in the case of pre-1965 US minted coins (which were circulated in many countries), or sterling silver (92.5%) coins for pre-1920 British Commonwealth and other silver coinage, with copper making up the remaining weight in each case. Old European coins were commonly produced with 83.5% silver. Modern silver bullion coins are often produced with purity varying from 99.9% to 99.999%.
Copper coins: Copper coins are often of quite high purity, around 97%, and are usually alloyed with small amounts of zinc and tin.
Inflation has caused the face value of coins to fall below the hard currency value of the historically used metals. This has led to most modern coins being made of base metals – copper-nickel (around 80:20, silver in color) is popular, as are nickel-brass (copper (75), nickel (5) and zinc (20), gold in color), manganese-brass (copper, zinc, manganese, and nickel), bronze, and simple plated steel.
Biological role and toxicity
Copper, although toxic in excessive amounts, is essential for life. It can be found in hemocyanin, cytochrome c oxidase and in superoxide dismutase. Copper has antimicrobial properties, which make it useful for hospital doorknobs to limit the spread of disease. Eating food from copper containers is known to increase the risk of copper toxicity. Wilson's disease is a genetic condition in which a protein important for the excretion of excess copper is mutated, so that copper builds up in body tissues, causing symptoms including vomiting, weakness, tremors, anxiety, and muscle stiffness.
Elemental gold and silver have no known toxic effects or biological use, although gold salts can be toxic to liver and kidney tissue. Like copper, silver also has antimicrobial properties. The prolonged use of preparations containing gold or silver can also lead to the accumulation of these metals in body tissue; the results of which are irreversible but apparently harmless pigmentation conditions known as chrysiasis and argyria respectively.
Because it is short-lived and radioactive, roentgenium has no biological use, and its radioactivity would likely make it extremely harmful.
References
Groups (periodic table)
Currency production | Group 11 element | [
"Chemistry"
] | 1,618 | [
"Periodic table",
"Groups (periodic table)"
] |
23,290,990 | https://en.wikipedia.org/wiki/Recursive%20language | In mathematics, logic and computer science, a formal language (a set of finite sequences of symbols taken from a fixed alphabet) is called recursive if it is a recursive subset of the set of all possible finite sequences over the alphabet of the language. Equivalently, a formal language is recursive if there exists a Turing machine that, when given a finite sequence of symbols as input, always halts and accepts it if it belongs to the language and halts and rejects it otherwise. In Theoretical computer science, such always-halting Turing machines are called total Turing machines or algorithms. Recursive languages are also called decidable.
The concept of decidability may be extended to other models of computation. For example, one may speak of languages decidable on a non-deterministic Turing machine. Therefore, whenever an ambiguity is possible, the synonym used for "recursive language" is Turing-decidable language, rather than simply decidable.
The class of all recursive languages is often called R, although this name is also used for the class RP.
This type of language was not defined in the Chomsky hierarchy. All recursive languages are also recursively enumerable. All regular, context-free and context-sensitive languages are recursive.
Definitions
There are two equivalent major definitions for the concept of a recursive language:
A recursive formal language is a recursive subset in the set of all possible words over the alphabet of the language.
A recursive language is a formal language for which there exists a Turing machine that, when presented with any finite input string, halts and accepts if the string is in the language, and halts and rejects otherwise. The Turing machine always halts: it is known as a decider and is said to decide the recursive language.
By the second definition, any decision problem can be shown to be decidable by exhibiting an algorithm for it that terminates on all inputs. An undecidable problem is a problem that is not decidable.
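Equivalently, a language L over an alphabet Σ is recursive if and only if its characteristic function χ_L, which maps a word w to 1 if w ∈ L and to 0 otherwise, is a total computable function.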
Examples
As noted above, every context-sensitive language is recursive. Thus, a simple example of a recursive language is the set L = {abc, aabbcc, aaabbbccc, ...};
more formally, the set
L = {a^n b^n c^n : n ≥ 1}
is context-sensitive and therefore recursive.
Examples of decidable languages that are not context-sensitive are more difficult to describe. For one such example, some familiarity with mathematical logic is required: Presburger arithmetic is the first-order theory of the natural numbers with addition (but without multiplication). While the set of well-formed formulas in Presburger arithmetic is context-free, every deterministic Turing machine accepting the set of true statements in Presburger arithmetic has a worst-case runtime of at least 2^(2^(cn)), for some constant c > 0. Here, n denotes the length of the given formula. Since every context-sensitive language can be accepted by a linear bounded automaton, and such an automaton can be simulated by a deterministic Turing machine with worst-case running time at most c^n for some constant c, the set of valid formulas in Presburger arithmetic is not context-sensitive. On the positive side, it is known that there is a deterministic Turing machine running in time at most triply exponential in n that decides the set of true formulas in Presburger arithmetic. Thus, this is an example of a language that is decidable but not context-sensitive.
Closure properties
Recursive languages are closed under the following operations. That is, if L and P are two recursive languages, then the following languages are recursive as well:
The Kleene star L*
The image φ(L) under an e-free homomorphism φ
The concatenation L · P
The union L ∪ P
The intersection L ∩ P
The complement of L
The set difference L − P
The last property follows from the fact that the set difference can be expressed in terms of intersection and complement.
See also
Recursively enumerable language
Computable set
Recursion
References
Computability theory
Formal languages
Theory of computation
Recursion | Recursive language | [
"Mathematics"
] | 844 | [
"Computability theory",
"Formal languages",
"Mathematical logic",
"Recursion"
] |