id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
1,095,311 | https://en.wikipedia.org/wiki/Pentagastrin | Pentagastrin (trade name Peptavlon) is a synthetic polypeptide that has effects like gastrin when given parenterally. It stimulates the secretion of gastric acid, pepsin, and intrinsic factor, and has been used as a diagnostic aid in the pentagastrin-stimulated calcitonin test.
Pentagastrin binds to the cholecystokinin-B receptor, which is expressed widely in the brain. Activation of these receptors activates the phospholipase C second messenger system. When given intravenously it may cause panic attacks.
Pentagastrin's IUPAC chemical name is "N-((1,1-dimethylethoxy)carbonyl)-beta-alanyl-L-tryptophyl-L-methionyl-L-alpha-aspartyl-L-phenylalaninamide".
Pentagastrin stimulation test
Pentagastrin is also used as a stimulation test to elevate levels of several hormones, such as serotonin. It provokes flushing and is useful in evaluating patients who describe flushing but have normal or only marginally elevated biochemical markers for carcinoid syndrome.
It has been used to stimulate ectopic gastric mucosa for the detection of Meckel's diverticulum by nuclear medicine.
Calcitonin test
The pentagastrin-stimulated calcitonin test is a diagnostic test for medullary carcinoma of the thyroid (MTC). MTC is a malignancy of the calcitonin-secreting cells of the thyroid gland, and thus MTC is commonly associated with an elevated calcitonin level, but an elevated level may not always be obvious. The pentagastrin-stimulated calcitonin test is useful in cases of suspected MTC that are not associated with elevated calcitonin. In these patients, injecting pentagastrin will cause calcitonin levels to rise significantly above the normal or basal range. After a total thyroidectomy for medullary thyroid carcinoma, the pentagastrin-stimulated calcitonin release can be used to detect residual parafollicular C-cells.
See also
CCK-4
References
Peptides
Blood tests
Cholecystokinin agonists
Tert-Butyl esters | Pentagastrin | [
"Chemistry"
] | 498 | [
"Blood tests",
"Biomolecules by chemical classification",
"Molecular biology",
"Chemical pathology",
"Peptides"
] |
1,095,918 | https://en.wikipedia.org/wiki/Extractive%20distillation | Extractive distillation is defined as distillation in the presence of a miscible, high-boiling, relatively non-volatile component, the solvent, that forms no azeotrope with the other components in the mixture. The method is used for mixtures having a low value of relative volatility, nearing unity. Such mixtures cannot be separated by simple distillation, because the volatility of the two components in the mixture is nearly the same, causing them to evaporate at nearly the same temperature at a similar rate, making normal distillation impractical.
The method of extractive distillation uses a separation solvent, which is generally non-volatile, has a high boiling point and is miscible with the mixture, but doesn't form an azeotropic mixture. The solvent interacts differently with the components of the mixture thereby causing their relative volatilities to change. This enables the new three-part mixture to be separated by normal distillation. The original component with the greatest volatility separates out as the top product. The bottom product consists of a mixture of the solvent and the other component, which can again be separated easily because the solvent does not form an azeotrope with it. The bottom product can be separated by any of the methods available.
It is important to select a suitable separation solvent for this type of distillation. The solvent must alter the relative volatility by a wide enough margin for a successful result. The quantity, cost and availability of the solvent should be considered. The solvent should be easily separable from the bottom product, and should not react chemically with the components or the mixture, or cause corrosion in the equipment. A classic example to be cited here is the separation of an azeotropic mixture of benzene and cyclohexane, where aniline is one suitable solvent.
See also
Batch distillation
Heteroazeotrope
Theoretical plate
References
External links
Extractive Distillation
Distillation | Extractive distillation | [
"Chemistry"
] | 415 | [
"Distillation",
"Separation processes"
] |
16,607,009 | https://en.wikipedia.org/wiki/Airwatt | Airwatt or air watt is a unit of measurement that represents the true suction power of vacuum cleaners. It is calculated by multiplying the airflow (in cubic metres per second) by the suction pressure (in pascals). This measurement reflects the energy per unit time of the air flowing through the vacuum's opening, which correlates to the electrical energy (wattage) supplied through the power cable.
The airwatt is a valuable measurement of vacuum cleaner motor efficiency because it represents the power carried by the fluid flow (in the case of a typical household vacuum, this fluid is air). The power of the airflow is equal to the product of pressure and volumetric flow rate. Unlike electrical power (measured in watts), which includes energy lost due to inefficiencies, the airwatt directly reflects the actual airflow and suction power. Therefore, two vacuum cleaners with the same airwattage will have essentially the same suction, whereas devices with the same electrical wattage might vary significantly in efficiency, resulting in different airwattage levels.
Definition
The "power in airwatts" (meaning: effective power in watts) is calculated as the product of suction pressure and the air flow rate:
Where is the power in airwatts, is the suction pressure in pascals, and is the air flow rate in cubic metres per second:
Equivalently, in SI base units:
An alternative airwattage formula is from ASTM International (see document ASTM F558 - 13)
Where P is the power in airwatts, F is the rate per minute (denoted cu ft/min or CFM) and S is the suction capacity expressed as a pressure in units of inches of water.
Some manufacturers choose to use the fraction rather than the ASTM decimal, leading to a less than 0.25% variation in their calculations.
Where airflow in Cubic Feet per Minute [CFM] is calculated using
airflow = / vacuum
Where D is the diameter of the orifices.
CFM is always quoted at its maximum, which occurs at the widest (fully open) opening. Waterlift, on the other hand, is always given at its maximum: a 0-inch (fully sealed) opening. When waterlift is measured at a 0-inch opening, the flow rate is zero – no air is moving, thus the power is also 0 airwatts. So one then needs to analyse the curve created by both flow rate and waterlift as the opening changes from fully closed to fully open; somewhere along this line the power will attain its maximum.
If the flow rate were given in litres per second (L/s), then the pressure would be in kilopascals (kPa). Thus one watt equals one kilopascal times one litre per second: 1 W = 1 kPa × 1 L/s.
The ratio between the Airwatt rating (power produced in the flow) and electrical watts (power from voltage and current) is the efficiency of the vacuum.
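As a numerical illustration of the two formulations above (a minimal sketch; the ASTM constant and the unit conversions are commonly quoted values and should be checked against the relevant edition of ASTM F558):

```python
def airwatts_si(pressure_pa, flow_m3_s):
    """Suction power in airwatts from SI quantities: P = p * q."""
    return pressure_pa * flow_m3_s

def airwatts_astm(flow_cfm, suction_in_h2o, k=0.117354):
    """ASTM F558-style formula: P = k * F * S (F in CFM, S in inches of water)."""
    return k * flow_cfm * suction_in_h2o

# Example operating point: 50 CFM at 40 inches of water.
print(round(airwatts_astm(50, 40), 1))                     # ~234.7 airwatts
# The same point in SI units (1 CFM ~ 4.719e-4 m^3/s, 1 inH2O ~ 249.1 Pa).
print(round(airwatts_si(40 * 249.1, 50 * 4.719e-4), 1))    # ~235.1 airwatts
```

The small difference between the two results reflects rounding in the constant and in the unit conversions.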
Ratings recommendations
Hoover recommends 100 airwatts for upright vacuum cleaners and 220 airwatts for "cylinder" (canister) vacuum cleaners.
References
Units of power | Airwatt | [
"Physics",
"Mathematics"
] | 638 | [
"Physical quantities",
"Quantity",
"Power (physics)",
"Units of power",
"Units of measurement"
] |
16,617,601 | https://en.wikipedia.org/wiki/Ishimori%20equation | The Ishimori equation is a partial differential equation proposed by the Japanese mathematician Yuji Ishimori. Its interest is as the first example of a nonlinear spin-one field model in the plane that is integrable.
Equation
The Ishimori equation has the form
Lax representation
The Lax representation
of the equation is given by
Here
the σi are the Pauli matrices and I is the identity matrix.
Reductions
The Ishimori equation admits an important reduction:
in 1+1 dimensions it reduces to the continuous classical Heisenberg ferromagnet equation (CCHFE). The CCHFE is integrable.
Equivalent counterpart
The equivalent counterpart of the Ishimori equation is the Davey-Stewartson equation.
See also
Nonlinear Schrödinger equation
Heisenberg model (classical)
Spin wave
Landau–Lifshitz model
Soliton
Vortex
Nonlinear systems
Davey–Stewartson equation
References
External links
Ishimori_system at the dispersive equations wiki
Electric and magnetic fields in matter
Partial differential equations
Integrable systems | Ishimori equation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 204 | [
"Materials science stubs",
"Integrable systems",
"Theoretical physics",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Condensed matter stubs",
"Electromagnetism stubs"
] |
16,618,335 | https://en.wikipedia.org/wiki/Polysialic%20acid | Polysialic acid is an unusual posttranslational modification that occurs on neural cell adhesion molecules (NCAM). Polysialic acid is considerably anionic. This strong negative charge gives this modification the ability to change the protein's surface charge and binding ability. In the synapse, polysialylation of NCAM prevents it from binding to NCAMs on the adjacent membrane.
Structure
Polysialic acid (polySia) is a polymer of linearly repeating monomer units of α2,8- and α2,9-glycosidically linked sialic acid residues. Sialic acid refers to carboxylated 9-carbon sugars, 2-keto-3-deoxy-D-glycero-nononic acids. An unusual property of this sugar is that it often polymerizes into polySia; this is accomplished by attaching the monomers to the nonreducing end of the glycan, and the polymer mostly consists of Neu5Ac subunits. It is polyanionic and bulky, meaning its central residues are largely inaccessible. polySia functions in signaling in vertebrates and modifies the cell surface of a small number of glycoproteins and glycolipids, and it has recently been found that the function of polySia relates almost directly to its degree of polymerization. The number of units can range from 8 to greater than 400. This vast range causes differences in polySia's ability to make different cells adhere, assist in cellular migration and synapse formation, and regulate adhesion in nerve cells by modeling and forming them. polySia's most prominent role is in post-translational modification of a few proteins, the main one being NCAM. polySia attaches to adhesion molecules and subdues their adhesive properties, allowing detailed control of cell migration and cell-to-cell relations; this effect is caused by polySia's bulky and polyanionic properties.
The human body produces polySia naturally and attaches it to a variety of proteins. This is done by linking polySia to the α2,3- or α2,6- terminal of the glycoprotein, employing either O-linked glycosylation through threonine or N-linked glycosylation through asparagine. This polySia linkage is found in proteins such as NCAM, E-selectin ligand 1 (ESL-1), C–C chemokine receptor type 7 (CCR7), synaptic cell adhesion molecule-1 (SynCAM-1), neuropilin-2 (NRP-2), the CD36 scavenger receptor found in human milk, and the α-subunit of the voltage-sensitive sodium channel. polySia is synthesized enzymatically by α2,8-sialyltransferase (ST8Sia), a type II transmembrane protein located on the Golgi apparatus membrane. ST8Sia does this by adding sialic acids to the terminal end of the glycan from the CMP-sialic acid donor, at various lengths depending on necessity. The length is controlled extensively by the expression of polysialyltransferase enzymes, which in turn controls the function of polySia.
Discovery and methods of detection
polySia was discovered in E. coli K-235 by Barry and Goebel in 1957. E. coli is an encapsulated, gram-negative bacterium that Barry and Goebel studied, pinpointing polySia, which they called colominic acid. Following this discovery, multiple other bacterial capsules abundant in glycans were found to contain polySia, including Neisseria meningitidis serogroups B and C in 1975. This was done using a horse anti-polySia polyclonal antibody, one of the first effective immunochemical probes; this was revolutionary, as the anti-polySia antibodies could be used to find polySia on proteins and cells. Mannheimia haemolytica A2, Moraxella nonliquifaciens, and E. coli K92 were found to carry polySia in 2013. Because the capsule contains polySia, many scientists have tried to generate vaccines against these specific bacteria, which are notoriously difficult to target. However, their successes have been limited, as α2,8-polySia is naturally produced by humans. Another issue is that polySia found in bacteria does not produce a strong or consistent immune response.
Another method of polySia detection relies on molecular labeling with fluorescence. This process, introduced in 1998, involves exposing α2→8-linked N-acylneuraminic acid (Neu5Acyl) to periodate oxidation, which oxidizes the terminal residues while leaving the internal ones untouched. If C9 compounds are observed after this exposure, it indicates the presence of polySia. These can be quantified by anion exchange chromatography after periodate oxidation, using the label 1,2-diamino-4,5-methylenedioxybenzene (DMB) on C7 and C9. Many different structures of polySia are known, and these were difficult to recognize and detect until this fluorescent labeling, which makes it very advantageous.
Function in humans
polySia is involved in many natural human functions. The major examples include membranes, neuron signaling, the immune system, neutrophil extracellular trap formation, and macrophage and microglia function. First, polySia modifies membranes through interactions with a variety of factors. These can include repulsive forces between the polyanionic polySia and the mostly negatively charged glycocalyx. Because of these interactions, the membrane's ability to interact with other cells, its surface charge distribution, inter-membrane interactions, pH, and membrane potential are altered. Hydration and charge were measured before and after removing polySia from a membrane, and a 25% decrease in the distance between cells was observed; this is due to the anti-adhesive properties of polySia. polySia does not have only repulsive interactions, as there are positively charged molecules located in lipid rafts, such as NCAM. The interaction between polySia and NCAM greatly affects NCAM's signaling ability, as its composition is altered when they meet. Other forms of neuron signaling polySia is involved in include brain-derived neurotrophic factor (BDNF) and fibroblast growth factor 2 (FGF2). With nearly the same mechanism, polysialylation causes BDNF or FGF2 to form complexes through electrostatic interactions. This allows polySia to bind these complexes, making polySia act as a reservoir. polySia then regulates the concentration of neurotrophins; because they are not allowed to diffuse, signaling is more efficient. polySia is also found on immune cell surfaces. Some of the proteins involved are known, but many are not, and the mechanisms are still being studied. However, it is known that polySia has regulatory functions in the immune system, leading to protection from invaders and response to damaged tissue. polySia is involved in NETosis, which is a reaction of the body to the presence of foreign invaders: the intentional death of neutrophils. polySia ensures that this targeted cell death does not kill cells that are healthy and unaffected, and it also has antimicrobial attributes. polySia does this by binding to lactoferrin, another antimicrobial molecule, surrounding neutrophils; polySia binding causes a tighter shell of lactoferrin around the cell membrane. polySia binds with Siglec-11, allowing for the regulation of microglia through exosomes. polySia binding with Siglec-11 causes a delay in neurodegeneration and control of neuroinflammation. polySia also limits inflammation in macrophages; polySia was found to limit the expression of tumour necrosis factor (TNF).
References
Molecular biology | Polysialic acid | [
"Chemistry",
"Biology"
] | 1,684 | [
"Biochemistry",
"Molecular biology"
] |
16,621,593 | https://en.wikipedia.org/wiki/Hexafluoropropylene%20oxide | Hexafluoropropylene oxide (HFPO) is an intermediate used in industrial organofluorine chemistry; specifically it is a monomer for fluoropolymers. This colourless gas is the epoxide of hexafluoropropylene and thus a fluorinated analog of propylene oxide. HFPO is produced by Chemours and 3M as a precursor to the lubricant Krytox and related materials. It is generated by oxidation of perfluoropropylene, e.g. with oxygen as well as other oxidants.
Reactivity
Fluoride catalyzes the formation of perfluorinated polyethers such as Krytox. The initial step entails nucleophilic attack at the middle carbon to give the perfluoropropoxide anion, which in turn attacks another monomer. This process generates a polymer terminated by an acyl fluoride, which is hydrolyzed to the carboxylic acid, which is then decarboxylated with fluorine. The net polymerization reaction can be represented as:
(n + 2) CF3CFCF2O → CF3CF2CF2O(CF(CF3)CF2O)nCF2CF3 + CO
Upon heating above 150 °C, HFPO decomposes to trifluoroacetyl fluoride and difluorocarbene:
CF3CFCF2O → CF3C(O)F + CF2
The epoxide of tetrafluoroethylene is even more unstable with respect to trifluoroacetyl fluoride.
In the presence of Lewis acids the compound rearranges to hexafluoroacetone, another important chemical intermediate. This rearrangement can be of concern during storage, as it can be catalyzed by the material of the storage cylinder's walls and leads to unwanted formation of HFA during storage. As a result, 3M recommends using all HFPO shipped in carbon-steel containers within 90 days of shipping.
Methanolysis affords methyl trifluoropyruvate, a reagent useful in organic synthesis:
CF3CFCF2O + 2 MeOH → CF3C(O)CO2Me + MeF + 2 HF
References
External links
HFPO bulletin from Chemours
Dyneon HFPO from 3M
Trifluoromethyl compounds
Epoxides
Perfluorinated compounds
Monomers | Hexafluoropropylene oxide | [
"Chemistry",
"Materials_science"
] | 539 | [
"Monomers",
"Polymer chemistry"
] |
16,623,093 | https://en.wikipedia.org/wiki/Diameter%20tape | A diameter tape (D-tape) is a measuring tape used to estimate the diameter of a cylinder object, typically the stem of a tree or pipe. A diameter tape has either metric or imperial measurements reduced by the value of π. This means the tape measures the diameter of the object. It is assumed that the cylinder object is a perfect circle. The diameter tape provides an approximation of diameter; most commonly used in dendrometry.
Diameter tapes are usually made of cloth or metal, and on one side of the tape have diameter measurements and on the other standard measurements (not reduced by π).
Use of diameter tapes
Diameter tapes are used to measure tree stems (trunk or bole); other parts of trees such as branches and roots; and logs (cut stems).
Standard Diameter Height (SDH) is the height at which tree diameter is measured, and is normally called diameter at breast height (DBH).
DBH is measured at a fixed height above the ground; the standard height used in the United States, New Zealand, South Africa, India, and Malaysia differs slightly from the metric height used in Australia, Canada, Europe, Thailand and Vietnam.
DBH is not usually measured at ground level to avoid measuring a tree's butt swell. Butt swell is where the base of the tree is unconventionally thicker than the rest of the tree. Height and diameter are used to determine the volume of a given tree; measuring above the butt swell is required to provide the most accurate measurement.
For single-stem trees, DBH is a useful and is easily measured. For multi-stem species such as mallee or other multi-stemmed species, DBH may be inconvenient because of a large number of stems and difficulty of access, and a lower or higher measurement height may be more practical.
DBH measurements can be used with other measurements, such as height, to estimate the volume of wood in an individual tree. Usually DBH and height of the trees in a plot or quadrat is measured and used to estimate wood volume for the plot, which can be in turn used to estimate the wood volume in a larger area.
DBH measures can be used to calculate basal area.
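A minimal sketch of the underlying arithmetic (the function names and sample values below are illustrative only): an ordinary tape reading divided by π gives the diameter a D-tape reads directly, and basal area follows from DBH.

```python
import math

def diameter_from_girth(girth_cm):
    """An ordinary tape reads circumference; dividing by pi gives the diameter
    a D-tape would show directly (assuming a circular cross-section)."""
    return girth_cm / math.pi

def basal_area_m2(dbh_cm):
    """Cross-sectional (basal) area at breast height, in square metres."""
    radius_m = dbh_cm / 100.0 / 2.0
    return math.pi * radius_m ** 2

print(round(diameter_from_girth(100.0), 1))  # a 100 cm girth reads as ~31.8 cm DBH
print(round(basal_area_m2(31.8), 3))         # ~0.079 m^2 of basal area
```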
Measuring diameter
To measure the diameter of a tree, the diameter tape (diameter side facing user) is wrapped around the tree, in the plane perpendicular to the axis of the trunk, at the standard height above ground (which depends on the location). Where the number "0" aligns with the rest of the tape, the diameter can be read directly from the tape for a relatively round and smooth tree trunk.
See Tree girth measurement for some cautions about placement, and errors that may be introduced, by reporting this diameter for a trunk, or any material, that is not nearly circular.
Precision diameter tapes
Precision diameter tapes are used for measuring the true diameters of both round and out-of-round forms. Used in the metal working industry, these tapes are precision tools made of 1095 clock spring steel.
The precision-diameter gages consist of a narrow metal ribbon bearing special graduations and a vernier scale. The vernier scale allows the user to measure the diameter more accurately.
The tapes are checked against master gauges to verify the accuracy of standard tapes.
Precision diameter tapes are used in precision or advanced manufacturing industries, such as the aerospace field and the plastic pipe industry.
See also
Biltmore stick
Timber cruise
Tree girth measurement
Tree allometry
References
Forest modelling
Dimensional instruments
Forestry tools | Diameter tape | [
"Physics",
"Mathematics"
] | 697 | [
"Quantity",
"Dimensional instruments",
"Physical quantities",
"Size"
] |
13,825,072 | https://en.wikipedia.org/wiki/Keel%20effect | In aeronautics, the keel effect (also known as the pendulum effect or pendulum stability) is the result of the sideforce-generating surfaces being above or below the center of gravity of the aircraft. Along with dihedral, sweepback, and weight distribution, keel effect is one of the four main design considerations in aircraft lateral stability.
Mechanism
Examples of sideforce-generating surfaces are the vertical stabilizer, rudder, and parts of the fuselage. When an aircraft is in a sideslip, these surfaces generate sidewards lift forces. If the surface is above or below the center of gravity, the sidewards forces generate a rolling moment. This rolling moment caused by sideslip is dihedral effect. Keel effect is the contribution of these side forces to rolling moment as sideslip increases. Sideforce-producing surfaces above the center of gravity will increase dihedral effect, while sideforce-producing surfaces below the center of gravity will decrease dihedral effect.
Increased dihedral effect (helped or hindered by keel effect) results in a greater tendency for the aircraft to return to level flight after the aircraft is put into a bank. It reduces the tendency to diverge to a greater bank angle when the aircraft starts wings-level.
Keel effect is also called pendulum effect because a lower center of gravity increases the effect of sideways forces (above the center of gravity) in producing a rolling moment. This is because the moment arm is longer, not because of gravitational forces. A low center of gravity is like a pendulum.
The effect is an important consideration in seaplane design where pontoon floats generate strong sideforces with a long moment arm.
References
Illman, Paul; The Pilot's Handbook of Aeronautical Knowledge; Fig 2.34
Aerodynamics | Keel effect | [
"Chemistry",
"Engineering"
] | 350 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
13,829,544 | https://en.wikipedia.org/wiki/First-dose%20phenomenon | The first-dose phenomenon is a sudden and severe fall in blood pressure that can occur when changing from a lying to a standing position the first time that an alpha blocker drug is used or when resuming the drug after many months off. This postural hypotension usually happens shortly after the first dose is absorbed into the blood and can result in syncope (fainting). Syncope occurs in approximately 1% of patients given an initial dose of 2 mg prazosin or greater. This adverse effect is self-limiting and in most cases does not recur after the initial period of therapy or during subsequent dose titration.
The alpha blocker prazosin (Minipress) is most notorious for producing a first dose phenomenon. Other drugs of the same family, doxazosin (Cardura) and terazosin (Hytrin), can also cause this phenomenon, though less frequently.
The cause is not clear. It occurs more commonly in patients who are salt and fluid volume depleted (as happens due to the use of diuretics), or were using beta blockers. Diuretics and beta blockers are frequently used to control hypertension. For this reason, treatment with prazosin (Minipress) should always be initiated with a low dose and should be taken at bedtime to avoid the standing position.
Other drug classes with observed first dose hypotension
This effect is also observed after the administration of the first dose of drugs in the ACEi class (angiotensin-converting enzyme inhibitor). This may occur with the class's better known side effect of dry cough (due to decreased breakdown of bradykinin), though there is no clear relationship between the two side effects.
The first dose phenomenon in ACEi is reduced and made safer by avoiding diuretics for 24 hours prior to first dose, taking first dose at night (so avoiding falls, etc) and starting on low doses and titrating upwards.
See also
First pass effect
References
Clinical pharmacology | First-dose phenomenon | [
"Chemistry"
] | 418 | [
"Pharmacology",
"Clinical pharmacology"
] |
13,830,115 | https://en.wikipedia.org/wiki/Jump-and-Walk%20algorithm | Jump-and-Walk is an algorithm for point location in triangulations (though most of the theoretical analysis was performed in 2D and 3D random Delaunay triangulations). Surprisingly, the algorithm does not need any preprocessing or complex data structures except some simple representation of the triangulation itself. The predecessor of Jump-and-Walk was due to Lawson (1977) and Green and Sibson (1978), which picks a random starting point S and then walks from S toward the query point Q one triangle at a time. But no theoretical analysis was known for these predecessors until after the mid-1990s.
Jump-and-Walk picks a small group of sample points and starts the walk from the sample point which is the closest to Q, continuing until the simplex containing Q is found. The algorithm was folklore in practice for some time, and the formal presentation of the algorithm and the analysis of its performance on 2D random Delaunay triangulations was done by Devroye, Mucke and Zhu in the mid-1990s (the paper appeared in Algorithmica, 1998). The analysis on 3D random Delaunay triangulations was done by Mucke, Saias and Zhu (ACM Symposium on Computational Geometry, 1996). In both cases, a boundary condition was assumed, namely, Q must be slightly away from the boundary of the convex domain where the vertices of the random Delaunay triangulation are drawn. In 2004, Devroye, Lemaire and Moreau showed that in 2D the boundary condition can be withdrawn (the paper appeared in Computational Geometry: Theory and Applications, 2004).
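A minimal sketch of the idea in Python follows; the triangle/adjacency data layout and the sample-size heuristic are illustrative choices rather than part of the published algorithm, and the orientation test is the standard 2D cross-product sign.

```python
import random

def orient(a, b, c):
    """Twice the signed area of triangle (a, b, c); positive if counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def jump_and_walk(tris, adj, q, samples=None):
    """Return the index of the triangle containing q, or None if q lies outside.

    tris : list of triangles, each a tuple of three (x, y) vertices in CCW order.
    adj  : adj[t][i] is the index of the triangle sharing the edge
           (tris[t][i], tris[t][(i + 1) % 3]) with triangle t, or None on the hull.
    """
    # "Jump": sample a few triangles and start from the one whose first vertex
    # is closest to q (the 2D analysis uses on the order of n**(1/3) samples).
    if samples is None:
        samples = max(1, round(len(tris) ** (1.0 / 3.0)))
    candidates = random.sample(range(len(tris)), min(samples, len(tris)))
    t = min(candidates,
            key=lambda k: (tris[k][0][0] - q[0]) ** 2 + (tris[k][0][1] - q[1]) ** 2)

    # "Walk": repeatedly cross an edge that separates q from the current triangle.
    while True:
        for i in range(3):
            a, b = tris[t][i], tris[t][(i + 1) % 3]
            if orient(a, b, q) < 0:        # q lies strictly on the far side of this edge
                t = adj[t][i]
                if t is None:              # walked off the convex hull
                    return None
                break
        else:
            return t                       # no separating edge left: q is inside triangle t

# Tiny example: a unit square split into two triangles by the diagonal (0,0)-(1,1).
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
tris = [(pts[0], pts[1], pts[2]), (pts[0], pts[2], pts[3])]
adj = [[None, None, 1], [0, None, None]]
print(jump_and_walk(tris, adj, (0.9, 0.5)))   # -> 0
print(jump_and_walk(tris, adj, (0.2, 0.8)))   # -> 1
```

On Delaunay triangulations this straight "cross the separating edge" walk terminates; on arbitrary triangulations a randomized or remembering walk is typically used to avoid cycling.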
Jump-and-Walk has been used in many famous software packages, e.g., QHULL, Triangle and CGAL.
References
.
.
.
.
.
Triangulation (geometry)
Algorithms | Jump-and-Walk algorithm | [
"Mathematics"
] | 374 | [
"Triangulation (geometry)",
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Planar graphs",
"Planes (geometry)"
] |
13,830,874 | https://en.wikipedia.org/wiki/Boris%20Chertok | Boris Yevseyevich Chertok (1 March 1912 – 14 December 2011) was a Russian engineer in the former Soviet space program, mainly working in control systems; he later worked for Roscosmos.
His major responsibility was the computerized control systems of Soviet missiles and rockets, and he authored the four-volume book Rockets and People – the definitive source of information about the history of the Soviet space program.
From 1974, he was the deputy chief designer of the Korolev design bureau, the spacecraft design bureau which he started working for in 1946. He retired in 1992.
Personal life
Born in Łódź (in modern Poland), his family moved to Moscow when he was aged 3. Starting from 1930, he worked as an electrician in a metropolitan suburb. From 1934, he was designing military aircraft in the Bolkhovitinov design bureau. In 1946, he entered the rocket-pioneering NII-88 as the head of the control systems department, working alongside Sergei Korolev, whose deputy he became after OKB-1 spun off from NII-88 in 1956.
He was married to Yekaterina Semyonovna Golubkina. He was an atheist.
Rockets and People
Between 1994 and 1999 Boris Chertok, with support from his wife Yekaterina Golubkina, created the four-volume book series about the history of the Soviet space industry. The series was originally published in Russian, in 1999.
Черток Б.Е. Ракеты и люди — М.: Машиностроение, 1999. (B. Chertok, Rockets and People)
Черток Б.Е. Ракеты и люди. Фили — Подлипки — Тюратам — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. Fili — Podlipki — Tyuratam)
Черток Б.Е. Ракеты и люди. Горячие дни холодной войны — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. Hot Days of the Cold War)
Черток Б.Е. Ракеты и люди. Лунная гонка — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. The Moon Race)
Translation into English
NASA's History Division published four translated and somewhat edited volumes of the series between 2005 and 2011. The series editor was Asif Siddiqi, the author of Challenge to Apollo: The Soviet Union and the Space Race, 1945-1974. Chertok dedicated this series to his wife.
Boris Chertok (author). Rockets and People, Volume 1, 2005. Published by NASA.
Boris Chertok (author). Rockets and People, Volume 2: Creating a Rocket Industry, 2006. Published by NASA.
Boris Chertok (author). Rockets and People, Volume 3: Hot Days of the Cold War, 2009. Published by NASA.
Boris Chertok (author). Rockets and People, Volume 4: The Moon Race, 2011. Published by NASA.
Honours and awards
Hero of Socialist Labour (1961)
Order of Merit for the Fatherland, 4th class (1996)
Two Orders of Lenin (1956, 1961)
Order of the October Revolution (1971)
Order of the Red Banner of Labour (1975)
Order of the Red Star (1945)
Medal "For Merit in Space Exploration" (12 April 2011) - for the great achievements in research, development and utilization of outer space, many years of diligent work, public activities
Gold Medal BN Petrov Academy of Sciences (1992)
Gold Medal named after SP Korolev, RAS (2008)
Lenin Prize (1957) - for participation in creating the first artificial satellites
USSR State Prize (1976) - for participation in the project "Soyuz-Apollo"
International Prize of St Andrew "For Faith and Loyalty" (2010)
Asteroid 6358 Chertok was named after him
Corresponding Member of the USSR Academy of Sciences (1968) of the Department of Mechanics and Control Processes
Member of the Russian Academy of Sciences (2000)
Member of the International Academy of Astronautics (1990)
Honorary Member of Russian Academy of Astronautics
Member of the International Academy of Informatization
Jubilee Medal "In Commemoration of the 100th Anniversary since the Birth of Vladimir Il'ich Lenin"
Medal "For the Defence of Moscow"
Medal "For the Victory over Germany in the Great Patriotic War 1941–1945"
Jubilee Medal "Thirty Years of Victory in the Great Patriotic War 1941-1945"
Jubilee Medal "Forty Years of Victory in the Great Patriotic War 1941-1945"
Medal "For Valiant Labour in the Great Patriotic War 1941-1945"
Jubilee Medal "60 Years of Victory in the Great Patriotic War 1941-1945"
Medal "Veteran of Labour"
Jubilee Medal "50 Years of the Armed Forces of the USSR"
Jubilee Medal "60 Years of the Armed Forces of the USSR"
Medal "In Commemoration of the 800th Anniversary of Moscow"
Medal "In Commemoration of the 850th Anniversary of Moscow"
Medal "In Commemoration of the 1500th Anniversary of Kiev"
Jubilee Medal "300 Years of the Russian Navy"
See also
Institute Rabe
References
Literature
Vladimir Branets, Boris Evseyevich Chertok (to 95th birthday)
"Testing of rocket and space technology - the business of my life". Events and facts - A.I. Ostashev, Korolyov, 2001.
A.I. Ostashev, Sergey Pavlovich Korolyov - The Genius of the 20th Century — 2010 M. of Public Educational Institution of Higher Professional Training MGUL
«A breakthrough in space» - Konstantin Vasilyevich Gerchik, M: LLC "Veles", 1994.
"Look back and look ahead. Notes of a military engineer" - Rjazhsky A. A., 2004, SC. first, the publishing house of the "Heroes of the Fatherland"
"Rocket and space feat Baikonur" - Vladimir Poroshkov, the "Patriot" publishers, 2007.
"Unknown Baikonur" - edited by B. I. Posysaeva, M.: "globe", 2001.
"People duty and honor" – A. A. Shmelev, the second book. M: Editorial Board "Moscow journal", 1998.
"Bank of the Universe" - edited by Boltenko A. C., Kyiv, 2014, publishing house "Phoenix".
"S. P. Korolev. Encyclopedia of life and creativity" - edited by C. A. Lopota, RSC Energia. S. P. Korolev, 2014
"Space science city Korolev" - Author: Posamentir R. D. M: publisher SP Struchenevsky O. V.
"History in faces and destinies" – Author: Posamentir R. D. M: publisher SP Struchenevsky O. V.
"I look back and have no regrets" - Author: Abramov, Anatoly Petrovich: publisher "New format", Barnaul, 2022.
External links
Rockets and People in English, published by NASA History
Boris Chertok. Rockets and People, 2005. NASA.
Boris Chertok. Rockets and People, Volume 2: Creating a Rocket Industry, 2006. NASA.
Boris Chertok. Rockets and People, Volume 3: Hot Days of the Cold War, 2009. NASA.
Boris Chertok. Rockets and People, Volume 4: The Moon Race, 2011. NASA.
Boris Chertok at Astronautix.com
Rockets and People
Boris Chertok family history
1912 births
2011 deaths
Russian atheists
Control theorists
Russian electrical engineers
Russian aerospace engineers
Soviet engineers
20th-century Russian engineers
Early spaceflight scientists
Academic staff of the Moscow Institute of Physics and Technology
Heroes of Socialist Labour
Recipients of the Order of Lenin
Recipients of the Medal "For Merit in Space Exploration"
Recipients of the Lenin Prize
Recipients of the USSR State Prize
Corresponding Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Russian inventors
Rocket scientists
Soviet space program personnel
Soviet spaceflight pioneers
Employees of RSC Energia
Soviet electrical engineers | Boris Chertok | [
"Engineering"
] | 1,795 | [
"Control engineering",
"Control theorists"
] |
4,360,868 | https://en.wikipedia.org/wiki/Common%20Information%20Model%20%28electricity%29 | The Common Information Model (CIM) is an electric power transmission and distribution standard developed by the electric power industry. It aims to allow application software to exchange information about an electrical network. It has been officially adopted by the International Electrotechnical Commission (IEC).
The CIM is currently maintained as a UML model. It defines a common vocabulary and basic ontology. CIM models the network itself using the 'wires model'. It describes the basic components used to transport electricity. Measurements of power are modeled by another class. These measurements support the management of power flow at the transmission level, and by extension, the modeling of power through a revenue meter on the distribution network. The CIM can be used to derive 'design artifacts' (e.g. XML or RDF Schemas) as needed for the integration of related application software.
CIM is also used to derive messages for the wholesale energy market with the framework for energy market communications, IEC 62325. The European style market profile is a profile derivation based on the CIM, intending to harmonize energy market data exchanges in Europe. ENTSO-E is a major contributor to the European style market profile.
The core packages of the CIM are defined in IEC 61970-301, with a focus on the needs of electricity transmission, where related applications include energy management system, SCADA, and planning and optimization. IEC 61970-501 and 61970-452 define an XML format for network model exchanges using RDF. The IEC 61968 series of standards extend CIM to meet the needs of electrical distribution, where related applications include distribution management system, outage management system, planning, metering, work management, geographic information system, asset management, customer information systems and enterprise resource planning.
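Since the network model exchanges defined by IEC 61970-501/452 are plain RDF/XML, they can be inspected with a generic RDF library. The sketch below uses Python's rdflib; the file name is a placeholder, and the CIM namespace URI is an assumption that depends on the CIM version used by the exporting tool.

```python
from rdflib import Graph, Namespace, RDF

# The namespace URI varies with the CIM version; this CIM16-style URI is illustrative.
CIM = Namespace("http://iec.ch/TC57/2013/CIM-schema-cim16#")

g = Graph()
g.parse("network_model.xml", format="xml")   # a CIM RDF/XML export (placeholder file name)

# Every ACLineSegment instance carries a human-readable IdentifiedObject.name.
for line in g.subjects(RDF.type, CIM.ACLineSegment):
    print(line, g.value(line, CIM["IdentifiedObject.name"]))
```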
CIM vs SCL
CIM and Substation Configuration Language (SCL) are developed in parallel under different IEC TC 57 working groups. Though both have the ability to exchange model and configuration information between different equipment or tools and use XML for storage, many differences separate the standards:
CIM is based on UML, using inheritance. SCL representation is sequential or hierarchical.
Although CIM is not limited to modeling equipment, CIM emphasises inheritance and its interconnection, whereas SCL starts from a functional point of view.
CIM is broadly applied to enterprise integration and related information exchanges between systems including EMS, DMS, planning and energy markets, while SCL is limited to the exchange of data within substation equipment and tools.
Harmonization
Applications may use these standards to improve interoperability and data exchange by transforming SCL models into CIM models. Without harmonization, system and application development and implementation require engineering and design that applies to only one implementation. Harmonization can be done by mixing the equipment topological approach of CIM and the functionality approach of SCL. IEC TC 57 WG19 is involved in the harmonization of CIM and SCL. This involves:
Mapping of logical nodes of IEC 61850 (SCL) to equipment defined in CIM.
Use of the Web Ontology Language to define the mapping patterns for the areas in which automatic mapping cannot be performed.
The complete approach should not modify the existing models to a large extent.
See also
CIM Profile
Substation Configuration Language (SCL)
IEC 61970
IEC 61968
IEC 62325
IEC 61850
MultiSpeak
References
External links
CIM Users Group
An IBM Whitepaper | Design a message and service definition integration strategy based on Common Information Model standards
A whitepaper | Utilities Enterprise information management strategies
"Overcoming Challenges Using the CIM as a Semantic Model for Energy Applications"
"A Brief History: The Common Information Model"
Electric power | Common Information Model (electricity) | [
"Physics",
"Engineering"
] | 756 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
4,361,011 | https://en.wikipedia.org/wiki/Kr%C3%B6ger%E2%80%93Vink%20notation | Kröger–Vink notation is a set of conventions that are used to describe electric charges and lattice positions of point defect species in crystals. It is primarily used for ionic crystals and is particularly useful for describing various defect reactions. It was proposed by F. A. Kröger and H. J. Vink.
Notation
The notation follows the scheme:
M_S^C — written with the site S as a subscript and the charge C as a superscript.
M corresponds to the species. These can be
atoms – e.g., Si, Ni, O, Cl
vacancies – V or v (since V is also the symbol for vanadium)
interstitials – i (although this is usually used to describe lattice site, not species)
electrons – e
electron holes – h
S indicates the lattice site that the species occupies. For instance, Ni might occupy a Cu site. In this case, M would be replaced by Ni and S would be replaced by Cu. The site may also be a lattice interstice, in this case, the symbol "i" is used. A cation site can be represented by the symbols C or M (for metal), and an anion site can be represented by either an A or X.
C corresponds to the electronic charge of the species relative to the site that it occupies. The effective charge is calculated as the charge of the species on its current site minus the charge that the site carries in the perfect crystal. To continue the previous example, Ni often has the same valency as Cu, so the relative charge is zero. To indicate a null charge, × is used. A single • indicates a net single positive charge, while two (••) would represent two net positive charges. Finally, a prime (′) signifies a net single negative charge, so two primes (′′) would indicate a net double negative charge.
Examples
Al_Al^× — an aluminum ion sitting on an aluminum lattice site, with a neutral charge.
Ni_Cu^× — a nickel ion sitting on a copper lattice site, with neutral charge.
v_Cl^• — a chlorine vacancy, with single positive charge.
Ca_i^•• — a calcium interstitial ion, with double positive charge.
Cl_i^′ — a chlorine anion on an interstitial site, with single negative charge.
O_i^′′ — an oxygen anion on an interstitial site, with double negative charge.
e^′ — an electron. No site is normally specified.
Procedure
When using Kröger–Vink notation for both intrinsic and extrinsic defects, it is imperative to keep all masses, sites, and charges balanced in each reaction. If any piece is unbalanced, the reactants and the products do not equal the same entity and therefore all quantities are not conserved as they should be. The first step in this process is determining the correct type of defect and reaction that comes along with it; Schottky and Frenkel defects begin with a null reactant (∅) and produce either cation and anion vacancies (Schottky) or cation/anion vacancies and interstitials (Frenkel). Otherwise, a compound is broken down into its respective cation and anion parts for the process to begin on each lattice. From here, depending on the required steps for the desired outcome, several possibilities occur. For example, the defect may result in an ion on its own ion site or a vacancy on the cation site. To complete the reactions, the proper number of each ion must be present (mass balance), an equal number of sites must exist (site balance), and the sums of the charges of the reactants and products must also be equal (charge balance).
Example usage
∅ → v_Ti^′′′′ + 2 v_O^••
Schottky defect formation in TiO2.
∅ → v_Ba^′′ + v_Ti^′′′′ + 3 v_O^••
Schottky defect formation in BaTiO3.
Mg_Mg^× + O_O^× → O_O^× + v_Mg^′′ + Mg_i^••
Frenkel defect formation in MgO.
Mg_Mg^× + O_O^× → v_Mg^′′ + v_O^•• + Mg_Mg^× + O_O^×
Schottky defect formation in MgO.
Basic types of defect reactions
Assume that the cation C has +1 charge and anion A has −1 charge.
Schottky defect – forming a vacancy pair on both anion and cation sites:
∅ → v_C^′ + v_A^• (equivalently v_M^′ + v_X^•)
Schottky defect (charged) – forming an electron–hole pair:
∅ → e^′ + h^•
Frenkel defect – forming an interstitial and vacancy pair on an anion or cation site:
∅ → v_C^′ + C_i^• (equivalently v_M^′ + M_i^•) (cationic Frenkel defect)
∅ → v_A^• + A_i^′ (equivalently v_X^• + X_i^′) (anionic Frenkel defect)
Associates – forming an entropically favored site, usually depending on temperature. For the two equations shown below, the right side is usually at high temperature as this allows for more movement of electrons. The left side is usually at low temperature as the electrons lose their mobility due to loss in kinetic energy.
M_M^× + e^′ → M_M^′ (metal site reduced)
B_M^× → B_M^• + e^′ (metal site oxidized, where B is an arbitrary cation having one more positive charge than the original atom on the site)
Oxidation–reduction tree
The following oxidation–reduction tree for a simple ionic compound, AX, where A is a cation and X is an anion, summarizes the various ways in which intrinsic defects can form. Depending on the cation-to-anion ratio, the species can either be reduced and therefore classified as n-type, or if the converse is true, the ionic species is classified as p-type. Below, the tree is shown for a further explanation of the pathways and results of each breakdown of the substance.
Schematic examples
From the tree above, there are a total of four possible chemical reactions using Kröger–Vink notation, depending on the intrinsic deficiency of atoms within the material. Assume the chemical composition is AX, with A being the cation and X being the anion. (The following assumes that X is a diatomic gas such as oxygen and therefore cation A has a +2 charge. Note that materials with this defect structure are often used in oxygen sensors.)
In the reduced n-type, there are excess cations on the interstitial sites:
A_A^× + X_X^× → A_i^•• + 1/2 X2(g) + 2 e^′
In the reduced n-type, there is a deficiency of anions on the lattice sites:
A(s) → A_A^× + v_X^•• + 2 e^′
In the oxidized p-type, there is cation deficiency on the lattice sites:
1/2 X2(g) → X_X^× + v_A^′′ + 2 h^•
In the oxidized p-type, there are excess anions on interstitial sites:
A_A^× + X_X^× → A(s) + X_i^′′ + 2 h^•
Relating chemical reactions to the equilibrium constant
Using the law of mass action, a defect's concentration can be related to its Gibbs free energy of formation, and the energy terms (enthalpy of formation) can be calculated given the defect concentration or vice versa.
Examples
For a Schottky reaction in MgO, the Kröger–Vink defect reaction can be written as follows:
∅ → v_Mg^′′ + v_O^••
Note that the vacancy on the Mg sublattice site has a −2 effective charge, and the vacancy on the oxygen sublattice site has a +2 effective charge. Using the law of mass action, the reaction equilibrium constant can be written as (square brackets indicating concentration):
K_eq = [v_Mg^′′][v_O^••]
Based on the above reaction, the stoichiometric relation is as follows:
[v_Mg^′′] = [v_O^••]
Also, the equilibrium constant can be related to the Gibbs free energy of formation Δ_f G according to the following relation:
K_eq = exp(−Δ_f G / (k_B T))
Combining the two relations above, we get:
exp(−Δ_f G / (k_B T)) = [v_Mg^′′]^2
Using Δ_f G = Δ_f H − T Δ_f S, the formula can be simplified into the following form, from which the enthalpy of formation can be directly calculated:
[v_Mg^′′] = exp(−Δ_f G / (2 k_B T)) = exp(Δ_f S / (2 k_B)) exp(−Δ_f H / (2 k_B T)) = A exp(−Δ_f H / (2 k_B T)), where A = exp(Δ_f S / (2 k_B)) is a constant containing the entropic term.
Therefore, given a temperature and the formation energy of the Schottky defect, the intrinsic Schottky defect concentration can be calculated from the above equation.
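A minimal numerical sketch of the last relation is given below; the enthalpy and temperature are illustrative values only (Schottky pair formation enthalpies of several eV are typical for oxides such as MgO), and the entropic prefactor A defaults to 1.

```python
import math

K_B = 8.617333262e-5   # Boltzmann constant, eV/K

def schottky_site_fraction(delta_h_ev, temperature_k, delta_s_over_kb=0.0):
    """Intrinsic vacancy site fraction [v] = A * exp(-dH_f / (2 k_B T)),
    with A = exp(dS_f / (2 k_B)) the entropic prefactor."""
    prefactor = math.exp(delta_s_over_kb / 2.0)
    return prefactor * math.exp(-delta_h_ev / (2.0 * K_B * temperature_k))

# Illustrative numbers: a formation enthalpy of ~7.5 eV at 1500 K gives a
# vacancy site fraction on the order of 1e-13.
print(schottky_site_fraction(7.5, 1500))
```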
References
Chemical properties
Notation
Crystallographic defects | Kröger–Vink notation | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,619 | [
"Crystallographic defects",
"Symbols",
"Materials science",
"Crystallography",
"nan",
"Notation",
"Materials degradation"
] |
4,364,523 | https://en.wikipedia.org/wiki/Fission%20products%20%28by%20element%29 | This page discusses each of the main elements in the mixture of fission products produced by nuclear fission of the common nuclear fuels uranium and plutonium. The isotopes are listed by element, in order by atomic number.
Neutron capture by the nuclear fuel in nuclear reactors and atomic bombs also produces actinides and transuranium elements (not listed here). These are found mixed with fission products in spent nuclear fuel and nuclear fallout.
Neutron capture by materials of the nuclear reactor (shielding, cladding, etc.) or the environment (seawater, soil, etc.) produces activation products (not listed here). These are found in used nuclear reactors and nuclear fallout. A small but non-negligible proportion of fission events produces not two, but three fission products (not counting neutrons or subatomic particles). This ternary fission usually produces a very light nucleus such as helium (about 80% of ternary fissions produce an alpha particle) or hydrogen (most of the rest produce tritium or to a lesser extent deuterium and protium) as the third product. This is the main source of tritium from light water reactors. Another source of tritium is Helium-6 which immediately decays to (stable) Lithium-6. Lithium-6 produces tritium when hit by neutrons and is one of the main sources of commercially or militarily produced tritium. If the first or only step of nuclear reprocessing is an aqueous solution (as is the case in PUREX) this poses a problem as tritium contamination cannot be removed from water other than by costly isotope separation. Furthermore, a tiny fraction of the free neutrons involved in the operation of a nuclear reactor decay to a proton and a beta particle before they can interact with anything else. Given that protons from this source are indistinguishable from protons from ternary fission or radiolysis of coolant water, their overall proportion is hard to quantify.
Germanium-72, 73, 74, 76
If germanium-75 is produced, it quickly decays to arsenic-75. Germanium-76 is essentially stable, only decaying via extremely slow double beta decay to selenium-76.
Arsenic-75
While arsenic presents no radiological hazard, it is extremely chemically toxic. If it is desired to get rid of arsenic (no matter its origin), thermal neutron irradiation of the only stable isotope, arsenic-75, will yield short-lived arsenic-76, which quickly decays to stable selenium-76. If arsenic is irradiated with sufficient fast neutrons to cause notable "knockout" (n,2n) or even (n,3n) reactions, isotopes of germanium will be produced instead.
Selenium-77, 78, 79, 80, 82
Se-79, half-life of 327k years, is one of the long-lived fission products. Given the stability of its next lighter and heavier isotopes and the high cross section those isotopes exhibit for various neutron reactions, it is likely that the relatively low yield is due to Se-79 being destroyed in the reactor to an appreciable extent.
Bromine-81
The other stable isotope, bromine-79, is "shadowed" by the long half-life of its more neutron-rich isobar selenium-79.
Krypton-83, 84, 85, 86
Krypton-85, with a half-life of 10.76 years, is formed by the fission process with a fission yield of about 0.3%. Only 20% of the fission products of mass 85 become 85Kr itself; the rest passes through a short-lived nuclear isomer and then to stable 85Rb. If irradiated reactor fuel is reprocessed, this radioactive krypton may be released into the air. This krypton release can be detected and used as a means of detecting clandestine nuclear reprocessing. Strictly speaking, the stage which is detected is the dissolution of used nuclear fuel in nitric acid, as it is at this stage that the krypton and other fission gases like the more abundant xenon are released. Despite the industrial applications of Krypton-85 and the relatively high prices of both Krypton and Xenon, they are not currently extracted from spent fuel to any appreciable extent even though Krypton and Xenon both become solid at the temperature of liquid nitrogen and could thus be captured in a cold trap if the flue gas of a voloxidation process were cooled by liquid nitrogen.
An increase of fission gases above a certain limit can lead to fuel pin swelling and even puncture, so fission gas measurement after the fuel is discharged from the reactor is important for burn-up calculations, for studying the behaviour of the fuel inside the reactor and its interaction with the pin materials, for effective utilization of fuel, and for reactor safety. In addition to that, they are a nuisance in a nuclear reactor due to being neutron poisons, albeit not to the same extent as isotopes of xenon, another noble gas produced by fission.
Rubidium-85, 87
Rubidium-87 has such a long half life as to be essentially stable (longer than the age of the Earth). Rubidium-86 quickly decays to stable Strontium-86 if produced either directly, via (n,2n) reactions in Rubidium-87 or via neutron capture in Rubidium-85.
Strontium-88, 89, 90
The strontium radioisotopes are very important, as strontium is a calcium mimic which is incorporated in bone growth and therefore has a great ability to harm humans. On the other hand, this also allows 89Sr to be used in the open source radiotherapy of bone tumors. This tends to be used in palliative care to reduce the pain due to secondary tumors in the bones.
Strontium-90 is a strong beta emitter with a half-life of 28.8 years. Its fission product yield decreases as the mass of the fissile nuclide increases - fission of produces more than fission of with fission of in the middle. A map of 90Sr contamination around Chernobyl has been published by the IAEA. Due to its very small neutron absorption cross section, Strontium-90 is poorly suited for thermal neutron induced nuclear transmutation as a way of disposing of it.
Strontium-90 has been used in radioisotope thermoelectric generators (RTGs) in the past because of its relatively high power density (0.95 Wthermal/g for the metal, 0.46 Wthermal/g for the commonly used inert perovskite form, strontium titanate) and because it is easily extracted from spent fuel (both native strontium metal and strontium oxide react with water, forming soluble strontium hydroxide). However, the increased availability of renewable energy for off-grid applications formerly served by RTGs, as well as concern about orphan sources, has led to a nigh-total abandonment of 90Sr in RTGs. The few (largely space-based) applications for RTGs that still exist are largely supplied by 238Pu despite its higher cost, as it has a higher power density, a longer half-life and is more easily shielded, since it is an alpha emitter while strontium-90 is a beta emitter.
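The quoted power density can be roughly reproduced from the half-life and the mean energy released per decay. The sketch below is an order-of-magnitude estimate; the mean beta energies used for 90Sr and its 90Y daughter (≈0.20 MeV and ≈0.93 MeV) are approximate literature values, not figures from this article.

```python
import math

AVOGADRO = 6.02214076e23
MEV_TO_J = 1.602176634e-13
YEAR_S = 365.25 * 24 * 3600

def decay_heat_w_per_g(half_life_years, molar_mass_g, mean_energy_mev):
    """Approximate decay heat of a pure radionuclide, in thermal watts per gram."""
    lam = math.log(2) / (half_life_years * YEAR_S)     # decay constant, 1/s
    activity_per_g = lam * AVOGADRO / molar_mass_g     # decays per second per gram
    return activity_per_g * mean_energy_mev * MEV_TO_J

# 90Sr (28.8 a) in secular equilibrium with 90Y deposits roughly 1.1 MeV per decay.
print(round(decay_heat_w_per_g(28.8, 89.9, 1.13), 2))  # roughly 0.9 W per gram of 90Sr
```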
Yttrium-89 to 91
The only stable yttrium isotope, 89Y, will be found with yield somewhat less than 1% in a fission product mixture which has been allowed to age for months or years, as the next-longest lived yttrium isotopes have half-lives of only 107 days (88Y) or 59 days (91Y). However, a small amount of yttrium-90 will be found in secular equilibrium with its parent strontium-90 unless the two elements are separated from each other.
90Sr decays into 90Y which is a beta emitter with a half-life of 2.67 days.
90Y is sometimes used for medical purposes and can be obtained either by the neutron activation of stable 89Y or by using a device similar to a technetium cow.
As the half-lives of the unstable yttrium isotopes are short (88Y being the longest-lived at about 107 days), yttrium extracted from strontium-free, moderately aged spent fuel has negligible radioactivity. However, 90Y, a high-energy beta emitter, will be present as long as its parent nuclide 90Sr is. Should a nonradioactive sample of yttrium be desired, care must be taken to remove all traces of strontium, and sufficient time to let the short-lived 90Y (64-hour half-life) decay must be allowed before the product can be used.
Zirconium-90 to 96
A significant amount of zirconium is formed by the fission process; some of this consists of short-lived radionuclides (95Zr and 97Zr, which decay via niobium to molybdenum), while almost 10% of the fission product mixture after years of decay consists of five stable or nearly stable isotopes of zirconium plus 93Zr, with a half-life of 1.53 million years, which is one of the 7 major long-lived fission products. Zirconium is commonly used in the cladding of fuel rods due to its low neutron cross section. However, a small share of this zirconium does capture neutrons and contributes to the overall inventory of radioactive zirconium isotopes. Zircaloy cladding is not commonly reused and neither is fission product zirconium, which could be used in cladding, as its relatively weak radioactivity would be of no major concern inside a nuclear reactor. Despite its high yield and long life, Zr-93 is generally not deemed to be of major concern as it is not chemically mobile and emits little radiation.
In PUREX plants the zirconium (regardless of source or isotope) sometimes forms a third phase which can be a disturbance in the plant. The third phase is the term in solvent extraction given to a third layer (such as foam and/or emulsion) which forms from the two layers in the solvent extraction process. The zirconium forms the third phase by forming small particles which stabilise the emulsion which is the third phase.
Zirconium-90 mostly forms by successive beta decays out of strontium-90. A nonradioactive zirconium sample can be extracted from spent fuel by extracting strontium-90 and allowing enough of it to decay (e.g. in an RTG). The zirconium can then be separated from the remaining strontium, leaving a very isotopically pure Zr-90 sample.
Niobium-95
Niobium-95, with a half-life of 35 days, is initially present as a fission product. The only stable isotope of niobium has mass number 93, and fission products of mass 93 first decay to long-lived zirconium-93 (half-life 1.53 Ma). Niobium-95 will decay to molybdenum-95 which is stable.
Molybdenum-95, 97, 98, 99, 100
The fission product mixture contains significant amounts of molybdenum. Molybdenum-99 is of enormous interest to nuclear medicine as the parent nuclide of technetium-99m, but its short half-life means it will usually have decayed long before the spent fuel is reprocessed. Molybdenum-99 can be produced both by fission followed by immediate reprocessing (usually only done in small scale research reactors) and in particle accelerators. As molybdenum-100 only decays extremely slowly via double beta decay (half-life longer than the age of the universe), the molybdenum content of spent fuel will be essentially stable after a few days have passed to allow the molybdenum-99 to decay.
Technetium-99
99Tc, half-life 211,000 years, is produced at a yield of about 6% per fission; see also the main fission products page. It is also produced (via the short-lived nuclear isomer technetium-99m) as a decay product of molybdenum-99. Technetium is particularly mobile in the environment as it forms negatively charged pertechnetate ions, and it presents the biggest radiological hazard among the long-lived fission products. Despite being a metal, technetium usually doesn't form positively charged ions, but technetium halides like technetium hexafluoride exist. TcF6 is a nuisance in uranium enrichment as its boiling point (about 55 °C) is very close to that of uranium hexafluoride (which sublimes at about 56 °C). The issue is known to enrichment facilities because spontaneous fission also yields small amounts of technetium (which will be in secular equilibrium with its parent nuclides in natural uranium), but if fluoride volatility is employed for reprocessing, a significant share of the "uranium" fraction of fractional distillation will be contaminated with technetium, requiring a further separation step.
Technetium-99 is suitable for nuclear transmutation by slow neutrons, as it has a sufficient thermal neutron cross section and as technetium has no stable isotopes, so no isotopic separation of the target is needed. Under neutron irradiation, Tc-99 forms Tc-100, which quickly decays to stable ruthenium-100, a valuable platinum group metal.
Ruthenium-101 to 106
Plenty of radioactive ruthenium-103, ruthenium-106, and stable ruthenium are formed by the fission process. The ruthenium in PUREX raffinate can become oxidized to form volatile ruthenium tetroxide which forms a purple vapour above the surface of the aqueous liquor. The ruthenium tetroxide is very similar to osmium tetroxide; the ruthenium compound is a stronger oxidant which enables it to form deposits by reacting with other substances. In this way the ruthenium in a reprocessing plant is very mobile, difficult to stabilize, and can be found in odd places. It has been called extremely troublesome and has a notorious reputation as an especially difficult product to handle during reprocessing. Voloxidation combined with cold trap collection of the flue gases could recover the volatile ruthenium tetroxide before it can become a nuisance in further processing. After the radioactive isotopes have had time to decay, recovered ruthenium could be sold at its relatively high market value.
In addition, the ruthenium in PUREX raffinate forms a large number of nitrosyl complexes, which makes the chemistry of the ruthenium very complex. The ligand exchange at ruthenium and rhodium tends to be slow, hence it can take a long time for a ruthenium or rhodium compound to react.
At Chernobyl, during the fire, the ruthenium became volatile and behaved differently from many of the other metallic fission products. Some of the particles which were emitted by the fire were very rich in ruthenium.
As the longest-lived radioactive isotope, ruthenium-106, has a half-life of only 373.59 days, it has been suggested that the ruthenium and palladium in PUREX raffinate should be used as a source of the metals after allowing the radioactive isotopes to decay. After ten half-lives have passed, more than 99.9% of any radioisotope has decayed. For Ru-106 this is 3,735.9 days, or about 10 years.
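To make the cooling-time arithmetic above concrete, the following minimal Python sketch (an illustration, not part of the source) computes the undecayed fraction of Ru-106 after a given storage period; the only input is the 373.59-day half-life quoted above.

```python
# Minimal sketch: fraction of a radionuclide remaining after a cooling period.
def remaining_fraction(cooling_days: float, half_life_days: float) -> float:
    """Fraction of the original nuclide still undecayed after cooling_days."""
    return 0.5 ** (cooling_days / half_life_days)

ru106_half_life = 373.59  # days, as quoted above

for years in (1, 5, 10):
    frac = remaining_fraction(years * 365.25, ru106_half_life)
    print(f"after {years:2d} years: {frac:.2e} of the Ru-106 remains")
# After about 10 years (roughly ten half-lives) only ~0.1% of the Ru-106 is left.
```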
Rhodium-103, 105
While less rhodium than ruthenium and palladium is formed (around 3.6% yield), the mixture of fission products still contains a significant amount of this metal. Due to the high prices of ruthenium, rhodium, and palladium, some work has been done on the separation of these metals to enable them to be used at a later date. Because of the possibility of the metals being contaminated by radioactive isotopes, they are not suitable for making consumer products such as jewellery. However, this source of the metals could be used for catalysts in industrial plants such as petrochemical plants.
A dire example of people being exposed to radiation from contaminated jewellery occurred in the United States. It is thought that gold seeds used to contain radon were recycled into jewellery. The gold indeed did contain radioactive decay products of 222Rn.
Some other rhodium isotopes occur only as transitory members of decay chains, produced by decaying ruthenium and decaying further towards stable isotopes of palladium. If the low-level radioactivity of palladium (see below) is deemed excessive - for example for use as an investment or in jewellery - either of its predecessors can be extracted from relatively "young" spent fuel and allowed to decay before the stable end-product of the decay series is extracted.
Palladium-105 to 110
Much palladium forms during the fission process. In nuclear reprocessing, not all of the fission palladium dissolves; also some palladium that dissolves at first comes out of solution later. Palladium-rich dissolver fines (particles) are often removed as they interfere with the solvent extraction process by stabilising the third phase.
The fission palladium can separate during the process in which the PUREX raffinate is combined with glass and heated to form the final high level waste form. The palladium forms an alloy with the fission tellurium. This alloy can separate from the glass.
107Pd is the only long-lived radioactive palladium isotope among the fission products, and its beta decay has a long half-life and low energy; this allows industrial use of extracted palladium without isotope separation.
Palladium-109 will most likely have decayed to stable silver-109 by the time reprocessing happens. Before reaching silver-109, the nuclear isomer silver-109m is reached. However, unlike technetium-99m, silver-109m has no current use.
Silver-109, 111
While the radioactive silver isotopes that are produced quickly decay away leaving only stable silver, extracting it for use is not economical, unless as byproduct of platinum group metal extraction.
Cadmium-111 to 116
Cadmium is a strong neutron poison and in fact control rods are often made out of cadmium, making the accumulation of cadmium in fuel of particular concern for the maintenance of stable neutron economy. Cadmium is also a chemically poisonous heavy metal, but given the number of neutron absorptions required for transmutation, it is not a high priority target for deliberate transmutation.
Indium-115
While indium-115 is very slightly radioactive, its half-life is longer than the age of the universe, and indeed a typical sample of indium on Earth will contain more of this "unstable" isotope than of "stable" indium-113.
Tin-117 to 126
In a normal thermal reactor, tin-121m has a very low fission product yield; thus, this isotope is not a significant contributor to nuclear waste. Fast fission or fission of some heavier actinides will produce 121mSn at higher yields. For example, its yield from U-235 is 0.0007% per thermal fission and 0.002% per fast fission.
Antimony-121, 123, 124, 125
Antimony-125 decays with a half-life of over two years to tellurium-125m, which itself decays with a half-life of almost two months via isomeric transition to the ground state tellurium-125. While its relatively short half-life and the significant gamma emissions (144.77 keV) of its daughter nuclide make usage in an RTG less attractive, Sb-125 could deliver a relatively high power density of 3.4 Wthermal/g.
Fluoride volatility can recover antimony as the mildly volatile (solid at room temperature) antimony trifluoride or as the more volatile antimony pentafluoride.
Tellurium-125 to 132
Tellurium-128 and -130 are essentially stable. They only decay by double beta decay, with half-lives greater than 10^20 years. They constitute the major fraction of naturally occurring tellurium, at 32% and 34% respectively.
Tellurium-132 and its daughter 132I are important in the first few days after a criticality. It was responsible for a large fraction of the dose inflicted on workers at Chernobyl in the first week.
The isobar forming 132Te/132I is: Tin-132 (half-life 40 s) decaying to antimony-132 (half-life 2.8 minutes) decaying to tellurium-132 (half-life 3.2 days) decaying to iodine-132 (half-life 2.3 hours) which decays to stable xenon-132.
The creation of tellurium-126 is delayed by the long half-life (230 k years) of tin-126.
Iodine-127, 129, 131
131I, with a half-life of 8 days, is a hazard from nuclear fallout because iodine concentrates in the thyroid gland. See also Radiation effects from Fukushima Daiichi nuclear disaster#Iodine-131 and Downwinders#Nevada.
In common with 89Sr, 131I is used for the treatment of cancer. A small dose of 131I can be used in a thyroid function test while a large dose can be used to destroy the thyroid cancer. This treatment will also normally seek out and destroy any secondary tumor which arose from a thyroid cancer. Much of the energy from the beta emission from the 131I will be absorbed in the thyroid, while the gamma rays are likely to be able to escape from the thyroid to irradiate other parts of the body.
Large amounts of 131I were released during an experiment named the Green Run, in which fuel that had only been allowed to cool for a short time after irradiation was reprocessed in a plant which had no iodine scrubber in operation.
129I, with a half-life almost a billion times as long, is a long-lived fission product. It is among the most troublesome because iodine accumulates in a relatively small organ (the thyroid), where even its comparatively low radiation dose can cause great damage, as it has a long biological half-life. For this reason, iodine is often considered for transmutation despite the presence of stable 127I in spent fuel. In the thermal neutron spectrum, more iodine-129 is destroyed than newly created, since iodine-128 is short-lived and the isotope ratio is in favor of 129I. Depending on the design of the transmutation apparatus, care must be taken as xenon, the product of iodine's beta decay, is both a strong neutron poison and a gas that is nigh impossible to chemically "fix" in solid compounds, so it will either escape to the outside air or put pressure on the vessel containing the transmutation target.
127I, the only stable isotope of iodine, is nonradioactive. It makes up only a minority of the iodine in spent fuel; most of it is 129I.
Xenon-131 to 136
In reactor fuel, the fission product xenon tends to migrate to form bubbles in the fuel. As caesium 133, 135, and 137 are formed by the beta particle decay of the corresponding xenon isotopes, this causes the caesium to become physically separated from the bulk of the uranium oxide fuel.
Because 135Xe is a potent nuclear poison with the largest cross section for thermal neutron absorption, the buildup of 135Xe in the fuel inside a power reactor can lower the reactivity greatly. If a power reactor is shut down or left running at a low power level, then large amounts of 135Xe can build up through decay of 135I. When the reactor is restarted or the low power level is increased significantly, 135Xe will be quickly consumed through neutron capture reactions and the reactivity of the core will increase. Under some circumstances, control systems may not be able to respond quickly enough to manage an abrupt reactivity increase as the built-up 135Xe burns off. It is thought that xenon poisoning was one of the factors which led to the power surge which damaged the Chernobyl reactor core.
Caesium-133, 134, 135, 137
Caesium-134 is found in spent nuclear fuel but is not produced by nuclear weapon explosions, as it is only formed by neutron capture on stable Cs-133, which itself is only produced by beta decay of Xe-133, with a half-life of about 5 days. Cs-134 has a half-life of 2 years and may be a major source of gamma radiation in the first 20 years after discharge.
Caesium-135 is a long-lived fission product with much weaker radioactivity. Neutron capture inside the reactor transmutes much of the xenon-135 that would otherwise decay to Cs-135.
Caesium-137, with a half-life of 30 years, is the main medium-lived fission product, along with Sr-90.
Cs-137 is the primary source of penetrating gamma radiation from spent fuel from 10 years to about 300 years after discharge.
It is the most significant radioisotope left in the area around Chernobyl.
Barium-138, 139, 140
Barium is formed in large amounts by the fission process. A short-lived barium isotope was confused with radium by some early workers. They were bombarding uranium with neutrons in an attempt to form a new element. But instead they caused fission, which generated a large amount of radioactivity in the target. Because the chemistry of barium and radium is very similar, the two elements could be co-separated, for instance by precipitation with sulfate anions. Because of this similarity of their chemistry, the early workers thought that the very radioactive fraction which was separated into the "radium" fraction contained a new isotope of radium. Some of this early work was done by Otto Hahn and Fritz Strassmann.
Lanthanides (lanthanum-139, cerium-140 to 144, neodymium-142 to 146, 148, 150, promethium-147, and samarium-149, 151, 152, 154)
Large amounts of the lighter lanthanides (lanthanum, cerium, neodymium, and samarium) are formed as fission products. In Africa, at Oklo, where a natural nuclear fission reactor operated over a billion years ago, the isotopic mixture of neodymium is not the same as 'normal' neodymium; it has an isotope pattern very similar to the neodymium formed by fission.
In the aftermath of criticality accidents, the level of 140La is often used to determine the fission yield (in terms of the number of nuclei which underwent fission).
Samarium-149 is the second most important neutron poison in nuclear reactor physics. Samarium-151, produced at lower yields, is the third most abundant medium-lived fission product but emits only weak beta radiation. Both have high neutron absorption cross sections, so that much of what is produced in a reactor is later destroyed there by neutron absorption.
Lanthanides are a problem in nuclear reprocessing because they are chemically very similar to actinides and most reprocessing aims at separating some or all of the actinides from the fission products or at least the neutron poisons among them.
External links
The Live Chart of Nuclides – IAEA Color-map of fission product yields, and detailed data by click on a nuclide.
Periodic Table with isotope decay chain displays. Click on element, and then isotope mass number to see the decay chain (link to uranium 235).
References
Inorganic chemistry
Nuclear chemistry
Nuclear physics
Nuclear technology | Fission products (by element) | [
"Physics",
"Chemistry"
] | 5,720 | [
"Nuclear fission",
"Nuclear chemistry",
"Nuclear technology",
"Fission products",
"Nuclear fallout",
"nan",
"Nuclear physics"
] |
4,365,725 | https://en.wikipedia.org/wiki/National%20Compact%20Stellarator%20Experiment | The National Compact Stellarator Experiment, NCSX in short, was a magnetic fusion energy experiment based on the stellarator design being constructed at the Princeton Plasma Physics Laboratory (PPPL).
NCSX was one of a number of new stellarator designs from the 1990s that arose after studies illustrated new geometries that offered better performance than the simpler machines of the 1950s and 1960s. Compared to the more common tokamak, these were much more difficult to design and build, but produced far more stable plasma, the main problem with successful fusion.
The design proved to be too difficult to build, repeatedly running over its budget and timelines. The project was eventually cancelled on 22 May 2008, having spent over $70 M.
Wendelstein 7-X explores many of the same concepts that NCSX intended to.
History
Early stellarators
Stellarators are one of the first fusion power concepts, originally designed by Princeton astrophysicist Lyman Spitzer in 1952 while riding the chairlifts at Aspen. Spitzer, considering the motion of plasmas in the stars, realized that any simple arrangements of magnets would not confine a plasma inside a machine - the plasma would drift across the fields and eventually strike the vessel. His solution was simple; by bending the machine through a 180 degree twist, forming a figure-eight instead of a donut, the plasma would alternately find itself on the inside or outside of the vessel, drifting in opposite directions. The cancellation of net drift would not be perfect, but on paper, it appeared that the delay in drift rates was more than enough to allow the plasma to reach fusion conditions.
In practice, this proved not to be. A problem seen in all fusion reactor designs of the era was that the plasma ions were drifting much faster than classical theory predicted, hundreds to thousands of times faster. Designs that suggested stability on the order of seconds turned into machines that were stable for microseconds at best. By the mid-1960s the entire fusion energy field appeared stalled. It was only the 1968 introduction of the tokamak design that rescued the field; Soviet machines were performing at least an order of magnitude better than western designs, although still far short of practical values. The improvement was so dramatic that work on other designs largely ended as teams around the world began to study the tokamak approach. This included the latest stellarator designs; the Model C had only recently started operations, and was rapidly converted into the Symmetric Tokamak.
By the late 1980s it was clear that while the tokamak was a great step forward, it also introduced new problems. In particular, the plasma current the tokamak used for stabilization and heating was itself a source of instabilities as the current grew. Much of the subsequent 30 years of tokamak development has focused on ways to increase this current to the levels required to sustain useful fusion while ensuring that same current does not cause the plasma to break up.
Compact stellarators
As the magnitude of the problem with the tokamak became evident, fusion teams around the world began to take a fresh look at other design concepts. Among a number of ideas noted during this process, the stellarator in particular appeared to have a number of potential changes that would greatly improve its performance.
The basic idea of the stellarator was to use the layout of the magnets to cancel out ion drift, but the simple designs of the 1950s did not do this to the degree needed. A greater problem was posed by the instabilities and collisional effects that greatly increased the diffusion rates. In the 1980s it was noted that one way to improve tokamak performance was to use non-circular cross-sections for the plasma confinement area; ions moving in these non-uniform areas would mix and break up the formation of large-scale instabilities. Applying the same logic to the stellarator appeared to offer the same advantages. Yet, as the stellarator lacked, or lowered, the plasma current, the plasma would be more stable from the start.
When one considers the magnet layout needed to achieve both goals, a twisted path around the circumference of the device as well as many smaller twists and mixes along the way, the design becomes extremely complex, well beyond the abilities of conventional design tools. It was only through the use of massively parallel computers that the designs could be studied in depth, and suitable magnet designs created. The result was a very compact device, significantly smaller outside than a classical design for any given volume of plasma, with a low aspect ratio. Lower aspect ratios are highly desirable, because they allow a machine of any given power to be smaller, which lowers construction costs.
By the late 1990s the studies into new stellarator designs had reached a suitable point for the construction of a machine using these concepts. In comparison to the stellarators of the 1960s, the new machines could use superconducting magnets for much higher field strengths, be only slightly larger than the Model C yet have far larger plasma volume, and have a plasma area inside that varied from circular to planar and back while twisting several times.
NCSX design
Plasma details
Major radius: 1.4 m; aspect ratio: 4.4
Magnetic field: 1.2 T - 1.7 T (up to 2 T on axis for 0.2 s)
Quasi-axisymmetric field, 3 field periods in all. Aims for beta > 0.04.
Magnet coils
18 modular coils (6 each of types A, B, C) of wound copper wire, cooled with liquid nitrogen (LN2),
18 toroidal coils, solid copper cooled with LN2,
6 pairs of poloidal field coils, solid copper cooled with LN2,
48 trim coils.
The 18 modular coils have a complicated 3D shape, with roughly 9 different curves in different planes. Some of the coils would need 15 minutes to re-cool between high-I²t plasma runs.
Plasma heating
Because the stellarator lacks the tokamak's plasma current as a form of heating, heating the plasma is accomplished with external devices. Up to 12 MW of external heating power would have been available to the NCSX chamber, consisting of 6 MW from tangential neutral beam injection and 6 MW from radio-frequency (RF) heating (essentially a microwave oven). Up to 3 MW of electron cyclotron heating would also have been available in future iterations of the design.
Baseline total project cost of $102M for completion date of July 2009.
First contracts placed in 2004.
NCSX construction
With the design largely complete, the PPPL began the process of building such a machine, the NCSX, which would test all of these concepts. The design used eighteen complicated hand-wound magnets, which then had to be assembled into a machine in which the deviation from perfect placement had to be held within an extremely tight tolerance across the entire device. The vacuum vessel surrounding all of this was likewise very complex, with the added complication of carrying all of the wiring to feed power to the magnets.
The assembly tolerances were very tight and required state of the art use of metrology systems including Laser Tracker and photogrammetry equipment. $50 million of additional funding was needed, spread over the next 3 years, to complete the assembly within tolerance requirements. Components for the Stellarator were measured with 3d laser scanning, and inspected to design models at multiple stages in the manufacturing process.
The required tolerances could not be achieved: as the modules were assembled, parts were found to be in contact or to sag once installed, and other unexpected effects made alignment very difficult. Fixes were worked into the design, but each one further delayed the completion and required more funding. (The 2008 cost estimate was $170M with an August 2013 scheduled completion.) Eventually a go/no-go condition was imposed, and when the goal was not met on budget, the project was cancelled.
Legacy
Due to its cancellation in 2008, the project has been cited as a case study of the hypothetical demon of Bureaucratic Chaos, which "blocks good things from happening" at the United States Department of Energy. Its fate is reminiscent of other Department of Energy projects, such as the Mirror Fusion Test Facility, which was constructed but never used, and the Superconducting Super Collider, which cost $2 billion prior to its cancellation.
See also
Wendelstein 7-X
Helically Symmetric Experiment
References
External links
NCSX homepage
Progress in NCSX Construction Reiersen. 2007
Engineering Analysis & Design Confirmation Overview — P. Heitzenroeder. Oct 2008 analyses forces and strains on structure and modular coils
Modular Coil Manufacturing — J. Chrzanowski . Oct 2008 Copper in liquid nitrogen
Conventional Coils — M. Kalish Oct 2008 Toroidal and poloidal and trim coils.
Stellarators
Plasma physics facilities
Princeton Plasma Physics Laboratory | National Compact Stellarator Experiment | [
"Physics"
] | 1,799 | [
"Plasma physics facilities",
"Plasma physics"
] |
3,203,851 | https://en.wikipedia.org/wiki/Bragg%20plane | In physics, a Bragg plane is a plane in reciprocal space which bisects a reciprocal lattice vector, , at right angles. The Bragg plane is defined as part of the Von Laue condition for diffraction peaks in x-ray diffraction crystallography.
Considering the adjacent diagram, the arriving x-ray plane wave is defined by:
e^(i k·r)
where k is the incident wave vector, given by:
k = (2π/λ) n̂
where λ is the wavelength of the incident photon. While the Bragg formulation assumes a unique choice of direct lattice planes and specular reflection of the incident X-rays, the Von Laue formula only assumes monochromatic light and that each scattering center acts as a source of secondary wavelets as described by the Huygens principle. Each scattered wave contributes to a new plane wave given by:
e^(i k′·r)
The condition for constructive interference in the n̂′ direction is that the path difference between the photons is an integer multiple (m) of their wavelength. We know then that for constructive interference we have:
|d| cos θ + |d| cos θ′ = d·(n̂ − n̂′) = mλ
where k′ = (2π/λ) n̂′. Multiplying the above by 2π/λ we formulate the condition in terms of the wave vectors, k and k′:
d·(k − k′) = 2πm
Now consider that a crystal is an array of scattering centres, each at a point in the Bravais lattice. We can set one of the scattering centres as the origin of an array. Since the lattice points are displaced by the Bravais lattice vectors, R, scattered waves interfere constructively when the above condition holds simultaneously for all values of d which are Bravais lattice vectors; the condition then becomes:
R·(k − k′) = 2πm
An equivalent statement (see mathematical description of the reciprocal lattice) is to say that:
e^(i (k − k′)·R) = 1
By comparing this equation with the definition of a reciprocal lattice vector, we see that constructive interference occurs if K = k − k′ is a vector of the reciprocal lattice. We notice that k and k′ have the same magnitude, and we can restate the Von Laue formulation as requiring that the tip of the incident wave vector, k, must lie in the plane that is a perpendicular bisector of the reciprocal lattice vector, K. This reciprocal space plane is the Bragg plane.
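As a numerical illustration (an assumed example, not part of the source), the short NumPy sketch below checks that any wave vector whose tip lies on the Bragg plane of a chosen reciprocal lattice vector K satisfies the elastic condition |k| = |k − K| and the equivalent relation k·K = |K|²/2; the specific vectors used are arbitrary.

```python
import numpy as np

K = np.array([2.0, 1.0, 0.0])      # an arbitrary reciprocal lattice vector (assumed)
K_hat = K / np.linalg.norm(K)

# Construct a wave vector whose tip lies on the Bragg plane: take the component
# K/2 along K plus an arbitrary component perpendicular to K.
perp = np.array([-1.0, 2.0, 3.0])
perp = perp - perp.dot(K_hat) * K_hat   # strip the part parallel to K
k_in = 0.5 * K + perp                   # tip lies on the perpendicular bisector of K

k_out = k_in - K                        # scattered wave vector for this K

print(np.linalg.norm(k_in), np.linalg.norm(k_out))   # equal magnitudes (elastic scattering)
print(np.isclose(k_in.dot(K), 0.5 * K.dot(K)))       # k·K = |K|^2 / 2 on the Bragg plane
```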
See also
X-ray crystallography
Reciprocal lattice
Bravais lattice
Powder diffraction
Kikuchi line
Brillouin zone
References
Crystallography
Planes (geometry)
Fourier analysis
Lattice points
Diffraction | Bragg plane | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 443 | [
"Spectrum (physical sciences)",
"Lattice points",
"Mathematical objects",
"Infinity",
"Materials science",
"Crystallography",
"Diffraction",
"Condensed matter physics",
"Planes (geometry)",
"Spectroscopy",
"Number theory"
] |
3,205,518 | https://en.wikipedia.org/wiki/ASTOS | ASTOS is a tool dedicated to mission analysis, Trajectory optimization, vehicle design and simulation for space scenarios, i.e. launch, re-entry missions, orbit transfers, Earth observation, navigation, coverage and re-entry safety assessments. It solves Aerospace problems with a data driven interface and automatic initial guesses. Since 1989, with the support of the European Space Agency, it has developed, and improved this trajectory optimization environment to compute optimal trajectories for a variety of complex multi-phase Optimal control problems. ASTOS is being extensively used at ESA and aerospace industry community to calculate mission analysis, optimal launch and entry trajectories and was one of the tools used by ESA to assess the risk due to the ATV 'Jules Verne' re-entry. ASTOS is compatible with Windows and Linux platforms and is maintained and commercialized by Astos Solutions GmbH.
History
The development of ASTOS (formerly named ALTOS) started in 1989 at the DLR in Oberpfaffenhofen and MBB (now Astrium).
In 1991 the Institute of Flight Mechanics and Control (IFR) at the University of Stuttgart under the head of Prof. Klaus Well took the responsibility for the development of ASTOS. In 1999 the commercialization of ASTOS began. In the period 2001-2006 ASTOS was sold by Technology Transfer Initiative of the University of Stuttgart (TTI). Since September 2006, the newly founded company Astos Solutions GmbH is responsible for development and sales of ASTOS.
Projects
ASTOS has been used extensively by aerospace agencies and industry since 1998; the following non-exhaustive list gives projects in which the software was involved during the design or execution of the mission.
Performance map of conventional launch vehicle: Ariane 6, Ariane 5, Vega, Soyuz from Guiana Space Centre, several FLPP concepts, VLM
Feasibility study of reusable launch vehicle: Hopper, Skylon, SpaceLiner, Fast 20XX ALPHA.
Earth Atmospheric re-entry: X-38, Atmospheric Reentry Demonstrator, IXV, EXPERT.
Safety aspect related to the ATV re-entry.
Planetary re-entry: Beagle 2, ExoMars, Huygens.
Orbit transfer and Space rendezvous: ConeXpress, DLR DEOS, OHB SE Electra.
Sounding rocket: SHEFEX II and III, Maser11.
Mission Analysis: STE-QUEST,
See also
Trajectory optimization
General Mission Analysis Tool
External links
Astos Solutions website
Website of the Institute of Flight Mechanics and Control, University of Stuttgart
Astronomy software
Mathematical software
Physics software
Mathematical optimization software | ASTOS | [
"Physics",
"Astronomy",
"Mathematics"
] | 528 | [
"Works about astronomy",
"Physics software",
"Computational physics",
"Astronomy software",
"Mathematical software"
] |
3,205,596 | https://en.wikipedia.org/wiki/Neutron%20poison | In applications such as nuclear reactors, a neutron poison (also called a neutron absorber or a nuclear poison) is a substance with a large neutron absorption cross-section. In such applications, absorbing neutrons is normally an undesirable effect. However, neutron-absorbing materials, also called poisons, are intentionally inserted into some types of reactors in order to lower the high reactivity of their initial fresh fuel load. Some of these poisons deplete as they absorb neutrons during reactor operation, while others remain relatively constant.
The capture of neutrons by short half-life fission products is known as reactor poisoning; neutron capture by long-lived or stable fission products is called reactor slagging.
Transient fission product poisons
Some of the fission products generated during nuclear reactions have a high neutron absorption capacity, such as xenon-135 (microscopic cross-section σ = 2,000,000 barns (b); up to 3 million barns in reactor conditions) and samarium-149 (σ = 74,500 b). Because these two fission product poisons remove neutrons from the reactor, they will affect the thermal utilization factor and thus the reactivity. The poisoning of a reactor core by these fission products may become so serious that the chain reaction comes to a standstill.
Xenon-135 in particular tremendously affects the operation of a nuclear reactor because it is the most powerful known neutron poison. The inability of a reactor to be restarted due to the buildup of xenon-135 (reaches a maximum after about 10 hours) is sometimes referred to as xenon precluded start-up. The period of time in which the reactor is unable to override the effects of xenon-135 is called the xenon dead time or poison outage. During periods of steady state operation, at a constant neutron flux level, the xenon-135 concentration builds up to its equilibrium value for that reactor power in about 40 to 50 hours. When the reactor power is increased, xenon-135 concentration initially decreases because the burn up is increased at the new, higher power level. Thus, the dynamics of xenon poisoning are important for the stability of the flux pattern and geometrical power distribution, especially in physically large reactors.
Because 95% of the xenon-135 production is from iodine-135 decay, which has a 6- to 7-hour half-life, the production of xenon-135 remains constant; at this point, the xenon-135 concentration reaches a minimum. The concentration then increases to the equilibrium for the new power level in the same time, roughly 40 to 50 hours. The magnitude and the rate of change of concentration during the initial 4 to 6 hour period following the power change is dependent upon the initial power level and on the amount of change in power level; the xenon-135 concentration change is greater for a larger change in power level. When reactor power is decreased, the process is reversed.
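The post-shutdown behaviour described above can be sketched numerically. The short Python script below (illustrative only; the half-lives are standard values and the starting inventories are arbitrary, assumed numbers) integrates the simple balance in which iodine-135 only decays while xenon-135 is fed by that decay and removed by its own decay, showing the characteristic xenon peak some hours after shutdown.

```python
import math

# Assumed standard half-lives (hours); starting inventories are arbitrary relative units.
HALF_LIFE_I135 = 6.6
HALF_LIFE_XE135 = 9.1
lam_i = math.log(2) / HALF_LIFE_I135
lam_x = math.log(2) / HALF_LIFE_XE135

I0, Xe0 = 1.2, 0.4      # pre-shutdown inventories (assumed); real values depend on the flux
I, Xe = I0, Xe0
dt = 0.05               # hours per integration step
peak, t_peak = Xe, 0.0

for step in range(int(72 / dt)):            # follow 72 hours after shutdown
    dI = -lam_i * I * dt                    # I-135 is no longer produced, only decays
    dXe = (lam_i * I - lam_x * Xe) * dt     # Xe-135 fed by I-135 decay, lost by its own decay
    I, Xe = I + dI, Xe + dXe                # (neutron-capture removal is zero after shutdown)
    t = (step + 1) * dt
    if Xe > peak:
        peak, t_peak = Xe, t

print(f"Xe-135 peaks about {t_peak:.1f} h after shutdown, at {peak / Xe0:.2f}x its operating level")
```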
Because samarium-149 is not radioactive and is not removed by decay, it presents problems somewhat different from those encountered with xenon-135. The equilibrium concentration (and thus the poisoning effect) builds to an equilibrium value during reactor operation in about 500 hours (about three weeks), and since samarium-149 is stable, the concentration remains essentially constant during reactor operation. Another problematic isotope that builds up is gadolinium-157, with microscopic cross-section of σ = 200,000 b.
Accumulating fission product poisons
There are numerous other fission products that, as a result of their concentration and thermal neutron absorption cross section, have a poisoning effect on reactor operation. Individually, they are of little consequence, but taken together they have a significant effect. These are often characterized as lumped fission product poisons and accumulate at an average rate of 50 barns per fission event in the reactor. The buildup of fission product poisons in the fuel eventually leads to loss of efficiency, and in some cases to instability. In practice, buildup of reactor poisons in nuclear fuel is what determines the lifetime of nuclear fuel in a reactor: long before all possible fissions have taken place, buildup of long-lived neutron-absorbing fission products damps out the chain reaction. This is the reason that nuclear reprocessing is a useful activity: solid spent nuclear fuel contains about 97% of the original fissionable material present in newly manufactured nuclear fuel. Chemical separation of the fission products restores the fuel so that it can be used again.
Other potential approaches to fission product removal include solid but porous fuel which allows escape of fission products and liquid or gaseous fuel (molten salt reactor, aqueous homogeneous reactor). These ease the problem of fission product accumulation in the fuel, but pose the additional problem of safely removing and storing the fission products. Some fission products are themselves stable or quickly decay to stable nuclides. Of the (roughly half a dozen each) medium lived and long-lived fission products, some, like , are proposed for nuclear transmutation precisely because of their non-negligible capture cross section.
Other fission products with relatively high absorption cross sections include 83Kr, 95Mo, 143Nd, 147Pm. Above this mass, even many even-mass number isotopes have large absorption cross sections, allowing one nucleus to serially absorb multiple neutrons.
Fission of heavier actinides produces more of the heavier fission products in the lanthanide range, so the total neutron absorption cross section of fission products is higher.
In a fast reactor the fission product poison situation may differ significantly because neutron absorption cross sections can differ for thermal neutrons and fast neutrons. In the RBEC-M Lead-Bismuth Cooled Fast Reactor, the fission products with neutron capture more than 5% of total fission products capture are, in order, 133Cs, 101Ru, 103Rh, 99Tc, 105Pd and 107Pd in the core, with 149Sm replacing 107Pd for 6th place in the breeding blanket.
Decay poisons
In addition to fission product poisons, other materials in the reactor decay to materials that act as neutron poisons. An example of this is the decay of tritium to helium-3. Since tritium has a half-life of 12.3 years, normally this decay does not significantly affect reactor operations because the rate of decay of tritium is so slow. However, if tritium is produced in a reactor and then allowed to remain in the reactor during a prolonged shutdown of several months, a sufficient amount of tritium may decay to helium-3 to add a significant amount of negative reactivity. Any helium-3 produced in the reactor during a shutdown period will be removed during subsequent operation by a neutron-proton reaction. Pressurized heavy water reactors will produce small but notable amounts of tritium through neutron capture in the heavy water moderator, which will likewise decay to helium-3. Given the high market value of both tritium and helium-3, tritium is periodically removed from the moderator/coolant of some CANDU reactors and sold at a profit. Water boration (the addition of boric acid to the moderator/coolant) which is commonly employed in pressurized light water reactors also produces non-negligible amounts of tritium via the successive reactions (n, α) and (n,α n) or (in the presence of fast neutrons) (n,2n) and subsequently (n,α). Fast neutrons also produce tritium directly from boron via (n,2α). All nuclear fission reactors produce a certain quantity of tritium via ternary fission.
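As a rough back-of-the-envelope check (not from the source), the fraction of a tritium inventory converted to helium-3 during a shutdown of a few months follows directly from the 12.3-year half-life quoted above:

```python
half_life_years = 12.3   # tritium half-life, as quoted above

for shutdown_months in (1, 6, 12):
    t_years = shutdown_months / 12.0
    decayed = 1.0 - 0.5 ** (t_years / half_life_years)
    print(f"{shutdown_months:2d} months: {decayed * 100:.2f}% of the tritium has become He-3")
# A six-month outage converts roughly 3% of the tritium inventory into the
# neutron poison He-3, which is then burned off again once the reactor restarts.
```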
Control poisons
During operation of a reactor the amount of fuel contained in the core decreases monotonically. If the reactor is to operate for a long period of time, fuel in excess of that needed for exact criticality must be added when the reactor is fueled. The positive reactivity due to the excess fuel must be balanced with negative reactivity from neutron-absorbing material. Movable control rods containing neutron-absorbing material is one method, but control rods alone to balance the excess reactivity may be impractical for a particular core design as there may be insufficient room for the rods or their mechanisms, namely in submarines, where space is particularly at a premium.
Burnable poisons
To control large amounts of excess fuel reactivity without control rods, burnable poisons are loaded into the core. Burnable poisons are materials that have a high neutron absorption cross section that are converted into materials of relatively low absorption cross section as the result of neutron absorption. Due to the burn-up of the poison material, the negative reactivity of the burnable poison decreases over core life. Ideally, these poisons should decrease their negative reactivity at the same rate that the fuel's excess positive reactivity is depleted.
Fixed burnable poisons are generally used in the form of compounds of boron or gadolinium that are shaped into separate lattice pins or plates, or introduced as additives to the fuel. Since they can usually be distributed more uniformly than control rods, these poisons are less disruptive to the core's power distribution. Fixed burnable poisons may also be discretely loaded in specific locations in the core in order to shape or control flux profiles to prevent excessive flux and power peaking near certain regions of the reactor. Current practice however is to use fixed non-burnable poisons in this service.
Non-burnable poison
A non-burnable poison is one that maintains a constant negative reactivity worth over the life of the core. While no neutron poison is strictly non-burnable, certain materials can be treated as non-burnable poisons under certain conditions. One example is hafnium. It has five stable isotopes, 176Hf through 180Hf, which can all absorb neutrons, so the first four are chemically unchanged by absorbing a neutron. (A final absorption produces 181Hf, which beta-decays to 181Ta.) This absorption chain results in a long-lived burnable poison which approximates non-burnable characteristics.
Soluble poisons
Soluble poisons, also called chemical shim, produce a spatially uniform neutron absorption when dissolved in the water coolant. The most common soluble poison in commercial pressurized water reactors (PWR) is boric acid, which is often referred to as soluble boron. The boric acid in the coolant decreases the thermal utilization factor, causing a decrease in reactivity. By varying the concentration of boric acid in the coolant, a process referred to as boration and dilution, the reactivity of the core can be easily varied. If the boron concentration is increased (boration), the coolant/moderator absorbs more neutrons, adding negative reactivity. If the boron concentration is reduced (dilution), positive reactivity is added. The changing of boron concentration in a PWR is a slow process and is used primarily to compensate for fuel burnout or poison buildup.
The variation in boron concentration allows control rod use to be minimized, which results in a flatter flux profile over the core than can be produced by rod insertion. The flatter flux profile occurs because there are no regions of depressed flux like those that would be produced in the vicinity of inserted control rods. This system is not in widespread use because the chemicals make the moderator temperature reactivity coefficient less negative. All commercial PWR types operating in the US (Westinghouse, Combustion Engineering, and Babcock & Wilcox) employ soluble boron to control excess reactivity. US Navy reactors and Boiling Water Reactors do not. One known issue of boric acid is that it increases corrosion risks, as illustrated in a 2002 incident at Davis-Besse Nuclear Power Station.
Soluble poisons are also used in emergency shutdown systems. During SCRAM the operators can inject solutions containing neutron poisons directly into the reactor coolant. Various aqueous solutions, including borax and gadolinium nitrate (Gd(NO3)3·H2O), are used.
References
Bibliography
Nuclear technology
Nuclear reactor safety | Neutron poison | [
"Physics"
] | 2,469 | [
"Nuclear technology",
"Nuclear physics"
] |
3,206,099 | https://en.wikipedia.org/wiki/Biological%20half-life | Biological half-life (elimination half-life, pharmacological half-life) is the time taken for concentration of a biological substance (such as a medication) to decrease from its maximum concentration (Cmax) to half of Cmax in the blood plasma. It is denoted by the abbreviation .
This is used to measure the removal of things such as metabolites, drugs, and signalling molecules from the body. Typically, the biological half-life refers to the body's natural detoxification (cleansing) through liver metabolism and through the excretion of the measured substance through the kidneys and intestines. This concept is used when the rate of removal is roughly exponential.
In a medical context, half-life explicitly describes the time it takes for the blood plasma concentration of a substance to halve (plasma half-life) its steady-state when circulating in the full blood of an organism. This measurement is useful in medicine, pharmacology and pharmacokinetics because it helps determine how much of a drug needs to be taken and how frequently it needs to be taken if a certain average amount is needed constantly. By contrast, the stability of a substance in plasma is described as plasma stability. This is essential to ensure accurate analysis of drugs in plasma and for drug discovery.
The relationship between the biological and plasma half-lives of a substance can be complex depending on the substance in question, due to factors including accumulation in tissues, protein binding, active metabolites, and receptor interactions.
Examples
Water
The biological half-life of water in a human is about 7 to 14 days. It can be altered by behavior. Drinking large amounts of alcohol will reduce the biological half-life of water in the body. This has been used to decontaminate patients who are internally contaminated with tritiated water. The basis of this decontamination method is to increase the rate at which the water in the body is replaced with new water.
Alcohol
The removal of ethanol (drinking alcohol) from the human body through oxidation by alcohol dehydrogenase in the liver is limited. Hence the removal of a large concentration of alcohol from blood may follow zero-order kinetics. Also, the rate-limiting steps for one substance may be in common with other substances. For instance, the blood alcohol concentration can be used to modify the biochemistry of methanol and ethylene glycol. In this way the oxidation of methanol to the toxic formaldehyde and formic acid in the human body can be prevented by giving an appropriate amount of ethanol to a person who has ingested methanol. Methanol is very toxic and causes blindness and death. A person who has ingested ethylene glycol can be treated in the same way. The half-life also depends on the metabolic rate of the individual in question.
Common prescription medications
Metals
The biological half-life of caesium in humans is between one and four months. This can be shortened by feeding the person prussian blue. The prussian blue in the digestive system acts as a solid ion exchanger which absorbs the caesium while releasing potassium ions.
For some substances, it is important to think of the human or animal body as being made up of several parts, each with its own affinity for the substance, and each part with a different biological half-life (physiologically-based pharmacokinetic modelling). Attempts to remove a substance from the whole organism may have the effect of increasing the burden present in one part of the organism. For instance, if a person who is contaminated with lead is given EDTA in a chelation therapy, then while the rate at which lead is lost from the body will be increased, the lead within the body tends to relocate into the brain where it can do the most harm.
Polonium in the body has a biological half-life of about 30 to 50 days.
Caesium in the body has a biological half-life of about one to four months.
Mercury (as methylmercury) in the body has a half-life of about 65 days.
Lead in the blood has a half life of 28–36 days.
Lead in bone has a biological half-life of about ten years.
Cadmium in bone has a biological half-life of about 30 years.
Plutonium in bone has a biological half-life of about 100 years.
Plutonium in the liver has a biological half-life of about 40 years.
Peripheral half-life
Some substances may have different half-lives in different parts of the body. For example, oxytocin has a half-life of typically about three minutes in the blood when given intravenously. Peripherally administered (e.g. intravenous) peptides like oxytocin cross the blood-brain-barrier very poorly, although very small amounts (< 1%) do appear to enter the central nervous system in humans when given via this route. In contrast to peripheral administration, when administered intranasally via a nasal spray, oxytocin reliably crosses the blood–brain barrier and exhibits psychoactive effects in humans. In addition, unlike the case of peripheral administration, intranasal oxytocin has a central duration of at least 2.25 hours and as long as 4 hours. In likely relation to this fact, endogenous oxytocin concentrations in the brain have been found to be as much as 1000-fold higher than peripheral levels.
Rate equations
First-order elimination
Half-times apply to processes where the elimination rate is exponential. If C(t) is the concentration of a substance at time t, its time dependence is given by
C(t) = C0 e^(−kt)
where k is the reaction rate constant. Such a decay rate arises from a first-order reaction where the rate of elimination is proportional to the amount of the substance:
dC/dt = −kC
The half-life for this process is
t1/2 = ln(2) / k
Alternatively, half-life is given by
t1/2 = ln(2) / λz
where λz is the slope of the terminal phase of the time–concentration curve for the substance on a semilogarithmic scale.
Half-life is determined by clearance (CL) and volume of distribution (VD) and the relationship is described by the following equation:
t1/2 = ln(2) · VD / CL
In clinical practice, this means that it takes 4 to 5 times the half-life for a drug's serum concentration to reach steady state after regular dosing is started, stopped, or the dose changed. So, for example, digoxin has a half-life (or t) of 24–36 h; this means that a change in the dose will take the best part of a week to take full effect. For this reason, drugs with a long half-life (e.g., amiodarone, elimination t of about 58 days) are usually started with a loading dose to achieve their desired clinical effect more quickly.
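The rule of thumb about reaching steady state can be checked with a few lines of Python (a minimal sketch, not from the source; the 30-hour figure is simply the midpoint of the 24–36 h digoxin half-life quoted above):

```python
import math

def fraction_of_steady_state(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the final steady-state level reached t_hours after a dosing change,
    assuming simple first-order (exponential) kinetics."""
    k = math.log(2) / half_life_hours
    return 1.0 - math.exp(-k * t_hours)

digoxin_half_life = 30.0   # hours, mid-range of the 24-36 h quoted above (assumed)

for n in (1, 2, 3, 4, 5):
    t = n * digoxin_half_life
    frac = fraction_of_steady_state(t, digoxin_half_life)
    print(f"{n} half-lives ({t:.0f} h): {frac * 100:.1f}% of steady state")
# After 4 to 5 half-lives the concentration is within about 3-6% of its final
# steady-state value, which is the clinical rule of thumb mentioned above.
```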
Biphasic half-life
Many drugs follow a biphasic elimination curve — first a steep slope then a shallow slope:
STEEP (initial) part of curve —> initial distribution of the drug in the body.
SHALLOW part of curve —> ultimate excretion of drug, which is dependent on the release of the drug from tissue compartments into the blood.
The longer half-life is called the terminal half-life and the half-life of the largest component is called the dominant half-life. For a more detailed description see Pharmacokinetics § Multi-compartmental models.
See also
Half-life, pertaining to the general mathematical concept in physics or pharmacology.
Effective half-life
References
Pharmacokinetics
Mathematics in medicine
Temporal exponentials | Biological half-life | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,541 | [
"Pharmacology",
"Physical quantities",
"Time",
"Pharmacokinetics",
"Applied mathematics",
"Temporal exponentials",
"Spacetime",
"Mathematics in medicine"
] |
3,206,138 | https://en.wikipedia.org/wiki/Linear%20alternator | A linear alternator is essentially a linear motor used as an electrical generator.
An alternator is a type of alternating current (AC) electrical generator. The devices are often physically equivalent. The principal difference is in how they are used and which direction the energy flows. An alternator converts mechanical energy to electrical energy, whereas a motor converts electrical energy to mechanical energy. Like many electric motors and electric generators, the linear alternator works by the principle of electromagnetic induction. However, most alternators work with rotary motion, whereas linear alternators work with linear motion (i.e. motion in a straight line).
Theory
A linear alternator is most commonly used to convert back-and-forth motion directly into electrical energy. This eliminates the need for a crank or linkage to convert a reciprocating motion to a rotary motion in order to drive a rotary generator.
Applications
The simplest type of linear alternator is the mechanically powered flashlight (shake type). This is a torch (UK) or flashlight (USA) which contains a coil and a permanent magnet. When the appliance is shaken back and forth, the magnet oscillates through the coil and induces an electric current. This current is used to charge a capacitor, thus storing energy for later use. The appliance can then produce light, typically from a light-emitting diode, until the capacitor is discharged. It can then be re-charged by further shaking.
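A rough order-of-magnitude estimate (all numbers below are assumed, illustrative values, not from the source) shows why a few shakes per second are enough to charge a small capacitor: Faraday's law gives an induced EMF of N·dΦ/dt, and treating the flux through the coil as varying sinusoidally at the shake frequency yields a peak EMF of a volt or two.

```python
import math

# Assumed illustrative values for a shake flashlight; real devices vary widely.
N = 2000            # number of turns in the coil
delta_phi = 5e-5    # Wb, peak flux change as the magnet passes through the coil
shake_freq = 4.0    # Hz, shakes per second

# Faraday's law with a sinusoidal flux approximation: peak EMF = N * delta_phi * 2*pi*f.
peak_emf = N * delta_phi * 2 * math.pi * shake_freq
print(f"peak EMF ≈ {peak_emf:.1f} V")   # roughly a couple of volts, enough to charge
                                        # a small capacitor through a rectifier
```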
Other devices that use linear alternators to generate electricity include the free-piston linear generator, an internal combustion engine, and the free-piston Stirling engine, an external combustion engine.
External links
Linear Alternators in Free Piston Engines
Alternators
Electrical generators | Linear alternator | [
"Physics",
"Technology"
] | 354 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
3,206,654 | https://en.wikipedia.org/wiki/Krein%E2%80%93Milman%20theorem | In the mathematical theory of functional analysis, the Krein–Milman theorem is a proposition about compact convex sets in locally convex topological vector spaces (TVSs).
This theorem generalizes to infinite-dimensional spaces and to arbitrary compact convex sets the following basic observation: a convex (i.e. "filled") triangle, including its perimeter and the area "inside of it", is equal to the convex hull of its three vertices, where these vertices are exactly the extreme points of this shape.
This observation also holds for any other convex polygon in the plane.
Statement and definitions
Preliminaries and definitions
Throughout, X will be a real or complex vector space.
For any elements x and y in a vector space, the set [x, y] := {tx + (1 − t)y : 0 ≤ t ≤ 1} is called the closed line segment or closed interval between x and y. The open line segment or open interval between x and y is (x, x) := ∅ when x = y, while it is (x, y) := {tx + (1 − t)y : 0 < t < 1} when x ≠ y; it satisfies (x, y) = [x, y] \ {x, y} and [x, y] = (x, y) ∪ {x, y}. The points x and y are called the endpoints of these intervals. An interval is said to be non-degenerate or proper if its endpoints are distinct.
The closed intervals always contain their endpoints, while the open intervals never contain either of their endpoints.
If x and y are points in the real line ℝ, then the above definition of [x, y] is the same as its usual definition as a closed interval.
For any points p, x and y in X, the point p is said to lie (strictly) between x and y if p belongs to the open line segment (x, y).
If K is a subset of X and p ∈ K, then p is called an extreme point of K if it does not lie between any two distinct points of K. That is, if there do not exist points x, y ∈ K and 0 < t < 1 such that x ≠ y and p = tx + (1 − t)y. In this article, the set of all extreme points of K will be denoted by extreme(K).
For example, the vertices of any convex polygon in the plane are the extreme points of that polygon.
The extreme points of the closed unit disk in ℝ² form the unit circle.
Every open interval and degenerate closed interval in ℝ has no extreme points, while the extreme points of a non-degenerate closed interval [x, y] are x and y.
A set S is called convex if for any two points x, y ∈ S, it contains the line segment [x, y]. The smallest convex set containing S is called the convex hull of S and it is denoted by co(S).
The closed convex hull of a set S is the smallest closed and convex set containing S. It is also equal to the intersection of all closed convex subsets that contain S and to the closure of the convex hull of S; that is, the closed convex hull of S equals cl(co(S)), the closure of co(S).
For example, the convex hull of any set of three distinct points forms either a closed line segment (if they are collinear) or else a solid (that is, "filled") triangle, including its perimeter.
In the plane ℝ², the unit circle is not convex, but the closed unit disk is convex; furthermore, this disk is equal to the convex hull of the circle.
The separable Hilbert space ℓ² of square-summable sequences with the usual norm has a compact subset S whose convex hull co(S) is not closed and thus also not compact. However, as in all complete Hausdorff locally convex spaces, the closed convex hull of this compact subset is compact. But if a Hausdorff locally convex space is not complete, then it is in general not guaranteed that the closed convex hull of S will be compact whenever S is; an example can even be found in a (non-complete) pre-Hilbert vector subspace of ℓ². Every compact subset is totally bounded (also called "precompact"), and the closed convex hull of a totally bounded subset of a Hausdorff locally convex space is guaranteed to be totally bounded.
Statement
Krein–Milman theorem: If K is a compact subset of a Hausdorff locally convex topological vector space X, then the closed convex hull of K is equal to the closed convex hull of its extreme points.
In the case where the compact set is also convex, the above theorem has as a corollary the following statement, which is also often called the Krein–Milman theorem.
Krein–Milman theorem (KM): A compact convex subset K of a Hausdorff locally convex topological vector space is equal to the closed convex hull of its extreme points; that is, K = cl(co(extreme(K))).
The convex hull of the extreme points of K forms a convex subset of K, so the main burden of the proof is to show that there are enough extreme points so that their convex hull covers all of K.
For this reason, the following corollary to the above theorem is also often called the Krein–Milman theorem.
Krein–Milman theorem (existence form): Every non-empty compact convex subset of a Hausdorff locally convex topological vector space has an extreme point; that is, the set of its extreme points is not empty.
To visualize this theorem and its conclusion, consider the particular case where K is a convex polygon.
In this case, the corners of the polygon (which are its extreme points) are all that is needed to recover the polygon shape.
The statement of the theorem is false if the polygon is not convex, as then there are many ways of drawing a polygon having given points as corners.
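A finite-dimensional illustration (not from the source) can be carried out numerically: for a finite point set in the plane, the extreme points of its convex hull are exactly the hull's vertices, and the convex hull of those vertices alone recovers the same convex set. The SciPy-based sketch below assumes nothing beyond standard scipy.spatial.ConvexHull behaviour.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.random((30, 2))                 # 30 random points in the unit square

hull = ConvexHull(points)
extreme = points[hull.vertices]              # extreme points = vertices of the hull
hull_of_extreme = ConvexHull(extreme)

print(len(extreme), "extreme points out of", len(points))
# In 2D, ConvexHull.volume is the enclosed area; both hulls enclose the same region.
print(np.isclose(hull.volume, hull_of_extreme.volume))
```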
The requirement that the convex set be compact can be weakened to give the following strengthened generalized version (SKM) of the theorem: the conclusion remains valid for a non-empty convex subset K with the property that every family of closed convex subsets of K having the finite intersection property has non-empty intersection.
This property of K is sometimes called convex compactness.
Compactness implies convex compactness because a topological space is compact if and only if every family of closed subsets having the finite intersection property (FIP) has non-empty intersection (that is, its kernel is not empty).
The definition of convex compactness is similar to this characterization of compact spaces in terms of the FIP, except that it only involves those closed subsets that are also convex (rather than all closed subsets).
More general settings
The assumption of local convexity for the ambient space is necessary, because James Roberts constructed a counter-example in the non-locally convex space L^p[0, 1] where 0 < p < 1.
Linearity is also needed, because the statement fails for weakly compact convex sets in CAT(0) spaces. However, the Krein–Milman theorem does hold for compact CAT(0) spaces.
Related results
Under the previous assumptions on K, if T is a subset of K and the closed convex hull of T is all of K, then every extreme point of K belongs to the closure of T.
This result is known as Milman's (partial) converse to the Krein–Milman theorem.
The Choquet–Bishop–de Leeuw theorem states that every point in K is the barycenter of a probability measure supported on the set of extreme points of K.
Relation to the axiom of choice
Under the Zermelo–Fraenkel set theory (ZF) axiomatic framework, the axiom of choice (AC) suffices to prove all versions of the Krein–Milman theorem given above, including statement KM and its generalization SKM.
The axiom of choice also implies, but is not equivalent to, the Boolean prime ideal theorem (BPI), which is equivalent to the Banach–Alaoglu theorem.
Conversely, the Krein–Milman theorem KM together with the Boolean prime ideal theorem (BPI) imply the axiom of choice.
In summary, AC holds if and only if both KM and BPI hold.
It follows that under ZF, the axiom of choice is equivalent to the following statement:
The closed unit ball of the continuous dual space of any real normed space has an extreme point.
Furthermore, SKM together with the Hahn–Banach theorem for real vector spaces (HB) are also equivalent to the axiom of choice. It is known that BPI implies HB, but that it is not equivalent to it (said differently, BPI is strictly stronger than HB).
History
The original statement proved by Mark Krein and David Milman (1940) was somewhat less general than the form stated here.
Earlier, Hermann Minkowski proved that if the ambient space is 3-dimensional then K equals the convex hull of the set of its extreme points. This assertion was expanded to the case of any finite dimension by Ernst Steinitz.
The Krein–Milman theorem generalizes this to arbitrary locally convex spaces; however, to generalize from finite- to infinite-dimensional spaces, it is necessary to use the closure of the convex hull.
See also
Citations
Bibliography
N. K. Nikol'skij (Ed.). Functional Analysis I. Springer-Verlag, 1992.
H. L. Royden, Real Analysis. Prentice-Hall, Englewood Cliffs, New Jersey, 1988.
Convex hulls
Oriented matroids
Theorems involving convexity
Theorems in convex geometry
Theorems in discrete geometry
Theorems in functional analysis
Topological vector spaces | Krein–Milman theorem | [
"Mathematics"
] | 1,567 | [
"Theorems in mathematical analysis",
"Vector spaces",
"Topological vector spaces",
"Space (mathematics)",
"Theorems in convex geometry",
"Theorems in functional analysis",
"Theorems in discrete mathematics",
"Theorems in geometry",
"Theorems in discrete geometry"
] |
3,206,764 | https://en.wikipedia.org/wiki/Electrolysis%20of%20water | Electrolysis of water is the use of electricity to split water into oxygen (O2) and hydrogen (H2) gas. Hydrogen gas released in this way can be used as hydrogen fuel, but must be kept apart from the oxygen, as the mixture would be extremely explosive. Separately pressurised into convenient 'tanks' or 'gas bottles', hydrogen can be used for oxyhydrogen welding and other applications, as the hydrogen/oxygen flame can reach approximately 2,800 °C.
Water electrolysis requires a minimum potential difference of 1.23 volts, although at that voltage external heat is also required; typically about 1.5 volts is applied. Electrolysis is rare in industrial applications since hydrogen can be produced less expensively from fossil fuels. Most of the time, hydrogen is made by splitting methane (CH4) into carbon dioxide (CO2) and hydrogen (H2) via steam reforming. This carbon-intensive process means that for every kilogram of “grey” hydrogen produced, approximately 10 kilograms of CO2 are emitted into the atmosphere.
History
In 1789, Jan Rudolph Deiman and Adriaan Paets van Troostwijk used an electrostatic machine to make electricity that was discharged on gold electrodes in a Leyden jar. In 1800, Alessandro Volta invented the voltaic pile, while a few weeks later English scientists William Nicholson and Anthony Carlisle used it to electrolyse water. In 1806 Humphry Davy reported the results of extensive distilled water electrolysis experiments, concluding that nitric acid was produced at the anode from dissolved atmospheric nitrogen. He used a high voltage battery and non-reactive electrodes and vessels such as gold electrode cones that doubled as vessels bridged by damp asbestos. Zénobe Gramme invented the Gramme machine in 1869, making electrolysis a cheap method for hydrogen production. A method of industrial synthesis of hydrogen and oxygen through electrolysis was developed by Dmitry Lachinov in 1888.
Principles
A DC electrical power source is connected to two electrodes, or two plates (typically made from an inert metal such as platinum or iridium) that are placed in the water. Hydrogen appears at the cathode (where electrons enter the water), and oxygen at the anode. Assuming ideal faradaic efficiency, the amount of hydrogen generated is twice the amount of oxygen, and both are proportional to the total electrical charge conducted by the solution. However, in many cells competing side reactions occur, resulting in additional products and less than ideal faradaic efficiency.
Electrolysis of pure water requires excess energy in the form of overpotential to overcome various activation barriers. Without the excess energy, electrolysis occurs slowly or not at all. This is in part due to the limited self-ionization of water.
Pure water has an electrical conductivity about one millionth that of seawater.
Efficiency is increased through the addition of an electrolyte (such as a salt, an acid or a base) and electrocatalysts.
Equations
In pure water at the negatively charged cathode, a reduction reaction takes place, with electrons (e−) from the cathode being given to hydrogen cations to form hydrogen gas. At the positively charged anode, an oxidation reaction occurs, generating oxygen gas and giving electrons to the anode to complete the circuit.
The two half-reactions, reduction and oxidation, are coupled to form a balanced system. In order to balance each half-reaction, the water needs to be acidic or basic. In the presence of acid, the equations are:
In the presence of base, the equations are:
Cathode (reduction): 2 H2O + 2 e− → H2 + 2 OH−
Anode (oxidation): 4 OH− → O2 + 2 H2O + 4 e−
Combining either half-reaction pair yields the same overall decomposition of water into oxygen and hydrogen:
2 H2O → 2 H2 + O2
The number of hydrogen molecules produced is thus twice the number of oxygen molecules, in keeping with the facts that both hydrogen and oxygen are diatomic molecules and water molecules contain twice as many hydrogen atoms as oxygen atoms. Assuming equal temperature and pressure for both gases, volume is proportional to moles, so twice as large a volume of hydrogen gas is produced as oxygen gas. The number of electrons pushed through the water is twice the number of generated hydrogen molecules and four times the number of generated oxygen molecules.
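These ratios follow directly from Faraday's law of electrolysis; a brief sketch in Python, assuming ideal faradaic efficiency (no competing side reactions):

```python
# Ideal electrolysis yields from Faraday's law: each H2 molecule requires 2
# electrons and each O2 molecule requires 4.
F = 96485.0  # Faraday constant, C per mole of electrons

def ideal_yield(charge_coulombs):
    n_e = charge_coulombs / F   # moles of electrons passed through the cell
    n_h2 = n_e / 2.0            # moles of hydrogen produced
    n_o2 = n_e / 4.0            # moles of oxygen produced
    return n_e, n_h2, n_o2

n_e, n_h2, n_o2 = ideal_yield(charge_coulombs=96485.0)  # one mole of electrons
print(n_h2 / n_o2)   # -> 2.0: twice as much hydrogen as oxygen, as stated above
```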
Thermodynamics
The decomposition of pure water into hydrogen and oxygen at standard temperature and pressure is not favorable in thermodynamic terms.
Thus, the standard potential of the water electrolysis cell (E°cell = E°cathode − E°anode) is −1.229 V at 25 °C and pH 0 ([H+] = 1.0 M). At 25 °C with pH 7 ([H+] = 1.0×10−7 M), the potential is unchanged based on the Nernst equation. The thermodynamic standard cell potential can be obtained from standard-state free energy calculations to find ΔG° and then using the equation ΔG° = −nFE° (where E° is the cell potential and F the Faraday constant, 96,485 C/mol). For two water molecules electrolysed and hence two hydrogen molecules formed, n = 4, and
ΔG° = 474.48 kJ/2 mol(water) = 237.24 kJ/mol(water)
ΔS° = 163 J/K mol(water)
ΔH° = 571.66 kJ/2 mol(water) = 285.83 kJ/mol(water)
and 141.86 kJ/g(H2).
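These figures can be checked directly against the relation ΔG° = −nFE°; a short numerical sketch in Python using the values quoted above:

```python
# Standard cell potential from the free energy of decomposition, E = dG/(nF),
# for two moles of water (n = 4 electrons transferred).
F = 96485.0        # Faraday constant, C/mol
dG = 474.48e3      # J per 2 mol of water electrolysed
n = 4              # electrons transferred per 2 mol of water

E_cell = dG / (n * F)
print(round(E_cell, 3))   # -> 1.229 V, matching the cell potential stated above
```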
However, calculations regarding individual electrode equilibrium potentials requires corrections to account for the activity coefficients. In practice when an electrochemical cell is "driven" toward completion by applying reasonable potential, it is kinetically controlled. Therefore, activation energy, ion mobility (diffusion) and concentration, wire resistance, surface hindrance including bubble formation (blocks electrode area), and entropy, require greater potential to overcome. The amount of increase in required potential is termed the overpotential.
Electrolyte
Electrolysis in pure water consumes/reduces H+ cations at the cathode and consumes/oxidizes hydroxide (OH−) anions at the anode. This can be verified by adding a pH indicator to the water: Water near the cathode is basic while water near the anode is acidic. The hydroxides OH− that approach the anode mostly combine with the positive hydronium ions (H3O+) to form water. The positive hydronium ions that approach the cathode mostly combine with negative hydroxide ions to form water. Relatively few hydroniums/hydroxide ions reach the cathode/anode. This can cause overpotential at both electrodes.
Pure water has a charge carrier density similar to semiconductors since it has a low autoionization, Kw = 1.0×10−14 at room temperature and thus pure water conducts current poorly, 0.055 μS/cm. Unless a large potential is applied to increase the autoionization of water, electrolysis of pure water proceeds slowly, limited by the overall conductivity.
An aqueous electrolyte can considerably raise conductivity. The electrolyte disassociates into cations and anions; the anions rush towards the anode and neutralize the buildup of positively charged H+ there; similarly, the cations rush towards the cathode and neutralize the buildup of negatively charged OH− there. This allows the continuous flow of electricity.
Anions from the electrolyte compete with the hydroxide ions to give up an electron. An electrolyte anion with less standard electrode potential than hydroxide will be oxidized instead of the hydroxide, producing no oxygen gas. Likewise, a cation with a greater standard electrode potential than a hydrogen ion will be reduced instead of hydrogen.
Various cations have lower electrode potential than H+ and are therefore suitable for use as electrolyte cations: Li+, Rb+, K+, Cs+, Ba2+, Sr2+, Ca2+, Na+, and Mg2+. Sodium and potassium are common choices, as they form inexpensive, soluble salts.
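As an illustration of this selection rule, the following sketch compares approximate textbook standard reduction potentials against that of the hydrogen ion; the listed values are illustrative examples, not data taken from this article.

```python
# Approximate standard reduction potentials (volts vs. SHE, textbook values).
E_red = {
    "Li+": -3.04, "K+": -2.93, "Ca2+": -2.87, "Na+": -2.71,
    "Mg2+": -2.37, "H+": 0.00, "Cu2+": 0.34, "Ag+": 0.80,
}

# A cation is a suitable electrolyte choice when it is harder to reduce than
# the hydrogen ion, i.e. its reduction potential lies below that of H+.
suitable = [ion for ion, E in E_red.items() if E < E_red["H+"]]
print(suitable)   # Li+, K+, Ca2+, Na+, Mg2+: hydrogen still evolves at the cathode
```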
If an acid is used as the electrolyte, the cation is H+, and no competitor for the H+ is created by disassociating water. The most commonly used anion is sulfate (), as it is difficult to oxidize. The standard potential for oxidation of this ion to the peroxydisulfate ion is +2.010 volts.
Strong acids such as sulfuric acid (H2SO4), and strong bases such as potassium hydroxide (KOH), and sodium hydroxide (NaOH) are common choices as electrolytes due to their strong conducting abilities.
A solid polymer electrolyte can be used such as Nafion and when applied with an appropriate catalyst on each side of the membrane can efficiently electrolyze with as little as 1.5 volts. Several commercial electrolysis systems use solid electrolytes.
Pure water
Electrolyte-free pure water electrolysis has been achieved via deep-sub-Debye-length nanogap electrochemical cells. When the gap between cathode and anode are smaller than Debye-length (1 micron in pure water, around 220 nm in distilled water), the double layer regions from two electrodes can overlap, leading to a uniformly high electric field distributed across the entire gap. Such a high electric field can significantly enhance ion transport (mainly due to migration), further enhancing self-ionization, continuing the reaction and showing little resistance between the two electrodes. In this case, the two half-reactions are coupled and limited by electron-transfer steps (the electrolysis current is saturated at shorter electrode distances).
Seawater
Ambient seawater presents challenges because of the presence of salt and other impurities. Approaches may or may not involve desalination before electrolysis. Traditional electrolysis of seawater produces toxic and corrosive chlorine species (such as chlorine and hypochlorite). Multiple methods have been advanced for electrolysing unprocessed seawater. Typical proton exchange membrane (PEM) electrolysers require desalination.
Indirect seawater electrolysis involves two steps: desalting seawater using a pre-treatment device and then producing hydrogen through traditional water electrolysis. This method improves efficiency, reduces corrosion, and extends catalyst lifespan. Some argue that the costs of seawater desalination are relatively small compared to water splitting, suggesting that research should focus on developing more efficient two-step desalination-coupled water splitting processes.
However, indirect seawater electrolysis plants require more space, energy, and more maintenance, and some believe that the water purity achieved through seawater reverse osmosis (SWRO) may not be sufficient, necessitating additional equipment and cost. In contrast, direct seawater electrolysis skips the pre-treatment step and introduces seawater directly into the electrolyzer to produce hydrogen. This approach is seen as more promising due to limited freshwater resources, the need to prioritize basic human needs, and the potential to reduce energy consumption and costs. Membranes are critical for the efficiency of electrolysis, but they can be negatively affected by foreign ions in seawater, shortening their lifespan and reducing the efficiency of the electrolysis process.
One approach involves combining forward osmosis membranes with water splitting to produce hydrogen continuously from impure water sources. Water splitting generates a concentration gradient balanced by water influx via forward osmosis, allowing for continual extraction of pure water. However, this configuration has challenges such as the potential for Cl ions to pass through the membrane and cause damage, as well as the risk of hydrogen and oxygen mixing without a separator.
To address these issues, a low-cost semipermeable membrane was introduced between the electrodes to separate the generated gases, reducing membrane costs and minimizing Cl oxidation. Additionally, research shows that using transition metal-based materials can support water electrolysis efficiently. Some studies have explored the use of low-cost reverse osmosis membranes (<10$/m2) to replace expensive ion exchange membranes (500-1000$/m2). The use of reverse osmosis membranes becomes economically attractive in water electrolyzer systems as opposed to ion exchange membranes due to their cost-effectiveness and the high proton selectivity they offer for cation salts, especially when high-concentration electrolytes are employed.
An alternative method involves employing a hydrophobic membrane to prevent ions from entering the cell stack. This method combines a hydrophobic porous polytetrafluoroethylene (PTFE) waterproof breathable membrane with a self-dampening electrolyte, utilizing a hygroscopic sulfuric acid solution with a commercial alkaline electrolyzer to generate hydrogen gas from seawater. At a larger scale, this seawater electrolysis system can consistently produce 386 L of H2 per hour for over 3200 hours without experiencing significant catalyst corrosion or membrane wetting. The process capitalizes on the disparity in water vapor pressure between seawater and the self-dampening electrolyte to drive seawater evaporation and water vapor diffusion, followed by the liquefaction of the adsorbed water vapor on the self-dampening electrolyte.
Techniques
As of 2022, commercial electrolysis requires around 53 kWh of electricity to produce one kg of hydrogen, which holds 39.4 kWh (HHV) of energy.
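Dividing the energy content of the product by the electricity input gives the corresponding efficiency; a one-line check in Python using the figures above:

```python
# HHV energy efficiency implied by the 2022 figures quoted above.
hhv_kwh_per_kg = 39.4    # energy content (HHV) of 1 kg of hydrogen
input_kwh_per_kg = 53.0  # electricity consumed per kg of hydrogen

efficiency = hhv_kwh_per_kg / input_kwh_per_kg
print(f"{efficiency:.0%}")   # -> about 74%
```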
Fundamental demonstration
Two leads, running from the terminals of a battery, placed in a cup of water with a quantity of electrolyte establish conductivity. Using NaCl (salt) in an electrolyte solution yields chlorine gas rather than oxygen due to a competing half-reaction. Sodium bicarbonate (baking soda) instead yields hydrogen, and carbon dioxide for as long as the bicarbonate anion stays in solution.
Hofmann voltameter
The Hofmann voltameter is a small-scale electrolytic cell. It consists of three joined upright cylinders. The inner cylinder is open at the top to allow the addition of water and electrolyte. A platinum electrode (plate or honeycomb) is placed at the bottom of each of the two side cylinders, connected to the terminals of an electricity source. The generated gases displace water and collect at the top of the two outer tubes, where it can be drawn off with a stopcock.
High-pressure
High-pressure electrolysis involves compressed hydrogen output around 12–20 MPa (120–200 bar, 1740–2900 psi). By pressurising the hydrogen in the electrolyser, the need for an external hydrogen compressor is eliminated; the average additional energy consumption for this internal compression is around 3%.
High-temperature
High-temperature electrolysis (also HTE or steam electrolysis) is more efficient at higher temperatures. Part of the energy is supplied as heat, for example from a heat engine, and heat is typically cheaper than electricity.
Alkaline water electrolysis
Proton exchange membrane
A proton-exchange membrane electrolyser separates reactants and transports protons while blocking a direct electronic pathway through the membrane. PEM fuel cells use a solid polymer membrane (a thin plastic film) which is permeable to hydrogen ions (protons) when it is saturated with water, but does not conduct electrons.
It uses a proton-exchange membrane, or polymer-electrolyte membrane (PEM), which is a semipermeable membrane generally made from ionomers and designed to conduct protons while acting as an insulator and reactant barrier, e.g. to oxygen and hydrogen gas. Proton-exchange membranes are primarily characterized by proton conductivity (σ), methanol permeability (P), and thermal stability.
PEMs can be made from either pure polymer or from composite membranes, where other materials are embedded in a polymer matrix. One of the most common commercially available materials is the fluoropolymer (PFSA) Nafion. Nafion is an ionomer with a perfluorinated backbone such as Teflon. Many other structural motifs are used to make ionomers for proton-exchange membranes. Many use polyaromatic polymers, while others use partially fluorinated polymers.
Anion exchange membrane
Anion exchange membrane electrolysis employs an anion-exchange membrane (AEM) to achieve the separation of products, provide electrical insulation between electrodes, and facilitate ion conduction. In contrast to PEM electrolysis, AEM electrolysis allows for the conduction of hydroxide ions. A noteworthy benefit of AEM-based water electrolysis is the elimination of the need for expensive noble metal catalysts, as cost-effective transition metal catalysts can be utilized in their place.
Supercritical water
Supercritical water electrolysis (SWE) uses water in a supercritical state. Supercritical water requires less energy, therefore reducing costs. It operates at >375 °C, which reduces thermodynamic barriers and increases kinetics, improving ionic conductivity over liquid or gaseous water, which reduces ohmic losses. Benefits include improved electrical efficiency, >221 bar pressurised delivery of product gases, ability to operate at high current densities and low dependence on precious metal catalysts. As of 2021, commercial SWE equipment was not available.
Nickel/iron
In 2014, researchers announced electrolysis using nickel and iron catalysts rather than precious metals. Nickel-metal/nickel-oxide structure is more active than nickel metal or nickel oxide alone. The catalyst significantly lowers the required voltage. Nickel–iron batteries are under investigation for use as combined batteries and electrolysers. Those "battolysers" could be charged and discharged like conventional batteries, and would produce hydrogen when fully charged.
In 2023, researchers in Australia announced the use of a porous sheet of nitrogen-doped nickel molybdenum phosphide catalyst. The nitrogen doping increases conductivity and optimizes electronic density and surface chemistry. This produces additional catalytic sites. The nitrogen bonds to the surface metals and has electro-negative properties that help exclude unwanted ions and molecules, while phosphate, sulfate, nitrate and hydroxyl surface ions block chlorine and prevent corrosion. 10 mA/cm2 can be achieved using 1.52 and 1.55 V in alkaline electrolyte and seawater, respectively.
Nanogap electrochemical cells
In 2017, researchers reported nanogap electrochemical cells that achieved high-efficiency electrolyte-free pure water electrolysis at ambient temperature. In these cells, the two electrodes are so close to each other (smaller than Debye-length) that the mass transport rate can be higher than the electron-transfer rate, leading to two half-reactions coupled together and limited by the electron-transfer step. Experiments show that the electrical current density can be larger than that from 1 mol/L sodium hydroxide solution. Its "Virtual Breakdown Mechanism", is completely different from traditional electrochemical theory, due to such nanogap size effects.
Capillary fed
A capillary-fed electrolyzer cell is claimed to require only 41.5 kWh to produce 1 kg of hydrogen. The water electrolyte is isolated from the electrodes by a porous, hydrophilic separator. The water is drawn into the electrolyzer by capillary action, while the electrolyzed gases pass out on either side. It extends PEM technology by eliminating bubbles that reduce the contact between the electrodes and the electrolyte, reducing efficiency. The design is claimed to operate at 98% energy efficiency (higher heating value of hydrogen). The design forgoes water circulation, separator tanks, and other mechanism and can be air- or radiatively cooled. The effect of the build-up of impurities in the cell from those initially present in the feed water is not yet available.
Applications
About five percent of hydrogen gas produced worldwide is created by electrolysis. The vast majority of current industrial hydrogen production is from natural gas in the steam reforming process, or from the partial oxidation of coal or heavy hydrocarbons. The majority of the hydrogen produced through electrolysis is a side product in the production of chlorine and caustic soda. This is a prime example of a competing side reaction.
In the chloralkali process (electrolysis of brine) a water/sodium chloride mixture is only half the electrolysis of water since the chloride ions are oxidized to chlorine rather than water being oxidized to oxygen. Thermodynamically, this would not be expected since the oxidation potential of the chloride ion is less than that of water, but the rate of the chloride reaction is much greater than that of water, causing it to predominate. The hydrogen produced from this process is either burned (converting it back to water), used for the production of specialty chemicals, or various other small-scale applications.
Water electrolysis is also used to generate oxygen for the International Space Station.
Many industrial electrolysis cells are similar to Hofmann voltameters, with platinum plates or honeycombs as electrodes. Generally, hydrogen is produced for point of use applications such as oxyhydrogen torches or when high purity hydrogen or oxygen is desired. The vast majority of hydrogen is produced from hydrocarbons and as a result, contains trace amounts of carbon monoxide among other impurities. The carbon monoxide impurity can be detrimental to various systems including many fuel cells.
As electrolysers can be ramped down, they might in the future be used to cope with mismatches between electricity supply and demand.
Efficiency
Industrial output
Efficiency of modern hydrogen generators is measured by energy consumed per standard volume of hydrogen (MJ/m3), assuming standard temperature and pressure of the H2. The lower the energy used by a generator, the higher its efficiency; a 100%-efficient electrolyser would consume 39.4 kW·h per kilogram of hydrogen, the higher heating value of the product. Practical electrolysis (using a rotating electrolyser at 15 bar pressure) consumes more than this, and still more energy is needed if the hydrogen is compressed for use in hydrogen cars. By adding external heat, electricity consumption may be reduced.
There are three main technologies available on the market: alkaline, proton exchange membrane (PEM), and solid oxide electrolyzers.
Alkaline electrolyzers are cheaper in terms of investment (they generally use nickel catalysts), but least efficient. PEM electrolyzers are more expensive (they generally use expensive platinum-group metal catalysts) but are more efficient and can operate at higher current densities, and can, therefore, be possibly cheaper if the hydrogen production is large enough. Solid oxide electrolyzer cells (SOEC) are the third most common type of electrolysis, and the most expensive, and use high operating temperatures to increase efficiency. The theoretical electrical efficiency of SOEC is close to 100% at 90% hydrogen production. Degradation of the system over time does not affect the efficiency of SOEC electrolyzers initially unlike PEM and alkaline electrolyzers. As the SOEC system degrades, the cell voltage increases, producing more heat in the system naturally. Due to this, less energy is required to keep the system hot, which will make up for the energy losses from dramatic degradation initially. SOEC requires replacement of the stack after some years of degradation.
Efficiency
Electrolyzer vendors provide efficiencies based on enthalpy. To assess the claimed efficiency of an electrolyzer it is important to establish how it was defined by the vendor (i.e. what enthalpy value, what current density, etc.).
Conventional alkaline electrolysis has an efficiency of about 70%. Accounting for the accepted use of the higher heating value (because inefficiency via heat can be redirected back into the system to create the steam required by the catalyst), average working efficiencies for PEM electrolysis are around 80%. This is expected to increase to between 82 and 86% before 2030. The theoretical efficiency of PEM electrolysers is predicted to reach up to 94%.
In 2024, Australian company Hysata announced a device capable of 95% efficiency relative to the higher heating value of hydrogen. Conventional systems consume 52.5 kWh to produce hydrogen that can store 39.4 kWh of energy (1 kg). Its technology requires only 41.5 kWh to produce 1 kg. It uses a capillary-fed electrolyzer to eliminate hydrogen and oxygen bubbles in the fluid electrolyte. Bubbles are non-conductive, and can stick to electrodes, reducing electrode exposure to the electrolyte, increasing resistance. Hysata places the electrolyte at the bottom of the device. Capillary action draws it through a porous, hydrophilic separator between the electrodes. Each electrode has complete contact with the electrolyte on the inner side, and a dry chamber on the outer side. The effect of the build-up of impurities in the cell from those initially present in the feed water is not yet available.
Cost
Calculating cost is complicated, and a market price barely exists. Considering the industrial production of hydrogen, and using current best processes for water electrolysis (PEM or alkaline electrolysis) which have an effective electrical efficiency of 70–80%, producing 1 kg of hydrogen (which has a specific energy of 143 MJ/kg) requires roughly 50–55 kW·h of electricity. At an electricity cost of $0.06/kW·h, as set out in the US Department of Energy hydrogen production targets for 2015, the hydrogen cost is about $3/kg. Equipment cost depends on mass production. Operating cost is dominated by electricity, which accounts for about half of the levelised product price.
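Using the figures above, the electricity component of the cost can be estimated as specific consumption multiplied by electricity price; a short Python sketch (capital and maintenance costs excluded):

```python
# Electricity component of the hydrogen production cost.
kwh_per_kg = 52.5        # example specific consumption, kWh per kg of H2
usd_per_kwh = 0.06       # DOE 2015 reference electricity price

electricity_cost = kwh_per_kg * usd_per_kwh
print(f"${electricity_cost:.2f} per kg of H2")   # -> about $3 per kg, as above
```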
Comparison with steam-methane-reformed (SMR) hydrogen
With the range of natural gas prices from 2016 as shown in the graph (Hydrogen Production Tech Team Roadmap, November 2017) putting the cost of steam-methane-reformed (SMR) hydrogen at between $1.20 and $1.50, the cost price of hydrogen via electrolysis is still over double 2015 DOE hydrogen target prices. The US DOE target price for hydrogen in 2020 is $2.30/kg, requiring an electricity cost of $0.037/kW·h, which is achievable given 2018 PPA tenders for wind and solar in many regions. This puts the $4/gasoline gallon equivalent (gge) H2 dispensed objective well within reach, and close to a slightly elevated natural gas production cost for SMR.
In other parts of the world, the price of SMR hydrogen is between $1–3/kg on average. This makes production of hydrogen via electrolysis cost competitive in many regions already, as outlined by Nel Hydrogen and others, including an article by the IEA examining the conditions which could lead to a competitive advantage for electrolysis. The large price increase of gas during the 2021–2022 global energy crisis made hydrogen electrolysis economic in some parts of the world.
Facilities
Some large industrial electrolyzers operate at several megawatts. The largest to date is a 150 MW alkaline facility in Ningxia, China, with a capacity of up to 23,000 tonnes per year. While higher-efficiency Western electrolysis equipment can cost $1,200/kW, lower-efficiency Chinese equipment can cost $300/kW, but with a shorter lifetime of 60,000 hours.
Different analysts predict annual manufacture of electrolysis equipment by 2030 of 47 GW, 104 GW and 180 GW, respectively.
Overpotential
Real water electrolyzers require higher voltages for the reaction to proceed. The part that exceeds 1.23 V is called overpotential or overvoltage, and represents any kind of loss and nonideality in the electrochemical process.
For a well designed cell the largest overpotential is the reaction overpotential for the four-electron oxidation of water to oxygen at the anode; electrocatalysts can facilitate this reaction, and platinum alloys are the state of the art for this oxidation. Developing a cheap, effective electrocatalyst for this reaction would be a great advance, and is a topic of current research; there are many approaches, among them a 30-year-old recipe for molybdenum sulfide, graphene quantum dots, carbon nanotubes, perovskite, and nickel/nickel-oxide. Trimolybdenum phosphide (Mo3P) has recently been identified as a promising nonprecious-metal and earth-abundant candidate with outstanding catalytic properties that can be used for electrocatalytic processes. The catalytic performance of Mo3P nanoparticles has been tested in the hydrogen evolution reaction (HER), indicating an onset potential as low as 21 mV, an H2 formation rate of 214.7 μmol/(s·g)cat (at only 100 mV overpotential), and an exchange current density of 279.07 μA/cm2, which are among the closest values yet observed to platinum. The simpler two-electron reaction to produce hydrogen at the cathode can be electrocatalyzed with almost no overpotential by platinum, or in theory a hydrogenase enzyme. If other, less effective, materials are used for the cathode (e.g. graphite), large overpotentials will appear.
Thermodynamics
The electrolysis of water in standard conditions requires a theoretical minimum of 237 kJ of electrical energy input to dissociate each mole of water, which is the standard Gibbs free energy of formation of water. It also requires thermal energy to balance the change in entropy of the reaction. Therefore, the process cannot proceed at constant temperature at electrical energy inputs below 286 kJ per mol if no external thermal energy is added.
Since each mole of water requires two moles of electrons, and given that the Faraday constant F represents the charge of a mole of electrons (96485 C/mol), it follows that the minimum voltage necessary for electrolysis is about 1.23 V. If electrolysis is carried out at high temperature, this voltage reduces. This effectively allows the electrolyser to operate at more than 100% electrical efficiency. In electrochemical systems this means that heat must be supplied to the reactor to sustain the reaction. In this way thermal energy can be used for part of the electrolysis energy requirement. In a similar way the required voltage can be reduced (below 1 V) if fuels (such as carbon, alcohol, biomass) are reacted with water (PEM based electrolyzer in low temperature) or oxygen ions (solid oxide electrolyte based electrolyzer in high temperature). This results in some of the fuel's energy being used to "assist" the electrolysis process and can reduce the overall cost of hydrogen produced.
However, observing the entropy component (and other losses), voltages over 1.48 V are required for the reaction to proceed at practical current densities (the thermoneutral voltage).
In the case of water electrolysis, Gibbs free energy represents the minimum work necessary for the reaction to proceed, and the reaction enthalpy is the amount of energy (both work and heat) that has to be provided so the reaction products are at the same temperature as the reactant (i.e. standard temperature for the values given above). Potentially, an electrolyzer operating at 1.48 V would operate isothermally at a temperature of 25°C as the electrical energy supplied would be equal to the enthalpy (heat) of water decomposition and this would require 20% more electrical energy than the minimum.
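The reversible (about 1.23 V) and thermoneutral (about 1.48 V) voltages discussed above follow directly from ΔG and ΔH per mole of water; a short numerical check in Python:

```python
# Reversible and thermoneutral voltages per mole of water, n = 2 electrons.
F = 96485.0      # Faraday constant, C/mol
n = 2            # electrons transferred per water molecule
dG = 237.24e3    # J/mol, minimum electrical work (Gibbs free energy)
dH = 285.83e3    # J/mol, total energy of decomposition (enthalpy)

V_reversible = dG / (n * F)     # below this voltage electrolysis cannot proceed
V_thermoneutral = dH / (n * F)  # at this voltage no external heat is needed
print(round(V_reversible, 2), round(V_thermoneutral, 2))   # -> 1.23 1.48
```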
See also
Electrocatalyst
Electrochemistry
Electrochemical cell
Electrochemical engineering
Electrolysis
Gas cracker
Hydrogen production
Methane pyrolysis (for Hydrogen)
Noryl
Photoelectrolysis of water
Photocatalytic water splitting
Electrochemical reduction of carbon dioxide
Timeline of hydrogen technologies
Water purification
References
External links
EERE 2008 – 100 kgH2/day Trade Study
NREL 2006 – Electrolysis technical report
6.Modeling and Integration of Green-Hydrogen-Assisted Carbon Dioxide Utilization for Hydrocarbon Manufacturing
Water, electrolysis of
Hydrogen production
Industrial gases
Water chemistry | Electrolysis of water | [
"Chemistry"
] | 6,666 | [
"Electrochemistry",
"Industrial gases",
"nan",
"Electrolysis",
"Chemical process engineering"
] |
3,208,098 | https://en.wikipedia.org/wiki/ACS%20Combinatorial%20Science | ACS Combinatorial Science (usually abbreviated as ACS Comb. Sci.), formerly Journal of Combinatorial Chemistry (1999-2010), was a peer-reviewed scientific journal, published since 1999 by the American Chemical Society. ACS Combinatorial Science publishes articles, reviews, perspectives, accounts and reports in the field of Combinatorial Chemistry.
Anthony Czarnik served as the founding editor from 1999 to 2010. M.G. Finn served as Editor from 2010 to 2020. In 2010, ACS agreed to change the name of the journal to "Combinatorial Science" and it was the first and only ACS journal to be devoted to a way of doing science, rather than to a specific field of knowledge or application.
The journal stopped accepting new submissions in August 2020, and the last issue was published in December 2020.
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service (CAS)
SCOPUS
EBSCOhost
PubMed
Web of Science
References
Combinatorial Science
Academic journals established in 1999
Monthly journals
English-language journals
Combinatorial chemistry
1999 establishments in the United States | ACS Combinatorial Science | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 227 | [
"Combinatorial chemistry",
"Materials science",
"Combinatorics"
] |
3,208,126 | https://en.wikipedia.org/wiki/Organic%20Letters | Organic Letters is a biweekly peer-reviewed scientific journal covering research in organic chemistry. It was established in 1999 and is published by the American Chemical Society. In 2014, the journal moved to a hybrid open access publishing model. The founding editor-in-chief was Amos Smith. The current editor-in-chief is Marisa C. Kozlowski. The journal is abstracted and indexed in: the Science Citation Index Expanded, Scopus, Academic Search Premier, BIOSIS Previews, Chemical Abstracts Service, EMBASE, and MEDLINE.
References
External links
American Chemical Society academic journals
Biweekly journals
Organic chemistry journals
Academic journals established in 1999
English-language journals | Organic Letters | [
"Chemistry"
] | 138 | [
"Organic chemistry journals"
] |
3,208,346 | https://en.wikipedia.org/wiki/Ponceau%20S | Ponceau S, Acid Red 112, or C.I. 27195 (systematic name: 3-hydroxy-4-(2-sulfo-4-[4-sulfophenylazo]phenylazo)-2,7-naphthalenedisulfonic acid sodium salt) is a sodium salt of a diazo dye of a light red color, that may be used to prepare a stain for rapid reversible detection of protein bands on nitrocellulose or polyvinylidene fluoride (PVDF) membranes (western blotting), as well as on cellulose acetate membranes. A Ponceau S stain is useful because it does not appear to have a deleterious effect on the sequencing of blotted polypeptides and is therefore one method of choice for locating polypeptides on western blots for blot-sequencing. It is also easily reversed with water washes, facilitating subsequent immunological detection. The stain can be completely removed from the protein bands by continued washing. Common stain formulations include 0.1% (w/v) Ponceau S in 5% acetic acid or 2% (w/v) Ponceau S in 30% trichloroacetic acid and 30% sulfosalicylic acid.
See also
Coomassie brilliant blue
Western blot normalization
References
Protein methods
Azo dyes
Acid dyes | Ponceau S | [
"Chemistry",
"Biology"
] | 302 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Organic chemistry stubs"
] |
3,208,464 | https://en.wikipedia.org/wiki/MammaPrint | MammaPrint is a prognostic and predictive diagnostic test for early stage breast cancer patients that assess the risk that a tumor will metastasize to other parts of the body. It gives a binary result, high-risk or low-risk classification, and helps physicians determine whether or not a patient will benefit from chemotherapy. Women with a low risk result can safely forego chemotherapy without decreasing likelihood of disease free survival. MammaPrint is part of the personalized medicine portfolio marketed by Agendia.
MammaPrint is based on the Amsterdam 70-gene breast cancer gene signature and uses formalin-fixed-paraffin-embedded (FFPE) or fresh tissue for microarray analysis. It is a laboratory developed test (LDT) which falls into the class of In Vitro Diagnostic Multivariate Index Assays (IVDMIA). MammaPrint was the first (2007) IVDMIA to be cleared by the Food and Drug Administration (FDA) in a De Novo Classification Process (Evaluation of Automatic Class III Designation) and is the only molecular diagnostic test with a randomized prospective clinical trial validating clinical utility. The test uses RNA isolated from tumor samples and run on custom glass microarray slides in order to determine the expression of a 70-gene signature. The expression profile is then used in a proprietary algorithm to categorically classify the patient as being at either high or low risk of breast cancer recurrence.
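The actual MammaPrint algorithm is proprietary; the following is only a hypothetical sketch of how a fixed-signature, centroid-correlation classifier of this general kind can produce a binary risk call. All names, values and the threshold below are invented for illustration and do not come from Agendia's algorithm; NumPy is assumed to be available.

```python
import numpy as np

# Hypothetical centroid-correlation classifier over a fixed 70-gene signature.
def classify_risk(sample_profile, good_prognosis_centroid, threshold=0.4):
    """Return 'low risk' if the sample correlates strongly with the
    good-prognosis centroid, otherwise 'high risk'."""
    r = np.corrcoef(sample_profile, good_prognosis_centroid)[0, 1]
    return "low risk" if r > threshold else "high risk"

rng = np.random.default_rng(0)
centroid = rng.normal(size=70)                       # stand-in 70-gene centroid
sample = centroid + rng.normal(scale=0.5, size=70)   # a sample resembling it
print(classify_risk(sample, centroid))               # -> 'low risk'
```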
MammaPrint has been prospectively, clinically validated for use in early stage (I and II) breast cancer patients regardless of estrogen receptor (ER) or Human Epidermal Growth Factor Receptor 2 (HER2) status, with a tumor size ≤ 5.0 cm, and 0-3 positive lymph nodes (LN0-1), with no special specifications for N1mi pathology. This differentiates MammaPrint from other multi-gene assays in use today that have only shown predictive value in ER positive, HER2 negative, lymph node (LN) negative patients. MammaPrint is also indicated for patients with ER negative tumors (15% of tumors). There are no exclusion criteria based on histopathologic tumor type (i.e. ductal, lobular, mixed, etc.) or age. MammaPrint is predictive for pre- and post-menopausal women.
Development
The Human Genome Project identified approximately 25,000 genes in the human genome and created the possibility for personalized medicine. The Netherlands Cancer Institute (NKI) in Amsterdam utilized this information and applied it specifically to breast cancer, creating the Amsterdam 70-gene signature (70-GS). MammaPrint is the commercialized assay that measures the 70-GS.
The NKI hypothesized that breast cancer is a genetic, heterogeneous disease, where gene expression would be different in aggressive breast tumors that develop recurrences following surgery than from those that are less aggressive and do not recur or spread throughout the body. To identify a novel and independent predictor of breast cancer recurrence, DNA microarray technology was used to interrogate all 25,000 genes in untreated tumor samples from women where follow-up categorized them as being disease free or having distant metastases within five years. Supervised classification identified significantly different expression patterns in 70 genes that were strongly predictive of a short interval to distant metastases.
Implications for utility
The paradigm used to develop the 70-GS makes it unique in molecular breast cancer diagnostics because it allowed the tumor biology itself to show the genes most predictive of known patient outcomes. Rather than pre-selecting a few genes based on literature and known information at a given time, supervised learning from the entire expressed genome gives it farsighted utility as the knowledge of cancer biology evolves. Furthermore, development using untreated tumors allows physicians to know their patient's risk of recurrence, without any treatment bias or assumptions, before making a patient's treatment plan.
Clinical Utility
Molecular diagnostics are used in combination with traditional clinicopathologic factors to decide on a treatment plan. MammaPrint provides a binary result, either high risk or low risk. Patients with a low risk result are unlikely to develop distant metastases and are therefore unlikely to benefit from chemotherapy. Since many breast cancers are considered genomically low-risk independent from clinicopathology, a significant number of patients can be saved from overtreatment with chemotherapy.
Guideline Inclusion
MammaPrint is included as standard of care with the highest medical level of evidence in the following guidelines
Dutch Institute CBO Guidelines for treatment of primary breast cancer
St. Gallen's International Oncology Guidelines for the treatment of early stage breast cancer
German Gynecological Oncology Group (AGO) guidelines for breast cancer management
European Group on Tumour Markers (EGTM)
Ordering indications
In February 2007, the U.S. Food and Drug Administration (FDA) cleared the MammaPrint test for use in the U.S. for lymph node negative breast cancer patients of all ages, ER negative or ER positive, with tumors of less than 5 cm. MammaPrint can be considered as a part of standard of care disease management for early stage breast cancer and has significant insurance coverage in the US, including coverage through Medicare and Medicaid. The American Medical Association has granted a Category 1, MAAA Current Procedural Terminology (CPT) code for MammaPrint.
Indications for ordering MammaPrint include:
USA-
Breast Cancer Stage 1 or Stage 2
Invasive carcinoma (infiltrating carcinoma)
Tumor size <5.0 cm
Lymph node negative
Estrogen receptor positive (ER+) or Estrogen receptor negative (ER-)
Women of all ages
Samples from the United States and North America are processed and run in CLIA certified lab in Irvine, CA.
International-
Breast Cancer Stage 1 or Stage 2
Invasive carcinoma (infiltrating carcinoma)
Tumor size <5.0 cm
Lymph node status: negative or positive (up to 3 nodes)
ER+ or ER-
Samples from outside North America are processed and run in Amsterdam, Netherlands.
Pakistan-
Mammaprint is now exclusively available in Pakistan through Precision Diagnostic Laboratory
Tissue sampling technique
Tumor samples may be submitted as core needle biopsies or surgical specimen. MammaPrint is FDA cleared to accept fresh, frozen, and formalin fixed paraffin embedded (FFPE) specimen types. There are two specimen types that can be submitted:
Formalin-fixed paraffin-embedded tissue block or 10 unstained slides with a 5 micron section on each slide. Quality measures require invasive tumor cellularity of ≥30%.
or
Fresh specimens are currently accepted for research purposes. Samples must be at least 3x3mm (tic-tac size) preserved in RNARetain®. Maximum side dimension should not exceed 5 mm to allow adequate penetration of RNARetain. Invasive tumor cellularity of ≥30% is required.
Cost and Cost-Effectiveness
The cost of the assay in the U.S. is $4,200. In Europe, the test costs EUR 2675.
Several studies show that the use of the MammaPrint is cost-effective for patients in the United States, Europe, Canada and Japan by providing additional information to help doctors tailor treatment to the individual patient.
MammaPrint provides definitive results and does not have an intermediate category, making it more cost-effective than other breast cancer risk assays available.
Key Clinical trials
MammaPrint is the only commercially available breast cancer molecular diagnostic assay to achieve level 1A evidence. Other extensive clinical trials and research collaborations have produced numerous retrospective and prospective validation studies over the past decade which have enabled the successful commercialization of genomic microarray assays, such as the FDA-cleared 70-gene MammaPrint profile. Large, multi-institutional clinical trials, such as MINDACT and ISPY-2, are assessing MammaPrint.
MINDACT
The MINDACT trial provides the highest medical level of evidence, level 1A, for the use of MammaPrint in early stage breast cancer. The MINDACT (Microarray In Node negative and 1-3 positive lymph node Disease may Avoid Chemotherapy) clinical trial is a multi-center, prospective, phase III randomized study comparing the MammaPrint 70-gene expression signature with a common clinical-pathological prognostic tool (Adjuvant! Online) in selecting patients with negative or 1-3 positive nodes for adjuvant chemotherapy in breast cancer.
Publication in the New England Journal of Medicine showed 6,693 breast cancer patients enrolled from 112 participating institutions in 9 European Countries.
In the MINDACT trial, women with breast cancer who are assessed as “High Risk” by both MammaPrint and clinical-pathologic guidelines are advised to have chemotherapy whereas for women with “Low Risk” concordance, hormonal therapy alone is recommended. However, discordant cases are randomized to receive either chemotherapy or hormonal therapy based on clinical-pathological risk assessment or MammaPrint and the patients are followed. The results of MINDACT validate MammaPrint as an important prognostic and predictive tool in cancer treatment.
Primary findings of the MINDACT trial are:
46% of patients identified as high risk for recurrence according to clinical-pathological factors as described in the publication, and who therefore would be usual candidates for adjuvant chemotherapy, were reclassified as Low Risk by MammaPrint; MINDACT shows that these patients could safely forgo chemotherapy.
MammaPrint can change clinical practice by providing critical prognostic information to aid in assessing patients’ risk for distant metastasis, potentially sparing over one hundred thousand women with early-stage breast cancer worldwide each year from unnecessary toxicities and side effects of chemotherapy, and creating considerable cost savings.
As demonstrated in the MINDACT trial, MammaPrint is now the only FDA-cleared breast cancer prognostic test with the highest level of evidence (1A) for its clinical utility to aid in correctly identifying Low Risk patients.
PROMIS
Prospective Registry Of MammaPrint in breast cancer patients with an Intermediate recurrence Score (PROMIS). This will be a prospective observational, case-only, study of MammaPrint in patients with an Oncotype DX intermediate score (18-30). The clinical data is to be entered online. There will be two Case Report Forms (CRF). The first CRF must be completed before receiving the MammaPrint result. This CRF will capture baseline patient characteristics, pathology information, Oncotype DX score and the recommended treatment plan without knowing the MammaPrint result. The second CRF will be completed within 4 weeks after receiving the MammaPrint result and will capture the recommended treatment based on MammaPrint. It is expected that approximately 20-30 institutions in the US will participate. Around 300 patients will be enrolled in 2 years.
This study has the following objectives:
Describe the frequency of chemotherapy + endocrine versus endocrine alone decisions in Oncotype DX intermediate score patients
Assess the impact of MammaPrint on chemotherapy + endocrine versus endocrine alone treatment decisions
Assess the distribution of MammaPrint Low and High Risk in patients with an intermediate recurrence score
Assess concordance of TargetPrint ER, PR and Her2 results with Oncotype DX ER, PR and Her2 and with locally assessed IHC/FISH ER, PR and Her2
Compare clinical subtype based on IHC/FISH ER, PR, Her2 and Ki-67 (if available) with BluePrint molecular subtype
I-SPY I and I-SPY II
(CALGB 150007/150012 & ACRIN 6657)
Agendia's MammaPrint signature and its microarray technology are integral components of biomarker analysis and molecular prediction in the landmark National Cancer Institute supported I-SPY I and II I-SPY II breast cancer clinical trials which focus on the prediction of therapeutic response in the neoadjuvant setting. The utilization of MammaPrint and Agendia's whole-genome, microarray platform are anticipated to assist in rapid, focused development of oncologic therapies paired with biomarkers.
Key Objectives of I-SPY breast cancer trials for which the MammaPrint whole-genome microarray is utilized:
I-SPY I evaluated biomarkers and imaging for predicting response to standard neoadjuvant chemotherapy
I-SPY II will evaluate Phase 2 drugs in combination with standard chemotherapy in a neoadjuvant setting
I-SPY II will use biomarkers to stratify patients based on their predicted likelihood of response to treatment
MINT
Multi Institutional Neo Adjuvant Therapy Mammaprint Project (MINT). Patients with locally advanced breast cancer (LABC) are often treated with neoadjuvant chemotherapy to shrink the tumor before definitive surgery is performed. This allows oncologists to measure a patient's response to a given chemotherapy regimen in vivo. Achievement of a complete pathologic response (pCR) to neoadjuvant chemotherapy allows for a better prediction of the prospect for a favorable outcome.
Genomics assays that measure specific gene expression patterns in a patient's primary tumor have become important prognostic tools for breast cancer patients. This study is designed to test the ability of MammaPrint® in combination with TargetPrint®, BluePrint®, and TheraPrint®, as well as traditional pathologic and clinical prognostic factors, to predict responsiveness to neo-adjuvant chemotherapy in patients with LABC.
This study has the following objectives:
To determine the predictive power of chemosensitivity of the combination of MammaPrint and BluePrint as measured by pCR.
To compare TargetPrint single gene read out of ER, PR and HER2 with local and centralized IHC and/or CISH/FISH assessment of ER, PR and HER2.
To identify possible correlations between the TheraPrint Research Gene Panel outcomes and chemoresponsiveness.
To identify and/or validate predictive gene expression profiles of clinical response/resistance to chemotherapy.
To compare the three BluePrint molecular subtype categories with IHC-based subtype classification.
NBRST
Prospective neo-adjuvant REGISTRY trial linking MammaPrint, Subtyping and treatment response: Neoadjuvant Breast Registry - Symphony™ Trial (NBRST) (pronounced “in breast”.) This is a prospective observational, case-only, study linking MammaPrint, BluePrint, TargetPrint, TheraPrint and possible additional profiles of interest to treatment response, Recurrence Free Survival (RFS) and Distant Metastases Free Survival (DMFS). Only patients who receive neo-adjuvant therapy can participate. For this project, approximately 20-30 institutions in the US will be invited to contribute clinical patient data from enrolled patients after a MammaPrint, TargetPrint, BluePrint and TheraPrint test has been successfully performed and the patient has started neo-adjuvant therapy. Treatment is at the discretion of the physician, adhering to NCCN approved regimens or a recognized alternative.
The clinical data is to be entered online at 4 time points, amounting to four Case Report Forms (CRFs). Data will be collected on an ongoing basis; the first CRF must be completed within 6 weeks after the MammaPrint, BluePrint, TargetPrint, and TheraPrint result was provided. The second CRF should be completed within 4 weeks after definitive surgery. CRF 3 and CRF 4 will be completed 2-3 and 5 years after surgery, respectively. It is expected that around 500 patients will be enrolled over 4 years.
This registry study has the following objectives:
Measure chemosensitivity (as defined by pCR) or endocrine sensitivity (as defined by decrease in longest tumor diameter or RCB1) in the molecular subgroups as determined by combining MammaPrint and BluePrint results.
Correlate chemosensitivity (as defined by pCR) to TheraPrint Therapy Gene Assay results.
Compare local IHC and FISH results (if available) with TargetPrint results.
Compare the three BluePrint molecular subgroups with IHC-based subtype classification.
Document impact of MammaPrint, TargetPrint and BluePrint result on treatment decision.
Assess the 2-3 and 5 years DMFS and RFS for the different molecular subgroups.
Measure chemosensitivity or endocrine sensitivity correlation with novel expression profiles.
See also
Breast cancer classification
Personalized Medicine
Personal genomics
Cancer genomics (Oncogenomics)
External links
Agendia
KnowYourBreastCancer.com
References
Microarrays | MammaPrint | [
"Chemistry",
"Materials_science",
"Biology"
] | 3,439 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques"
] |
3,209,246 | https://en.wikipedia.org/wiki/Abraham%E2%80%93Lorentz%20force | In the physics of electromagnetism, the Abraham–Lorentz force (also known as the Lorentz–Abraham force) is the reaction force on an accelerating charged particle caused by the particle emitting electromagnetic radiation by self-interaction. It is also called the radiation reaction force, the radiation damping force, or the self-force. It is named after the physicists Max Abraham and Hendrik Lorentz.
The formula, although predating the theory of special relativity, was initially calculated for non-relativistic velocity approximations; it was extended to arbitrary velocities by Max Abraham and was shown to be physically consistent by George Adolphus Schott. The non-relativistic form is called the Lorentz self-force, while the relativistic version is called the Lorentz–Dirac force; collectively they are known as the Abraham–Lorentz–Dirac force. The equations are in the domain of classical physics, not quantum physics, and therefore may not be valid at distances of roughly the Compton wavelength or below. There are, however, two analogs of the formula that are both fully quantum and relativistic: one is called the "Abraham–Lorentz–Dirac–Langevin equation", the other is the self-force on a moving mirror.
The force is proportional to the square of the object's charge, multiplied by the jerk that it is experiencing. (Jerk is the rate of change of acceleration.) The force points in the direction of the jerk. For example, in a cyclotron, where the jerk points opposite to the velocity, the radiation reaction is directed opposite to the velocity of the particle, providing a braking action. The Abraham–Lorentz force is the source of the radiation resistance of a radio antenna radiating radio waves.
There are pathological solutions of the Abraham–Lorentz–Dirac equation in which a particle accelerates in advance of the application of a force, so-called pre-acceleration solutions. Since this would represent an effect occurring before its cause (retrocausality), some theories have speculated that the equation allows signals to travel backward in time, thus challenging the physical principle of causality. One resolution of this problem was discussed by Arthur D. Yaghjian and was further discussed by Fritz Rohrlich and Rodrigo Medina. Furthermore, some authors argue that a radiation reaction force is unnecessary, introducing a corresponding stress-energy tensor that naturally conserves energy and momentum in Minkowski space and other suitable spacetimes.
Definition and description
The Lorentz self-force, derived for the non-relativistic velocity approximation $v \ll c$, is given in SI units by:
$$\mathbf{F}_\text{rad} = \frac{\mu_0 q^2}{6 \pi c}\,\dot{\mathbf{a}} = \frac{q^2}{6 \pi \varepsilon_0 c^3}\,\dot{\mathbf{a}}$$
or in Gaussian units by
$$\mathbf{F}_\text{rad} = \frac{2}{3} \frac{q^2}{c^3}\,\dot{\mathbf{a}}$$
where $\mathbf{F}_\text{rad}$ is the force, $\dot{\mathbf{a}}$ is the derivative of acceleration (the third derivative of displacement), also called jerk, μ0 is the magnetic constant, ε0 is the electric constant, c is the speed of light in free space, and q is the electric charge of the particle.
Physically, an accelerating charge emits radiation (according to the Larmor formula), which carries momentum away from the charge. Since momentum is conserved, the charge is pushed in the direction opposite the direction of the emitted radiation. In fact the formula above for radiation force can be derived from the Larmor formula, as shown below.
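A quick numerical check of the non-relativistic expression reconstructed above, written as F = mτ(da/dt) with the characteristic time τ = μ0q²/(6πmc); the constants are standard values and the jerk is an arbitrary example:

```python
import math

# Characteristic time and radiation-reaction force for an electron.
mu0 = 4e-7 * math.pi       # magnetic constant, N/A^2
c = 2.99792458e8           # speed of light, m/s
q = 1.602176634e-19        # elementary charge, C
m_e = 9.1093837015e-31     # electron mass, kg

tau = mu0 * q**2 / (6 * math.pi * m_e * c)
print(tau)                 # -> ~6.3e-24 s, the electron's characteristic time

jerk = 1.0e20              # m/s^3, an arbitrary example value
F_rad = m_e * tau * jerk   # radiation reaction force for that jerk
print(F_rad)               # N
```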
The Abraham–Lorentz force generalizes the Lorentz self-force to arbitrary velocities; the generalized expression involves the Lorentz factor $\gamma$ associated with $\mathbf{v}$, the velocity of the particle. The formula is consistent with special relativity and reduces to Lorentz's self-force expression in the low-velocity limit.
The covariant form of the radiation reaction, deduced by Dirac without assuming any particular shape for the elementary charge, is known as the Lorentz–Dirac (or Abraham–Lorentz–Dirac) force.
History
The first calculation of electromagnetic radiation energy due to current was given by George Francis FitzGerald in 1883, in which radiation resistance appears. However, dipole antenna experiments by Heinrich Hertz made a bigger impact and gathered commentary by Poincaré on the amortissement or damping of the oscillator due to the emission of radiation. Qualitative discussions of the damping effects of radiation emitted by accelerating charges were sparked by Henri Poincaré in 1891. In 1892, Hendrik Lorentz derived the self-interaction force of charges for low velocities but did not relate it to radiation losses. The suggestion of a relationship between radiation energy loss and self-force was first made by Max Planck. Planck's concept of the damping force, which did not assume any particular shape for elementary charged particles, was applied by Max Abraham to find the radiation resistance of an antenna in 1898, which remains the most practical application of the phenomenon.
In the early 1900s, Abraham formulated a generalization of the Lorentz self-force to arbitrary velocities, the physical consistency of which was later shown by George Adolphus Schott. Schott was able to derive the Abraham equation and attributed "acceleration energy" to be the source of energy of the electromagnetic radiation. Originally submitted as an essay for the 1908 Adams Prize, he won the competition and had the essay published as a book in 1912. The relationship between self-force and radiation reaction became well-established at this point. Wolfgang Pauli first obtained the covariant form of the radiation reaction and in 1938, Paul Dirac found that the equation of motion of charged particles, without assuming the shape of the particle, contained Abraham's formula within reasonable approximations. The equations derived by Dirac are considered exact within the limits of classical theory.
Background
In classical electrodynamics, problems are typically divided into two classes:
Problems in which the charge and current sources of fields are specified and the fields are calculated, and
The reverse situation, problems in which the fields are specified and the motion of particles are calculated.
In some fields of physics, such as plasma physics and the calculation of transport coefficients (conductivity, diffusivity, etc.), the fields generated by the sources and the motion of the sources are solved self-consistently. In such cases, however, the motion of a selected source is calculated in response to fields generated by all other sources. Rarely is the motion of a particle (source) due to the fields generated by that same particle calculated. The reason for this is twofold:
Neglect of the "self-fields" usually leads to answers that are accurate enough for many applications, and
Inclusion of self-fields leads to problems in physics such as renormalization, some of which are still unsolved, that relate to the very nature of matter and energy.
These conceptual problems created by self-fields are highlighted in a standard graduate text [Jackson]:
The difficulties presented by this problem touch one of the most fundamental aspects of physics, the nature of the elementary particle. Although partial solutions, workable within limited areas, can be given, the basic problem remains unsolved. One might hope that the transition from classical to quantum-mechanical treatments would remove the difficulties. While there is still hope that this may eventually occur, the present quantum-mechanical discussions are beset with even more elaborate troubles than the classical ones. It is one of the triumphs of comparatively recent years (~ 1948–1950) that the concepts of Lorentz covariance and gauge invariance were exploited sufficiently cleverly to circumvent these difficulties in quantum electrodynamics and so allow the calculation of very small radiative effects to extremely high precision, in full agreement with experiment. From a fundamental point of view, however, the difficulties remain.
The Abraham–Lorentz force is the result of the most fundamental calculation of the effect of self-generated fields. It arises from the observation that accelerating charges emit radiation. The Abraham–Lorentz force is the average force that an accelerating charged particle feels in the recoil from the emission of radiation. The introduction of quantum effects leads one to quantum electrodynamics. The self-fields in quantum electrodynamics generate a finite number of infinities in the calculations that can be removed by the process of renormalization. This has led to a theory that is able to make the most accurate predictions that humans have made to date. (See precision tests of QED.) The renormalization process fails, however, when applied to the gravitational force. The infinities in that case are infinite in number, which causes the failure of renormalization. Therefore, general relativity has an unsolved self-field problem. String theory and loop quantum gravity are current attempts to resolve this problem, formally called the problem of radiation reaction or the problem of self-force.
Derivation
The simplest derivation of the self-force is found for periodic motion from the Larmor formula for the power radiated by a point charge moving with a velocity much lower than the speed of light:
P = q²a² / (6πε0c³).
If we assume the motion of a charged particle is periodic, then the average work done on the particle by the Abraham–Lorentz force is the negative of the Larmor power integrated over one period:
The above expression can be integrated by parts. If we assume periodic motion, the boundary term in the integration by parts disappears:
Clearly, we can identify the Lorentz self-force equation, applicable to slow-moving particles, as:
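The integration-by-parts step can be checked symbolically for a concrete periodic trajectory. The sketch below (an illustration, not the general proof) uses SymPy and the assumed trajectory x(t) = A sin(ωt) to confirm that the period average of ȧ·v equals minus the period average of a², which is the identity that lets the Larmor power be rewritten as the work done by the self-force.

```python
# Sketch: verify <jerk * v> = -<a^2> over one period for x(t) = A*sin(w*t).
import sympy as sp

t, A, w = sp.symbols('t A w', positive=True)
x = A * sp.sin(w * t)
v = sp.diff(x, t)
a = sp.diff(x, t, 2)
jerk = sp.diff(x, t, 3)

period = 2 * sp.pi / w
avg_jerk_v = sp.integrate(jerk * v, (t, 0, period)) / period
avg_a_sq = sp.integrate(a**2, (t, 0, period)) / period

print(sp.simplify(avg_jerk_v + avg_a_sq))   # prints 0
```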
A more rigorous derivation, which does not require periodic motion, was found using an effective field theory formulation.
A generalized equation for arbitrary velocities was formulated by Max Abraham and is found to be consistent with special relativity. An alternative derivation, making use of the theory of relativity, which was well established by that time, was found by Dirac without any assumption about the shape of the charged particle.
Signals from the future
Below is an illustration of how a classical analysis can lead to surprising results. The classical theory can be seen to challenge standard pictures of causality, thus signaling either a breakdown or a need for extension of the theory. In this case the extension is to quantum mechanics and its relativistic counterpart quantum field theory. See the quote from Rohrlich in the introduction concerning "the importance of obeying the validity limits of a physical theory".
For a particle in an external force , we have
where
This equation can be integrated once to obtain
The integral extends from the present to infinitely far in the future. Thus future values of the force affect the acceleration of the particle in the present. The future values are weighted by the factor
which falls off rapidly for times greater than in the future. Therefore, signals from an interval approximately into the future affect the acceleration in the present. For an electron, this time is approximately 10⁻²⁴ sec, which is the time it takes for a light wave to travel across the "size" of an electron, the classical electron radius. One way to define this "size" is as follows: it is (up to some constant factor) the distance such that two electrons placed at rest at a distance apart and allowed to fly apart, would have sufficient energy to reach half the speed of light. In other words, it forms the length (or time, or energy) scale where something as light as an electron would be fully relativistic. It is worth noting that this expression does not involve the Planck constant at all, so although it indicates something is wrong at this length scale, it does not directly relate to quantum uncertainty, or to the frequency–energy relation of a photon. Although it is common in quantum mechanics to treat ħ → 0 as a "classical limit", some speculate that even the classical theory needs renormalization, no matter how the Planck constant is fixed.
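For orientation, the characteristic weighting time and the classical electron radius mentioned above can be computed directly from standard constants. The sketch below assumes the conventional definitions t0 = q²/(6πε0mc³) and re = q²/(4πε0mc²); the factor 2/3 relating c·t0 to re follows directly from these definitions.

```python
# Sketch: characteristic pre-acceleration time and classical electron radius.
import math

EPS0, C = 8.8541878128e-12, 2.99792458e8
Q_E, M_E = 1.602176634e-19, 9.1093837015e-31

t0 = Q_E**2 / (6 * math.pi * EPS0 * M_E * C**3)   # weighting time discussed above
r_e = Q_E**2 / (4 * math.pi * EPS0 * M_E * C**2)  # classical electron radius

print(f"t0  ~ {t0:.2e} s")                # ~6.3e-24 s
print(f"r_e ~ {r_e:.2e} m")               # ~2.8e-15 m
print(f"c*t0 / r_e = {C * t0 / r_e:.3f}") # 2/3
```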
Abraham–Lorentz–Dirac force
To find the relativistic generalization, Dirac renormalized the mass in the equation of motion with the Abraham–Lorentz force in 1938. This renormalized equation of motion is called the Abraham–Lorentz–Dirac equation of motion.
Definition
The expression derived by Dirac is given in signature (− + + +) by
With Liénard's relativistic generalization of Larmor's formula in the co-moving frame,
one can show this to be a valid force by manipulating the time average equation for power:
Paradoxes
Pre-acceleration
Similar to the non-relativistic case, there are pathological solutions using the Abraham–Lorentz–Dirac equation that anticipate a change in the external force and according to which the particle accelerates in advance of the application of a force, so-called preacceleration solutions. One resolution of this problem was discussed by Yaghjian, and is further discussed by Rohrlich and Medina.
Runaway solutions
Runaway solutions are solutions of the ALD equation in which the acceleration of the particle grows exponentially over time, even in the absence of an applied external force. They are considered unphysical.
Hyperbolic motion
The radiation-reaction term of the ALD equation vanishes for constant proper acceleration, i.e. hyperbolic motion in a Minkowski spacetime diagram. Whether electromagnetic radiation exists under such conditions was a matter of debate until Fritz Rohrlich resolved the problem by showing that hyperbolically moving charges do emit radiation. The issue has subsequently been discussed in the context of energy conservation and the equivalence principle, and is classically resolved by considering the "acceleration energy" or Schott energy.
Self-interactions
The antidamping mechanism resulting from the Abraham–Lorentz force can, however, be compensated by other nonlinear terms, which are frequently disregarded in expansions of the retarded Liénard–Wiechert potential.
Landau–Lifshitz radiation damping force
The Abraham–Lorentz–Dirac force leads to some pathological solutions. In order to avoid this, Lev Landau and Evgeny Lifshitz came up with the following formula for the radiation damping force, which is valid when the radiation damping force is small compared with the Lorentz force in some frame of reference (assuming it exists),
so that the equation of motion of the charge in an external field can be written as
Here u is the four-velocity of the particle, γ is the Lorentz factor and v is the three-dimensional velocity vector. The three-dimensional Landau–Lifshitz radiation damping force can be written as
where is the total derivative.
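In the non-relativistic limit the Landau–Lifshitz prescription amounts to a reduction of order: the jerk in the Abraham–Lorentz force is replaced by the time derivative of the external force per unit mass, which removes the runaway and pre-acceleration solutions. The sketch below applies this substitution to a charged harmonic oscillator, where it yields an ordinary velocity-proportional damping; the oscillator frequency is an arbitrary illustrative value.

```python
# Sketch: non-relativistic Landau–Lifshitz (reduction-of-order) damping for a charged
# harmonic oscillator.  With F_ext = -m*w^2*x, replacing the jerk by d(F_ext/m)/dt
# turns the self-force into a drag, giving  x'' + gamma*x' + w^2*x = 0 with gamma = tau*w^2.
import math

EPS0, C = 8.8541878128e-12, 2.99792458e8
Q_E, M_E = 1.602176634e-19, 9.1093837015e-31

tau = Q_E**2 / (6 * math.pi * EPS0 * M_E * C**3)  # characteristic time, ~6.3e-24 s
w = 1.0e16                                        # oscillator frequency, rad/s (illustrative)

gamma_damp = tau * w**2                           # velocity-damping coefficient
print(f"damping coefficient ~ {gamma_damp:.2e} 1/s")
print(f"quality factor Q ~ {w / gamma_damp:.2e}")  # >> 1: damping is a weak perturbation
```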
Experimental observations
While the Abraham–Lorentz force is largely neglected for many experimental considerations, it gains importance for plasmonic excitations in larger nanoparticles due to large local field enhancements. Radiation damping acts as a limiting factor for the plasmonic excitations in surface-enhanced Raman scattering. The damping force was shown to broaden surface plasmon resonances in gold nanoparticles, nanorods and clusters.
The effects of radiation damping on nuclear magnetic resonance were also observed by Nicolaas Bloembergen and Robert Pound, who reported its dominance over spin–spin and spin–lattice relaxation mechanisms for certain cases.
The Abraham–Lorentz force has been observed in the semiclassical regime in experiments which involve the scattering of a relativistic beam of electrons with a high intensity laser. In the experiments, a supersonic jet of helium gas is intercepted by a high-intensity (10¹⁸–10²⁰ W/cm²) laser. The laser ionizes the helium gas and accelerates the electrons via what is known as the “laser-wakefield” effect. A second high-intensity laser beam is then propagated counter to this accelerated electron beam. In a small number of cases, inverse-Compton scattering occurs between the photons and the electron beam, and the spectra of the scattered electrons and photons are measured. The photon spectra are then compared with spectra calculated from Monte Carlo simulations that use either the QED or classical LL equations of motion.
Collective effects
The effects of radiation reaction are often considered within the framework of single-particle dynamics. However, interesting phenomena arise when a collection of charged particles is subjected to strong electromagnetic fields, such as in a plasma. In such scenarios, the collective behavior of the plasma can significantly modify its properties due to radiation reaction effects.
Theoretical studies have shown that in environments with strong magnetic fields, like those found around pulsars and magnetars, radiation reaction cooling can alter the collective dynamics of the plasma. This modification can lead to instabilities within the plasma. Specifically, in the high magnetic fields typical of these astrophysical objects, the momentum distribution of particles is bunched and becomes anisotropic due to radiation reaction forces, potentially driving plasma instabilities and affecting overall plasma behavior. Among these instabilities, the firehose instability can arise due to the anisotropic pressure, and electron cyclotron maser due to population inversion in the rings.
See also
Lorentz force
Cyclotron radiation
Synchrotron radiation
Electromagnetic mass
Radiation resistance
Radiation damping
Wheeler–Feynman absorber theory
Magnetic radiation reaction force
References
Further reading
See sections 11.2.2 and 11.2.3
Donald H. Menzel (1960) Fundamental Formulas of Physics, Dover Publications Inc., , vol. 1, p. 345.
Stephen Parrott (1987) Relativistic Electrodynamics and Differential Geometry, § 4.3 Radiation reaction and the Lorentz–Dirac equation, pages 136–45, and § 5.5 Peculiar solutions of the Lorentz–Dirac equation, pp. 195–204, Springer-Verlag .
External links
MathPages – Does A Uniformly Accelerating Charge Radiate?
Feynman: The Development of the Space-Time View of Quantum Electrodynamics
EC. del Río: Radiation of an accelerated charge
Electrodynamics
Electromagnetic radiation
Radiation
Hendrik Lorentz | Abraham–Lorentz force | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,569 | [
"Transport phenomena",
"Physical phenomena",
"Electromagnetic radiation",
"Waves",
"Radiation",
"Electrodynamics",
"Dynamical systems"
] |
379,507 | https://en.wikipedia.org/wiki/Synchrotron%20light%20source | A synchrotron light source is a source of electromagnetic radiation (EM) usually produced by a storage ring, for scientific and technical purposes. First observed in synchrotrons, synchrotron light is now produced by storage rings and other specialized particle accelerators, typically accelerating electrons. Once the high-energy electron beam has been generated, it is directed into auxiliary components such as bending magnets and insertion devices (undulators or wigglers) in storage rings and free electron lasers.
These supply the strong magnetic fields perpendicular to the beam that are needed to stimulate the high energy electrons to emit photons.
The major applications of synchrotron light are in condensed matter physics, materials science, biology and medicine. A large fraction of experiments using synchrotron light involve probing the structure of matter from the sub-nanometer level of electronic structure to the micrometer and millimeter levels important in medical imaging. An example of a practical industrial application is the manufacturing of microstructures by the LIGA process.
The synchrotron is one of the most expensive kinds of light source known, but it is practically the only viable luminous source of wide-band radiation in the far-infrared wavelength range for some applications, such as far-infrared absorption spectrometry.
Spectral brightness
The primary figure of merit used to compare different sources of synchrotron radiation has been referred to as the "brightness", the "brilliance", and the "spectral brightness", with the latter term being recommended as the best choice by the Working Group on Synchrotron Nomenclature. Regardless of the name chosen, the term is a measure of the total flux of photons in a given six-dimensional phase space per unit bandwidth (BW).
The spectral brightness is given by
where is the number of photons per second in the beam, and are the root mean square values for the size of the beam in the axes perpendicular to the beam direction, and are the RMS values for the beam solid angle in the x and y dimensions, and is the relative bandwidth, or spread in beam frequency around the central frequency. The customary value for bandwidth is 0.1%.
Spectral brightness has units of time−1⋅distance−2⋅angle−2⋅(% bandwidth)−1.
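As a worked example (with an assumed convention), spectral brightness can be computed from the photon flux and the RMS sizes and divergences by dividing by the transverse phase-space area; the factor 4π² below corresponds to one common Gaussian-beam convention, and the numerical inputs are merely illustrative.

```python
# Sketch: spectral brightness from flux and RMS beam parameters.
# Convention assumed here: B = flux / (4*pi^2 * sx * sy * sxp * syp), with sizes in mm
# and divergences in mrad, giving photons s^-1 mm^-2 mrad^-2 per 0.1% bandwidth.
import math

def spectral_brightness(flux_01bw, sx_mm, sy_mm, sxp_mrad, syp_mrad):
    return flux_01bw / (4 * math.pi**2 * sx_mm * sy_mm * sxp_mrad * syp_mrad)

# Illustrative third-generation-like numbers (hypothetical):
b = spectral_brightness(1e15, 0.05, 0.01, 0.02, 0.005)
print(f"{b:.2e} photons/s/mm^2/mrad^2/0.1%BW")
```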
Properties of sources
Especially when artificially produced, synchrotron radiation is notable for its:
High brilliance, many orders of magnitude more than with X-rays produced in conventional X-ray tubes: 3rd-generation sources typically have a brilliance larger than 10¹⁸ photons·s⁻¹·mm⁻²·mrad⁻²/(0.1%BW), where 0.1%BW denotes a bandwidth 10⁻³ω centered around the frequency ω.
High level of polarization (linear, elliptical or circular).
High collimation, i.e. small angular divergence of the beam.
Low emittance, i.e. the product of source cross-section and solid angle of emission is small.
Wide tunability in energy/wavelength by monochromatization (sub-electronvolt up to the megaelectronvolt range).
Pulsed light emission (pulse durations at or below one nanosecond, or a billionth of a second).
Synchrotron radiation from accelerators
Synchrotron radiation may occur in accelerators either as a nuisance, causing undesired energy loss in particle physics contexts, or as a deliberately produced radiation source for numerous laboratory applications. Electrons are accelerated to high speeds in several stages to achieve a final energy that is typically in the gigaelectronvolt range. The electrons are forced to travel in a closed path by strong magnetic fields. This is similar to a radio antenna, but with the difference that the relativistic speed changes the observed frequency due to the Doppler effect by a factor γ. Relativistic Lorentz contraction bumps the frequency by another factor of γ, thus multiplying the gigahertz frequency of the resonant cavity that accelerates the electrons into the X-ray range. Another dramatic effect of relativity is that the radiation pattern is distorted from the isotropic dipole pattern expected from non-relativistic theory into an extremely forward-pointing cone of radiation. This makes synchrotron radiation sources the most brilliant known sources of X-rays. The planar acceleration geometry makes the radiation linearly polarized when observed in the orbital plane, and circularly polarized when observed at a small angle to that plane.
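To give a sense of scale (with an assumed, purely illustrative beam energy), the Lorentz factor of a stored electron beam and the resulting ~1/γ half-angle of the forward radiation cone can be estimated as follows.

```python
# Sketch: Lorentz factor of a stored electron beam and the ~1/gamma opening half-angle
# of the forward-pointing radiation cone.  The beam energy is illustrative.
E_BEAM_GEV = 3.0          # assumed stored-beam energy
M_E_GEV = 0.000511        # electron rest energy, GeV

gamma = E_BEAM_GEV / M_E_GEV
print(f"gamma ~ {gamma:.0f}")
print(f"cone half-angle ~ 1/gamma = {1e3 / gamma:.2f} mrad")
```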
The advantages of using synchrotron radiation for spectroscopy and diffraction have been realized by an ever-growing scientific community, beginning in the 1960s and 1970s. In the beginning, accelerators were built for particle physics, and synchrotron radiation was used in "parasitic mode" when bending magnet radiation had to be extracted by drilling extra holes in the beam pipes. The first storage ring commissioned as a synchrotron light source was Tantalus, at the Synchrotron Radiation Center, first operational in 1968. As accelerator synchrotron radiation became more intense and its applications more promising, devices that enhanced the intensity of synchrotron radiation were built into existing rings. Third-generation synchrotron radiation sources were conceived and optimized from the outset to produce brilliant X-rays. Fourth-generation sources that will include different concepts for producing ultrabrilliant, pulsed time-structured X-rays for extremely demanding and also probably yet-to-be-conceived experiments are under consideration.
Bending electromagnets in accelerators were first used to generate this radiation, but to generate stronger radiation, other specialized devices – insertion devices – are sometimes employed. Current (third-generation) synchrotron radiation sources are typically reliant upon these insertion devices, where straight sections of the storage ring incorporate periodic magnetic structures (comprising many magnets in a pattern of alternating N and S poles – see diagram above) which force the electrons into a sinusoidal or helical path. Thus, instead of a single bend, many tens or hundreds of "wiggles" at precisely calculated positions add up or multiply the total intensity of the beam.
These devices are called wigglers or undulators. The main difference between an undulator and a wiggler is the intensity of their magnetic field and the amplitude of the deviation from the straight line path of the electrons.
There are openings in the storage ring to let the radiation exit and follow a beam line into the experimenters' vacuum chamber. A great number of such beamlines can emerge from modern third-generation synchrotron radiation sources.
Storage rings
The electrons may be extracted from the accelerator proper and stored in an ultrahigh vacuum auxiliary magnetic storage ring where they may circle a large number of times. The magnets in the ring also need to repeatedly recompress the beam against Coulomb (space charge) forces tending to disrupt the electron bunches. The change of direction is a form of acceleration and thus the electrons emit radiation at GeV energies.
Applications of synchrotron radiation
Synchrotron radiation of an electron beam circulating at high energy in a magnetic field leads to radiative self-polarization of electrons in the beam (Sokolov–Ternov effect). This effect is used for producing highly polarised electron beams for use in various experiments.
Synchrotron radiation sets the beam sizes (determined by the beam emittance) in electron storage rings via the effects of radiation damping and quantum excitation.
Beamlines
At a synchrotron facility, electrons are usually accelerated by a synchrotron, and then injected into a storage ring, in which they circulate, producing synchrotron radiation, but without gaining further energy. The radiation is projected at a tangent to the electron storage ring and captured by beamlines. These beamlines may originate at bending magnets, which mark the corners of the storage ring; or insertion devices, which are located in the straight sections of the storage ring. The spectrum and energy of X-rays differ between the two types. The beamline includes X-ray optical devices which control the bandwidth, photon flux, beam dimensions, focus, and collimation of the rays. The optical devices include slits, attenuators, crystal monochromators, and mirrors. The mirrors may be bent into curves or toroidal shapes to focus the beam. A high photon flux in a small area is the most common requirement of a beamline. The design of the beamline will vary with the application. At the end of the beamline is the experimental end station, where samples are placed in the line of the radiation, and detectors are positioned to measure the resulting diffraction, scattering or secondary radiation.
Experimental techniques and usage
Synchrotron light is an ideal tool for many types of research in materials science, physics, and chemistry and is used by researchers from academic, industrial, and government laboratories. Several methods take advantage of the high intensity, tunable wavelength, collimation, and polarization of synchrotron radiation at beamlines which are designed for specific kinds of experiments. The high intensity and penetrating power of synchrotron X-rays enables experiments to be performed inside sample cells designed for specific environments. Samples may be heated, cooled, or exposed to gas, liquid, or high pressure environments. Experiments which use these environments are called in situ and allow the characterization of atomic- to nano-scale phenomena which are inaccessible to most other characterization tools. In operando measurements are designed to mimic the real working conditions of a material as closely as possible.
Diffraction and scattering
X-ray diffraction (XRD) and scattering experiments are performed at synchrotrons for the structural analysis of crystalline and amorphous materials. These measurements may be performed on powders, single crystals, or thin films. The high resolution and intensity of the synchrotron beam enables the measurement of scattering from dilute phases or the analysis of residual stress. Materials can be studied at high pressure using diamond anvil cells to simulate extreme geologic environments or to create exotic forms of matter.
X-ray crystallography of proteins and other macromolecules (PX or MX) are routinely performed. Synchrotron-based crystallography experiments were integral to solving the structure of the ribosome; this work earned the Nobel Prize in Chemistry in 2009.
The size and shape of nanoparticles are characterized using small angle X-ray scattering (SAXS). Nano-sized features on surfaces are measured with a similar technique, grazing-incidence small angle X-ray scattering (GISAXS). In this and other methods, surface sensitivity is achieved by placing the crystal surface at a small angle relative to the incident beam, which achieves total external reflection and minimizes the X-ray penetration into the material.
The atomic- to nano-scale details of surfaces, interfaces, and thin films can be characterized using techniques such as X-ray reflectivity (XRR) and crystal truncation rod (CTR) analysis. X-ray standing wave (XSW) measurements can also be used to measure the position of atoms at or near surfaces; these measurements require high-resolution optics capable of resolving dynamical diffraction phenomena.
Amorphous materials, including liquids and melts, as well as crystalline materials with local disorder, can be examined using X-ray pair distribution function analysis, which requires high energy X-ray scattering data.
By tuning the beam energy through the absorption edge of a particular element of interest, the scattering from atoms of that element will be modified. These so-called resonant anomalous X-ray scattering methods can help to resolve scattering contributions from specific elements in the sample.
Other scattering techniques include energy dispersive X-ray diffraction, resonant inelastic X-ray scattering, and magnetic scattering.
Spectroscopy
X-ray absorption spectroscopy (XAS) is used to study the coordination structure of atoms in materials and molecules. The synchrotron beam energy is tuned through the absorption edge of an element of interest, and modulations in the absorption are measured. Photoelectron transitions cause modulations near the absorption edge, and analysis of these modulations (called the X-ray absorption near-edge structure (XANES) or near-edge X-ray absorption fine structure (NEXAFS)) reveals information about the chemical state and local symmetry of that element. At incident beam energies which are much higher than the absorption edge, photoelectron scattering causes "ringing" modulations called the extended X-ray absorption fine structure (EXAFS). Fourier transformation of the EXAFS regime yields the bond lengths and coordination numbers of the atoms surrounding the absorbing atom; it is therefore useful for studying liquids and amorphous materials as well as sparse species such as impurities. A related technique, X-ray magnetic circular dichroism (XMCD), uses circularly polarized X-rays to measure the magnetic properties of an element.
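The Fourier-transform step of an EXAFS analysis can be sketched on a synthetic signal. The toy model below assumes a single coordination shell at distance R, so that χ(k) oscillates roughly as sin(2kR); transforming from k-space to r-space then produces a peak near R. Real analyses additionally apply k-weighting, window functions and phase-shift corrections, none of which are included here.

```python
# Sketch: k -> r transform of a synthetic single-shell EXAFS signal chi(k) ~ sin(2kR).
import numpy as np

R = 2.5                                   # assumed bond length, Angstrom
k = np.linspace(3.0, 12.0, 2048)          # photoelectron wavenumber, 1/Angstrom
chi = np.exp(-2 * 0.005 * k**2) * np.sin(2 * k * R)   # Debye-Waller-damped oscillation

r = np.linspace(0.5, 5.0, 500)
magnitude = np.array([abs(np.trapz(chi * np.exp(2j * rv * k), k)) for rv in r])
print(f"peak near r = {r[np.argmax(magnitude)]:.2f} Angstrom (input R = {R})")
```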
X-ray photoelectron spectroscopy (XPS) can be performed at beamlines equipped with a photoelectron analyzer. Traditional XPS is typically limited to probing the top few nanometers of a material under vacuum. However, the high intensity of synchrotron light enables XPS measurements of surfaces at near-ambient pressures of gas. Ambient pressure XPS (AP-XPS) can be used to measure chemical phenomena under simulated catalytic or liquid conditions. Using high-energy photons yields high kinetic energy photoelectrons which have a much longer inelastic mean free path than those generated on a laboratory XPS instrument. The probing depth of synchrotron XPS can therefore be lengthened to several nanometers, allowing the study of buried interfaces. This method is referred to as high-energy X-ray photoemission spectroscopy (HAXPES). Furthermore, the tunable nature of the synchrotron X-ray photon energies presents a wide range of depth sensitivity in the order of 2-50 nm. This allows for probing of samples at greater depths and for non destructive depth-profiling experiments.
Material composition can be quantitatively analyzed using X-ray fluorescence (XRF). XRF detection is also used in several other techniques, such as XAS and XSW, in which it is necessary to measure the change in absorption of a particular element.
Other spectroscopy techniques include angle resolved photoemission spectroscopy (ARPES), soft X-ray emission spectroscopy, and nuclear resonance vibrational spectroscopy, which is related to Mössbauer spectroscopy.
Imaging
Synchrotron X-rays can be used for traditional X-ray imaging, phase-contrast X-ray imaging, and tomography. The Ångström-scale wavelength of X-rays enables imaging well below the diffraction limit of visible light, but practically the smallest resolution so far achieved is about 30 nm. Such nanoprobe sources are used for scanning transmission X-ray microscopy (STXM). Imaging can be combined with spectroscopy such as X-ray fluorescence or X-ray absorption spectroscopy in order to map a sample's chemical composition or oxidation state with sub-micron resolution.
Other imaging techniques include coherent diffraction imaging.
Similar optics can be employed in photolithography; the fabrication of MEMS structures can use a synchrotron beam as part of the LIGA process.
Compact synchrotron light sources
Because of the usefulness of tuneable collimated coherent X-ray radiation, efforts have been made to make smaller more economical sources of the light produced by synchrotrons. The aim is to make such sources available within a research laboratory for cost and convenience reasons; at present, researchers have to travel to a facility to perform experiments. One method of making a compact light source is to use the energy shift from Compton scattering near-visible laser photons from electrons stored at relatively low energies of tens of megaelectronvolts (see for example the Compact Light Source (CLS)). However, a relatively low cross-section of collision can be obtained in this manner, and the repetition rate of the lasers is limited to a few hertz rather than the megahertz repetition rates naturally arising in normal storage ring emission. Another method is to use plasma acceleration to reduce the distance required to accelerate electrons from rest to the energies required for UV or X-ray emission within magnetic devices.
See also
List of synchrotron radiation facilities
List of light sources
References
External links
Elettra Sincrotrone Trieste - Elettra and FERMI lightsources
Imaging ancient insects with synchrotron light source -- BBC
Synchrotron light at IOP
Synchrotron radiation
Synchrotron-related techniques
Particle physics
Light
Light sources
Materials testing
Particle accelerators | Synchrotron light source | [
"Physics",
"Materials_science",
"Engineering"
] | 3,452 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Materials science",
"Waves",
"Materials testing",
"Light",
"Particle physics"
] |
379,619 | https://en.wikipedia.org/wiki/Stone%20duality | In mathematics, there is an ample supply of categorical dualities between certain categories of topological spaces and categories of partially ordered sets. Today, these dualities are usually collected under the label Stone duality, since they form a natural generalization of Stone's representation theorem for Boolean algebras. These concepts are named in honor of Marshall Stone. Stone-type dualities also provide the foundation for pointless topology and are exploited in theoretical computer science for the study of formal semantics.
This article gives pointers to special cases of Stone duality and explains a very general instance thereof in detail.
Overview of Stone-type dualities
Probably the most general duality that is classically referred to as "Stone duality" is the duality between the category Sob of sober spaces with continuous functions and the category SFrm of spatial frames with appropriate frame homomorphisms. The dual category of SFrm is the category of spatial locales denoted by SLoc. The categorical equivalence of Sob and SLoc is the basis for the mathematical area of pointless topology, which is devoted to the study of Loc—the category of all locales, of which SLoc is a full subcategory. The involved constructions are characteristic for this kind of duality, and are detailed below.
Now one can easily obtain a number of other dualities by restricting to certain special classes of sober spaces:
The category CohSp of coherent spaces (and coherent maps) is equivalent to the category CohLoc of coherent (or spectral) locales (and coherent maps), on the assumption of the Boolean prime ideal theorem (in fact, this statement is equivalent to that assumption). The significance of this result stems from the fact that CohLoc in turn is dual to the category DLat01 of bounded distributive lattices. Hence, DLat01 is dual to CohSp—one obtains Stone's representation theorem for distributive lattices.
When restricting further to coherent spaces that are Hausdorff, one obtains the category Stone of so-called Stone spaces. On the side of DLat01, the restriction yields the subcategory Bool of Boolean algebras. Thus one obtains Stone's representation theorem for Boolean algebras.
Stone's representation for distributive lattices can be extended via an equivalence of coherent spaces and Priestley spaces (ordered topological spaces, that are compact and totally order-disconnected). One obtains a representation of distributive lattices via ordered topologies: Priestley's representation theorem for distributive lattices.
Many other Stone-type dualities could be added to these basic dualities.
Duality of sober spaces and spatial locales
The lattice of open sets
The starting point for the theory is the fact that every topological space is characterized by a set of points X and a system Ω(X) of open sets of elements from X, i.e. a subset of the powerset of X. It is known that Ω(X) has certain special properties: it is a complete lattice within which suprema and finite infima are given by set unions and finite set intersections, respectively. Furthermore, it contains both X and the empty set. Since the embedding of Ω(X) into the powerset lattice of X preserves finite infima and arbitrary suprema, Ω(X) inherits the following distributivity law:
x ∧ ⋁S = ⋁{ x ∧ s : s ∈ S }
for every element (open set) x and every subset S of Ω(X). Hence Ω(X) is not an arbitrary complete lattice but a complete Heyting algebra (also called frame or locale – the various names are primarily used to distinguish several categories that have the same class of objects but different morphisms: frame morphisms, locale morphisms and homomorphisms of complete Heyting algebras). Now an obvious question is: To what extent is a topological space characterized by its locale of open sets?
As already hinted at above, one can go even further. The category Top of topological spaces has as morphisms the continuous functions, where a function f is continuous if the inverse image f −1(O) of any open set in the codomain of f is open in the domain of f. Thus any continuous function f from a space X to a space Y defines an inverse mapping f −1 from Ω(Y) to Ω(X). Furthermore, it is easy to check that f −1 (like any inverse image map) preserves finite intersections and arbitrary unions and therefore is a morphism of frames. If we define Ω(f) = f −1 then Ω becomes a contravariant functor from the category Top to the category Frm of frames and frame morphisms. Using the tools of category theory, the task of finding a characterization of topological spaces in terms of their open set lattices is equivalent to finding a functor from Frm to Top which is adjoint to Ω.
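The defining properties of Ω(X) can be checked mechanically on a small finite example. The Python sketch below (an illustration only; the space and its topology are chosen arbitrarily) represents opens as frozensets and verifies closure under arbitrary unions and finite intersections together with the infinite distributive law stated above.

```python
# Sketch: the open-set lattice of a small finite space, checked against the frame laws.
from itertools import chain, combinations

X = frozenset({1, 2, 3})
opens = {frozenset(), frozenset({1}), frozenset({1, 2}), X}   # a topology on X

def join(family):                      # arbitrary suprema are unions
    return frozenset(chain.from_iterable(family))

def families(S):                       # all families of opens
    S = list(S)
    return [list(c) for n in range(len(S) + 1) for c in combinations(S, n)]

assert all(join(F) in opens for F in families(opens))          # closed under unions
assert all(a & b in opens for a in opens for b in opens)       # closed under intersections
assert all(x & join(F) == join([x & s for s in F])             # x ∧ ⋁F = ⋁{x ∧ s}
           for x in opens for F in families(opens))
print("Ω(X) is a frame")
```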
Points of a locale
The goal of this section is to define a functor pt from Frm to Top that in a certain sense "inverts" the operation of Ω by assigning to each locale L a set of points pt(L) (hence the notation pt) with a suitable topology. But how can we recover the set of points just from the locale, though it is not given as a lattice of sets? It is certain that one cannot expect in general that pt can reproduce all of the original elements of a topological space just from its lattice of open sets – for example all sets with the indiscrete topology yield (up to isomorphism) the same locale, such that the information on the specific set is no longer present. However, there is still a reasonable technique for obtaining "points" from a locale, which indeed gives an example of a central construction for Stone-type duality theorems.
Let us first look at the points of a topological space X. One is usually tempted to consider a point of X as an element x of the set X, but there is in fact a more useful description for our current investigation. Any point x gives rise to a continuous function px from the one element topological space 1 (all subsets of which are open) to the space X by defining px(1) = x. Conversely, any function from 1 to X clearly determines one point: the element that it "points" to. Therefore, the set of points of a topological space is equivalently characterized as the set of functions from 1 to X.
When using the functor Ω to pass from Top to Frm, all set-theoretic elements of a space are lost, but – using a fundamental idea of category theory – one can as well work on the function spaces. Indeed, any "point" px: 1 → X in Top is mapped to a morphism Ω(px): Ω(X) → Ω(1). The open set lattice of the one-element topological space Ω(1) is just (isomorphic to) the two-element locale 2 = { 0, 1 } with 0 < 1. After these observations it appears reasonable to define the set of points of a locale L to be the set of frame morphisms from L to 2. Yet, there is no guarantee that every point of the locale Ω(X) is in one-to-one correspondence to a point of the topological space X (consider again the indiscrete topology, for which the open set lattice has only one "point").
Before defining the required topology on pt(L), it is worthwhile to clarify the concept of a point of a locale further. The perspective motivated above suggests considering a point of a locale L as a frame morphism p from L to 2. But these morphisms are characterized equivalently by the inverse images of the two elements of 2. From the properties of frame morphisms, one can derive that p −1(0) is a lower set (since p is monotone), which contains a greatest element ap = ⋁ p −1(0) (since p preserves arbitrary suprema). In addition, the principal ideal p −1(0) is a prime ideal since p preserves finite infima, and thus its generator ap is a meet-prime element. The set-theoretic complement of p −1(0), given by p −1(1), is a completely prime filter because p −1(0) is a principal prime ideal. It turns out that all of these descriptions uniquely determine the initial frame morphism. We sum up:
A point of a locale L is equivalently described as:
a frame morphism from L to 2
a principal prime ideal of L
a meet-prime element of L
a completely prime filter of L.
All of these descriptions have their place within the theory and it is convenient to switch between them as needed.
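Continuing the finite example from above, the points of a locale can be enumerated directly as the frame homomorphisms into 2 = {0, 1}. The self-contained sketch below checks preservation of finite meets and of (here necessarily finite) arbitrary joins; for the chosen four-element frame it finds three points, matching the three points of the underlying sober space.

```python
# Sketch: points of a finite locale as frame homomorphisms into 2 = {0, 1}.
from itertools import chain, combinations, product

X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]    # the frame from above

def join(family):
    return frozenset(chain.from_iterable(family))

def families(S):
    S = list(S)
    return [list(c) for n in range(len(S) + 1) for c in combinations(S, n)]

def is_frame_hom(p):
    if p[frozenset()] != 0 or p[X] != 1:
        return False
    if any(p[a & b] != min(p[a], p[b]) for a in opens for b in opens):
        return False
    return all(p[join(F)] == max((p[s] for s in F), default=0) for F in families(opens))

candidates = [dict(zip(opens, bits)) for bits in product((0, 1), repeat=len(opens))]
points = [p for p in candidates if is_frame_hom(p)]
print(f"{len(points)} locale points")   # 3
```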
The functor pt
Now that a set of points is available for any locale, it remains to equip this set with an appropriate topology in order to define the object part of the functor pt. This is done by defining the open sets of pt(L) as
φ(a) = { p ∈ pt(L) | p(a) = 1 },
for every element a of L. Here we viewed the points of L as morphisms, but one can of course state a similar definition for all of the other equivalent characterizations. It can be shown that setting Ω(pt(L)) = {φ(a) | a ∈ L} really does yield a topological space (pt(L), Ω(pt(L))). It is common to abbreviate this space as pt(L).
Finally pt can be defined on morphisms of Frm rather canonically by defining, for a frame morphism g from L to M, pt(g): pt(M) → pt(L) as pt(g)(p) = p o g. In words, we obtain a morphism from L to 2 (a point of L) by applying the morphism g to get from L to M before applying the morphism p that maps from M to 2. Again, this can be formalized using the other descriptions of points of a locale as well – for example just calculate (p o g) −1(0).
The adjunction of Top and Loc
As noted several times before, pt and Ω usually are not inverses. In general neither is X homeomorphic to pt(Ω(X)) nor is L order-isomorphic to Ω(pt(L)). However, when introducing the topology of pt(L) above, a mapping φ from L to Ω(pt(L)) was applied. This mapping is indeed a frame morphism. Conversely, we can define a continuous function ψ from X to pt(Ω(X)) by setting ψ(x) = Ω(px), where px is just the characteristic function for the point x from 1 to X as described above. Another convenient description is given by viewing points of a locale as meet-prime elements. In this case we have ψ(x) = X \ Cl{x}, where Cl{x} denotes the topological closure of the set {x} and \ is just set-difference.
At this point we already have more than enough data to obtain the desired result: the functors Ω and pt define an adjunction between the categories Top and Loc = Frmop, where pt is right adjoint to Ω and the natural transformations ψ and φop provide the required unit and counit, respectively.
The duality theorem
The above adjunction is not an equivalence of the categories Top and Loc (or, equivalently, a duality of Top and Frm). For this it is necessary that both ψ and φ are isomorphisms in their respective categories.
For a space X, ψ: X → pt(Ω(X)) is a homeomorphism if and only if it is bijective. Using the characterization via meet-prime elements of the open set lattice, one sees that this is the case if and only if every meet-prime open set is of the form X \ Cl{x} for a unique x. Alternatively, every join-prime closed set is the closure of a unique point, where "join-prime" can be replaced by (join-) irreducible since we are in a distributive lattice. Spaces with this property are called sober.
Conversely, for a locale L, φ: L → Ω(pt(L)) is always surjective. It is additionally injective if and only if any two elements a and b of L for which a is not less-or-equal to b can be separated by points of the locale, formally:
if not a ≤ b, then there is a point p in pt(L) such that p(a) = 1 and p(b) = 0.
If this condition is satisfied for all elements of the locale, then the locale is spatial, or said to have enough points. (See also well-pointed category for a similar condition in more general categories.)
Finally, one can verify that for every space X, Ω(X) is spatial and for every locale L, pt(L) is sober. Hence, it follows that the above adjunction of Top and Loc restricts to an equivalence of the full subcategories Sob of sober spaces and SLoc of spatial locales. This main result is completed by the observation that the functor pt ∘ Ω, which sends each space to the points of its open set lattice, is left adjoint to the inclusion functor from Sob to Top. For a space X, pt(Ω(X)) is called its soberification. The case of the functor Ω ∘ pt is symmetric but a special name for this operation is not commonly used.
References
Stanley N. Burris and H. P. Sankappanavar, 1981. A Course in Universal Algebra. Springer-Verlag. . (available free online at the website mentioned)
P. T. Johnstone, Stone Spaces, Cambridge Studies in Advanced Mathematics 3, Cambridge University Press, Cambridge, 1982. .
Abstract Stone Duality
Topology
Order theory
Duality theories | Stone duality | [
"Physics",
"Mathematics"
] | 2,988 | [
"Mathematical structures",
"Topology",
"Space",
"Duality theories",
"Geometry",
"Category theory",
"Spacetime",
"Order theory"
] |
379,736 | https://en.wikipedia.org/wiki/TRIAC | A TRIAC (triode for alternating current; also bidirectional triode thyristor or bilateral triode thyristor) is a three-terminal electronic component that conducts current in either direction when triggered. The term TRIAC is a genericised trademark.
TRIACs are a subset of thyristors (analogous to a relay in that a small voltage and current can control a much larger voltage and current) and are related to silicon controlled rectifiers (SCRs). TRIACs differ from SCRs in that they allow current flow in both directions, whereas an SCR can only conduct current in a single direction. Most TRIACs can be triggered by applying either a positive or negative voltage to the gate (an SCR requires a positive voltage). Once triggered, SCRs and TRIACs continue to conduct, even if the gate current ceases, until the main current drops below a certain level called the holding current.
Gate turn-off thyristors (GTOs) are similar to TRIACs but provide more control by turning off when the gate signal ceases.
The bidirectionality of TRIACs makes them convenient switches for alternating current (AC). In addition, applying a trigger at a controlled phase angle of the AC in the main circuit allows control of the average current flowing into a load (phase control). This is commonly used for controlling the speed of a universal motor, dimming lamps, and controlling electric heaters. TRIACs are bipolar devices.
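For a purely resistive load, the effect of phase control can be quantified: firing the TRIAC at a phase angle α in each half-cycle delivers the fraction 1 − α/π + sin(2α)/(2π) of full power. The sketch below tabulates this relation; the formula follows from averaging sin² over the conducting part of the half-cycle and is included here as an illustration rather than as part of the original article.

```python
# Sketch: fraction of full power delivered to a resistive load versus firing angle.
import math

def power_fraction(alpha_rad):
    """Average-power fraction for conduction from alpha to pi in each half-cycle."""
    return 1 - alpha_rad / math.pi + math.sin(2 * alpha_rad) / (2 * math.pi)

for deg in (0, 45, 90, 135, 180):
    print(f"firing angle {deg:3d} deg -> {power_fraction(math.radians(deg)):.3f} of full power")
```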
Operation
To understand how TRIACs work, consider the triggering in each of the four possible combinations of gate and MT2 voltages with respect to MT1. The four separate cases (quadrants) are illustrated in Figure 1. Main Terminal 1 (MT1) and Main Terminal 2 (MT2) are also referred to as Anode 1 (A1) and Anode 2 (A2) respectively.
The relative sensitivity depends on the physical structure of a particular triac, but as a rule, quadrant 1 is the most sensitive (least gate current required), and quadrant 4 is the least sensitive (most gate current required).
In quadrants 1 and 2, MT2 is positive, and current flows from MT2 to MT1 through P, N, P and N layers. The N region attached to MT2 does not participate significantly. In quadrants 3 and 4, MT2 is negative, and current flows from MT1 to MT2, also through P, N, P and N layers. The N region attached to MT2 is active, but the N region attached to MT1 only participates in the initial triggering, not the bulk current flow.
In most applications, the gate current comes from MT2, so quadrants 1 and 3 are the only operating modes (both gate and MT2 positive or negative against MT1). Other applications with single polarity triggering from an IC or digital drive circuit operate in quadrants 2 and 3, where MT1 is usually connected to positive voltage (e.g. +5V) and gate is pulled down to 0V (ground).
Quadrant 1
Quadrant 1 operation occurs when the gate and MT2 are positive with respect to MT1 (Figure 1).
The mechanism is illustrated in Figure 3. The gate current makes an equivalent NPN transistor switch on, which in turn draws current from the base of an equivalent PNP transistor, turning it on also. Part of the gate current (dotted line) is lost through the ohmic path across the p-silicon, flowing directly into MT1 without passing through the NPN transistor base. In this case, the injection of holes in the p-silicon makes the stacked n, p and n layers beneath MT1 behave like a NPN transistor, which turns on due to the presence of a current in its base. This, in turn, causes the p, n and p layers over MT2 to behave like a PNP transistor, which turns on because its n-type base becomes forward-biased with respect to its emitter (MT2). Thus, the triggering scheme is the same as an SCR. The equivalent circuit is depicted in Figure 4.
However, the structure is different from SCRs. In particular, TRIAC always has a small current flowing directly from the gate to MT1 through the p-silicon without passing through the p-n junction between the base and the emitter of the equivalent NPN transistor. This current is indicated in Figure 3 by a dotted red line and is the reason why a TRIAC needs more gate current to turn on than a comparably rated SCR.
Generally, this quadrant is the most sensitive of the four. This is because it is the only quadrant where gate current is injected directly into the base of one of the main device transistors.
Quadrant 2
Quadrant 2 operation occurs when the gate is negative and MT2 is positive with respect to MT1 (Figure 1).
Figure 5 shows the triggering process. The turn-on of the device is three-fold and starts when the current from MT1 flows into the gate through the p-n junction under the gate. This switches on a structure composed by an NPN transistor and a PNP transistor, which has the gate as cathode (the turn-on of this structure is indicated by "1" in the figure). As current into the gate increases, the potential of the left side of the p-silicon under the gate rises towards MT1, since the difference in potential between the gate and MT2 tends to lower: this establishes a current between the left side and the right side of the p-silicon (indicated by "2" in the figure), which in turn switches on the NPN transistor under the MT1 terminal and as a consequence also the pnp transistor between MT2 and the right side of the upper p-silicon. So, in the end, the structure which is crossed by the major portion of the current is the same as quadrant-I operation ("3" in Figure 5).
Quadrant 3
Quadrant 3 operation occurs when the gate and MT2 are negative with respect to MT1 (Figure 1).
The whole process is outlined in Figure 6. The process happens in different steps here too. In the first phase, the pn junction between the MT1 terminal and the gate becomes forward-biased (step 1). As forward-biasing implies the injection of minority carriers in the two layers joining the junction, electrons are injected in the p-layer under the gate. Some of these electrons do not recombine and escape to the underlying n-region (step 2). This in turn lowers the potential of the n-region, acting as the base of a pnp transistor which switches on (turning the transistor on without directly lowering the base potential is called remote gate control). The lower p-layer works as the collector of this PNP transistor and has its voltage heightened: this p-layer also acts as the base of an NPN transistor made up by the last three layers just over the MT2 terminal, which, in turn, gets activated. Therefore, the red arrow labeled with a "3" in Figure 6 shows the final conduction path of the current.
Quadrant 4
Quadrant 4 operation occurs when the gate is positive and MT2 is negative with respect to MT1 (Figure 1).
Triggering in this quadrant is similar to triggering in quadrant III. The process uses a remote gate control and is illustrated in Figure 7. As current flows from the p-layer under the gate into the n-layer under MT1, minority carriers in the form of free electrons are injected into the p-region and some of them are collected by the underlying n-p junction and pass into the adjoining n-region without recombining. As in the case of triggering in quadrant III, this lowers the potential of the n-layer and turns on the PNP transistor formed by the n-layer and the two p-layers next to it. The lower p-layer works as the collector of this PNP transistor and has its voltage heightened: this p-layer also acts as the base of an NPN transistor made up by the last three layers just over the MT2 terminal, which, in turn, gets activated. Therefore, the red arrow labeled with a "3" in Figure 7 shows the final conduction path of the current.
Generally, this quadrant is the least sensitive of the four. In addition, some models of TRIACs (three-quadrant high commutation triacs named by different suppliers as "logic level", "snubberless" or "Hi-Com" types) cannot be triggered in this quadrant but only in the other three.
Issues
There are some limitations one should know when using a TRIAC in a circuit. In this section, a few are summarized.
Gate threshold current, latching current, and holding current
A TRIAC starts conducting when a current flowing into or out of its gate is sufficient to turn on the relevant junctions in the quadrant of operation. The minimum current able to do this is called gate threshold current and is generally indicated by IGT. In a typical TRIAC, the gate threshold current is generally a few milliamperes, but one has to take into account also that:
IGT depends on the temperature: The higher the temperature, the higher the reverse currents in the blocked junctions. This implies the presence of more free carriers in the gate region, which lowers the gate current needed.
IGT depends on the quadrant of operation, because a different quadrant implies a different way of triggering (see above). As a rule, the first quadrant is the most sensitive (i.e. requires the least current to turn on), whereas the fourth quadrant is the least sensitive.
When turning on from the off state, IGT depends on the voltage across the two main terminals MT1 and MT2. A higher voltage between MT1 and MT2 causes greater reverse currents in the blocked junctions, thus requiring less gate current to trigger the device (similar to high-temperature operation). In datasheets IGT is generally given for a specified voltage between MT1 and MT2.
When the gate current is discontinued, if the current between the two main terminals is more than what is called the latching current, the device continues to conduct. Latching current is the minimum current that keeps the device internal structure latched in the absence of gate current. The value of this parameter varies with:
gate current pulse (amplitude, shape and width)
temperature
quadrant of operation
In particular, if the pulse width of the gate current is sufficiently large (generally some tens of microseconds), the TRIAC has completed the triggering process when the gate signal is discontinued and the latching current reaches a minimum level called holding current. Holding current is the minimum required current flowing between the two main terminals that keeps the device on after it has achieved commutation in every part of its internal structure.
In datasheets, the latching current is indicated as IL, while the holding current is indicated as IH. They are typically in the order of some milliamperes.
Static dv/dt
A high rate of voltage rise (dv/dt) between MT2 and MT1 may turn on the TRIAC when it is off. Typical values of the critical static dv/dt are specified in volts per microsecond.
The turn-on is due to a parasitic capacitive coupling of the gate terminal with the MT2 terminal, which lets currents into the gate in response to a large rate of voltage change at MT2. One way to cope with this limitation is to design a suitable RC or RCL snubber network. In many cases this is sufficient to lower the impedance of the gate towards MT1. By putting a resistor or a small capacitor (or both in parallel) between these two terminals, the capacitive current generated during the transient flows out of the device without activating it. A careful reading of the application notes provided by the manufacturer and testing of the particular device model to design the correct network is in order. Typical values for capacitors and resistors between the gate and MT1 may be up to 100 nF and 10 Ω to 1 kΩ. Normal TRIACs, except for low-power types marketed as sensitive gate, already have such a resistor built in to safeguard against spurious dv/dt triggering. This will mask the gate's supposed diode-type behaviour when testing a TRIAC with a multimeter.
In datasheets, the static dv/dt is usually indicated as and, as mentioned before, is in relation to the tendency of a TRIAC to turn on from the off state after a large voltage rate of rise even without applying any current in the gate.
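The origin of static dv/dt triggering can be illustrated with a one-line estimate: the parasitic gate–MT2 coupling injects a gate current of roughly i = C·dv/dt. The sketch below compares that current with a typical gate threshold current; both the capacitance and the threshold are assumed, illustrative values rather than data for any particular device.

```python
# Sketch: capacitively injected gate current i = C * dv/dt vs. a gate threshold current.
C_PARASITIC = 20e-12    # F, assumed gate-MT2 coupling capacitance (illustrative)
I_GT = 5e-3             # A, assumed gate threshold current (illustrative)

for dv_dt in (10e6, 100e6, 1000e6):          # V/s  (10, 100, 1000 V/us)
    i_gate = C_PARASITIC * dv_dt
    verdict = "may trigger" if i_gate >= I_GT else "safe"
    print(f"dv/dt = {dv_dt / 1e6:6.0f} V/us -> i_gate = {i_gate * 1e3:5.2f} mA ({verdict})")
```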
Critical di/dt
A high rate of rise of the current between MT1 and MT2 (in either direction) when the device is turning on can damage or destroy the TRIAC even if the pulse duration is very short. The reason is that during the commutation, the power dissipation is not uniformly distributed across the device. When switching on, the device starts to conduct current before the conduction finishes to spread across the entire junction. The device typically starts to conduct the current imposed by the external circuitry after some nanoseconds or microseconds but the complete switch on of the whole junction takes a much longer time, so too swift a current rise may cause local hot spots that can permanently damage the TRIAC.
In datasheets, this parameter is usually indicated as and is typically on the order of tens of amperes per microsecond.
Commutating dv/dt and di/dt
The commutating dv/dt rating applies when a TRIAC has been conducting and attempts to turn off with a partially reactive load, such as an inductor. The current and voltage are out of phase, so when the current decreases below the holding value, the TRIAC attempts to turn off, but because of the phase shift between current and voltage, a sudden voltage step takes place between the two main terminals, which turns the device on again.
In datasheets this parameter is generally on the order of up to a few volts per microsecond.
The reason why commutating dv/dt is less than static dv/dt is that, shortly before the device tries to turn off, there is still some excess minority charge in its internal layers as a result of the previous conduction. When the TRIAC starts to turn off, these charges alter the internal potential of the region near the gate and MT1, so it is easier for the capacitive current due to dv/dt to turn on the device again.
Another important factor during a commutation from on-state to off-state is the di/dt of the current from MT1 to MT2. This is similar to the recovery in standard diodes: the higher the di/dt, the greater the reverse current. Because in the TRIAC there are parasitic resistances, a high reverse current in the p-n junctions inside it can provoke a voltage drop between the gate region and the MT1 region which may make the TRIAC stay turned on.
In datasheets the commutating di/dt is generally on the order of a few amperes per microsecond.
The commutating dv/dt is very important when the TRIAC is used to drive a load with a phase shift between current and voltage, such as an inductive load. Suppose one wants to turn the inductor off: when the current goes to zero, if the gate is not fed, the TRIAC attempts to turn off, but this causes a step in the voltage across it due to the aforementioned phase shift. If the commutating dv/dt rating is exceeded, the device will not turn off.
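To make the role of the phase shift concrete, the following sketch (Python; the mains voltage and phase angles are illustrative assumptions) computes the voltage step that reappears across the device at the moment a lagging load current reaches zero.

```python
# Voltage step seen at commutation with an inductive load.  With a lagging
# power factor, the load current reaches zero while the mains voltage is still
# well away from zero, so when the TRIAC stops conducting the instantaneous
# mains voltage reappears across it almost at once.  Values are illustrative.
import math

V_rms  = 230.0                 # assumed mains voltage, volts
V_peak = V_rms * math.sqrt(2)

for phi_deg in (0, 30, 60, 90):            # current lags voltage by phi
    v_step = V_peak * math.sin(math.radians(phi_deg))
    print(f"phase lag {phi_deg:2d} deg -> voltage step at current zero ≈ {v_step:6.1f} V")
# The snubber's job is to slow the rate at which this step is reapplied so that
# the commutating dv/dt rating is not exceeded.
```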
Snubber circuits
When used to control reactive (inductive or capacitive) loads, care must be taken to ensure that the TRIAC turns off correctly at the end of each half-cycle of the AC in the main circuit. TRIACs can be sensitive to fast voltage changes (dv/dt) between MT1 and MT2, so a phase shift between current and voltage caused by reactive loads can lead to a voltage step that can turn the thyristor on erroneously. An electric motor is typically an inductive load and off-line power supplies—as used in most TVs and computers—are capacitive.
Unwanted turn-ons can be avoided by using a snubber circuit (usually of the resistor/capacitor or resistor/capacitor/inductor type) between MT1 and MT2. Snubber circuits are also used to prevent premature triggering, caused for example by voltage spikes in the mains supply.
Because turn-ons are caused by internal capacitive currents flowing into the gate as a consequence of a high dv/dt (i.e., a rapid voltage change), a gate resistor or capacitor (or both in parallel) may be connected between the gate and MT1 to provide a low-impedance path to MT1 and further prevent false triggering. This, however, increases the required trigger current or adds latency due to capacitor charging. On the other hand, a resistor between the gate and MT1 helps draw leakage currents out of the device, thus improving the performance of the TRIAC at high temperature, where the maximum allowed dv/dt is lower. Values of resistors less than 1 kΩ and capacitors of 100 nF are generally suitable for this purpose, although the fine-tuning should be done on the particular device model.
For higher-powered, more-demanding loads, two SCRs in inverse parallel may be used instead of one TRIAC. Because each SCR will have an entire half-cycle of reverse polarity voltage applied to it, turn-off of the SCRs is assured, no matter what the character of the load. However, due to the separate gates, proper triggering of the SCRs is more complex than triggering a TRIAC.
TRIACs may also fail to turn on reliably with reactive loads if the current phase shift causes the main circuit current to be below the holding current at trigger time. To overcome the problem DC or a pulse train may be used to repeatedly trigger the TRIAC until it turns on.
Application
Low-power TRIACs are used in many applications such as light dimmers, speed controls for electric fans and other electric motors, and in the modern computerized control circuits of many household small and major appliances.
When mains voltage TRIACs are triggered by microcontrollers, optoisolators are frequently used; for example optotriacs can be used to control the gate current. Alternatively, where safety allows and electrical isolation of the controller isn't necessary, one of the microcontroller's power rails may be connected to one side of the mains supply. In these situations it is normal to connect the neutral terminal to the positive rail of the microcontroller's power supply, together with A1 of the triac, with A2 connected to the live. The TRIAC's gate can be connected through an opto-isolated transistor, and sometimes a resistor, to the microcontroller, so that bringing the voltage down to the microcontroller's logic zero pulls enough current through the TRIAC's gate to trigger it. This ensures that the TRIAC is triggered in quadrants II and III and avoids quadrant IV, where TRIACs are typically insensitive.
High commutation (two- and three-quadrant) TRIACs
Three-quadrant TRIACs only operate in quadrants 1 through 3 and cannot be triggered in quadrant 4. These devices are made specifically for improved commutation and can often control reactive loads without the use of a snubber circuit.
The first TRIACs of this type were marketed by Thomson Semiconductors (now ST Microelectronics) under the name "Alternistor". Later versions are sold under the trademark "Snubberless" and "ACS" (AC Switch, though this type also incorporates a gate buffer, which further precludes Quadrant I operation). Littelfuse also uses the name "Alternistor". Philips Semiconductors (now NXP Semiconductors) originated the trademark "Hi-Com" (High Commutation).
Often these TRIACs can operate with a gate current small enough to be driven directly by logic-level components.
See also
DIAC (diode for alternating current)
Quadrac
Silicon controlled rectifier (SCR)
Triode
References
Further reading
Thyristor Theory and Design Considerations; ON Semiconductor; 240 pages; 2006; HBD855/D. (Free PDF download)
External links
Solid state switches
Power electronics | TRIAC | [
"Engineering"
] | 4,333 | [
"Electronic engineering",
"Power electronics"
] |
380,117 | https://en.wikipedia.org/wiki/Belfry | The belfry is a structure enclosing bells for ringing as part of a building, usually as part of a bell tower or steeple. It can also refer to the entire tower or building, particularly in continental Europe for such a tower attached to a city hall or other civic building.
A belfry encloses the bell chamber, the room in which the bells are housed; its walls are pierced by openings which allow the sound to escape. The openings may be left uncovered but are commonly filled with louvers to prevent rain and snow from entering and damaging the bells. There may be a separate room below the bell chamber to house the ringers.
Etymology
The word belfry comes from an Old North French word meaning 'movable wooden siege tower'. The Old French word was itself derived from a Middle High German word meaning 'protecting shelter' (cf. the cognate bergfried), which combined Proto-Germanic roots meaning 'to protect' or 'mountain, high place' and 'peace; personal security' to create a compound meaning literally 'high place of security' or 'that which watches over peace'. The etymology was forgotten with time, which led to a variety of folk etymologies and spellings, with the initial meaning being lost in the process, and sometime between the late 13th and the mid-15th century the new sense of "bell tower" was adopted in Anglo-Latin and Middle English. This new and current meaning came about as a result of the common association with bells. Merriam-Webster explains the transformation by the fact that the initial word was later used for different types of towers and protective buildings, many of which contained bells. People associated the older word with bells, and by dissimilation or by association it was successively spelled bellfrey, belfrey, and finally belfry. In larger towns, explains Kingsley Amis, watchmen placed in towers were also on the lookout for fires. Though flags were used by the watchmen for communication, these towers usually contained an alarm bell or bells built into a bell-cot, and so Middle English speakers thought the word had something to do with bells: they altered it to belfry, an interesting example of the process of folk etymology.
Several variant forms of the word are recorded in Medieval Latin. Today's Dutch belfort combines the term for bell with the term for stronghold. It was a watchtower that a city was permitted to build in its defence, while the Dutch term klokkenstoel ('bell-chair') refers only to the construction of the hanging system, or the way the bell or bells are installed within the tower. The Old French forms have become beffroi in modern French.
Gallery
See also
Bats in the belfry (disambiguation)
Belfries of Belgium and France, a UNESCO World Heritage Site in historic Flanders which is a collection of historical belfries.
Shōrō
References
Bells (percussion)
Architectural elements | Belfry | [
"Technology",
"Engineering"
] | 599 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
380,405 | https://en.wikipedia.org/wiki/Perspective%20%28graphical%29 | Linear or point-projection perspective () is one of two types of graphical projection perspective in the graphic arts; the other is parallel projection. Linear perspective is an approximate representation, generally on a flat surface, of an image as it is seen by the eye. Perspective drawing is useful for representing a three-dimensional scene in a two-dimensional medium, like paper. It is based on the optical fact that for a person an object looks N times (linearly) smaller if it has been moved N times further from the eye than the original distance was.
The most characteristic features of linear perspective are that objects appear smaller as their distance from the observer increases, and that they are subject to foreshortening, meaning that an object's dimensions parallel to the line of sight appear shorter than its dimensions perpendicular to the line of sight. All objects will recede to points in the distance, usually along the horizon line, but also above and below the horizon line depending on the view used.
Italian Renaissance painters and architects including Filippo Brunelleschi, Leon Battista Alberti, Masaccio, Paolo Uccello, Piero della Francesca and Luca Pacioli studied linear perspective, wrote treatises on it, and incorporated it into their artworks.
Overview
Linear or point-projection perspective works by placing an imaginary flat plane close to the object under observation, directly facing the observer's eyes (i.e., the observer is on a normal, or perpendicular, line to the plane). Straight lines are then drawn from every point of the object to the observer's eye; the area on the plane where those lines pass through it forms a point-projection perspective image resembling what the observer sees.
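A minimal numerical sketch of this construction (Python, with hypothetical points, the eye at the origin, and the picture plane at an assumed distance d) shows the similar-triangles rule that makes an object N times farther away appear N times smaller.

```python
# One-point perspective projection: the eye sits at the origin looking down the
# +z axis, and the picture plane is at an assumed distance d in front of it.
# By similar triangles each 3-D point (x, y, z) maps to
#     x' = x * d / z,   y' = y * d / z
# so an object moved N times further away appears N times smaller.

def project(point, d=1.0):
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the eye (z > 0)")
    return (x * d / z, y * d / z)

# Two posts of the same height (1 unit), one twice as far away as the other:
print(project((0.0, 1.0, 2.0)))   # (0.0, 0.5)  -> apparent height 0.5
print(project((0.0, 1.0, 4.0)))   # (0.0, 0.25) -> twice as far, half as tall
```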
Examples of one-point perspective
Examples of two-point perspective
Examples of three-point perspective
Examples of curvilinear perspective
Additionally, a central vanishing point can be used (just as with one-point perspective) to indicate frontal (foreshortened) depth.
History
Early history
The earliest art paintings and drawings typically sized many objects and characters hierarchically according to their spiritual or thematic importance, not their distance from the viewer, and did not use foreshortening. The most important figures are often shown as the highest in a composition, also from hieratic motives, leading to the so-called "vertical perspective", common in the art of Ancient Egypt, where a group of "nearer" figures are shown below the larger figure or figures; simple overlapping was also employed to relate distance. Additionally, oblique foreshortening of round elements like shields and wheels is evident in Ancient Greek red-figure pottery.
Systematic attempts to evolve a system of perspective are usually considered to have begun around the fifth century BC in the art of ancient Greece, as part of a developing interest in illusionism allied to theatrical scenery. This was detailed within Aristotle's Poetics as skenographia: using flat panels on a stage to give the illusion of depth. The philosophers Anaxagoras and Democritus worked out geometric theories of perspective for use with skenographia. Alcibiades had paintings in his house designed using skenographia, so this art was not confined merely to the stage. Euclid in his Optics argues correctly that the perceived size of an object is not related to its distance from the eye by a simple proportion. In the first-century BC frescoes of the Villa of P. Fannius Synistor, multiple vanishing points are used in a systematic but not fully consistent manner.
Chinese artists made use of oblique projection from the first or second century until the 18th century. It is not certain how they came to use the technique; Dubery and Willats (1983) speculate that the Chinese acquired the technique from India, which acquired it from Ancient Rome, while others credit it as an indigenous invention of Ancient China. Oblique projection is also seen in Japanese art, such as in the Ukiyo-e paintings of Torii Kiyonaga (1752–1815).
By the later periods of antiquity, artists, especially those in less popular traditions, were well aware that distant objects could be shown smaller than those close at hand for increased realism, but whether this convention was actually used in a work depended on many factors. Some of the paintings found in the ruins of Pompeii show a remarkable realism and perspective for their time. It has been claimed that comprehensive systems of perspective were evolved in antiquity, but most scholars do not accept this. Hardly any of the many works where such a system would have been used have survived. A passage in Philostratus suggests that classical artists and theorists thought in terms of "circles" at equal distance from the viewer, like a classical semi-circular theatre seen from the stage. The roof beams in rooms in the Vatican Virgil, from about 400 AD, are shown converging, more or less, on a common vanishing point, but this is not systematically related to the rest of the composition.
Medieval artists in Europe, like those in the Islamic world and China, were aware of the general principle of varying the relative size of elements according to distance, but even more than classical art were perfectly ready to override it for other reasons. Buildings were often shown obliquely according to a particular convention. The use and sophistication of attempts to convey distance increased steadily during the period, but without a basis in a systematic theory. Byzantine art was also aware of these principles, but also used the reverse perspective convention for the setting of principal figures. Ambrogio Lorenzetti painted a floor with convergent lines in his Presentation at the Temple (1342), though the rest of the painting lacks perspective elements.
Renaissance
It is generally accepted that Filippo Brunelleschi conducted a series of experiments between 1415 and 1420, which included making drawings of various Florentine buildings in correct perspective. According to Vasari and Antonio Manetti, in about 1420, Brunelleschi demonstrated his discovery of perspective by having people look through a hole in the back of his painting. Through it, they would see a building such as the Florence Baptistery, for which the painting was made. When Brunelleschi lifted a mirror between the building and the painting, the mirror reflected the painting to an observer looking through the hole, so that the observer could compare how similar the building and the painting of it were. (The vanishing point is centered from the perspective of an experiment participant.) Brunelleschi applied this new system of perspective to his paintings around 1425.
This scenario is indicative, but faces several problems, that are still debated. First of all, nothing can be said for certain about the correctness of his perspective construction of the Baptistery of San Giovanni, because Brunelleschi's panel is lost. Second, no other perspective painting or drawing by Brunelleschi is known. (In fact, Brunelleschi was not known to have painted at all.) Third, in the account written by Antonio Manetti in his Vita di Ser Brunellesco at the end of the 15th century on Brunelleschi's panel, there is not a single occurrence of the word "experiment". Fourth, the conditions listed by Manetti are contradictory with each other. For example, the description of the eyepiece sets a visual field of 15°, much narrower than the visual field resulting from the urban landscape described.
Soon after Brunelleschi's demonstrations, nearly every interested artist in Florence and in Italy used geometrical perspective in their paintings and sculpture, notably Donatello, Masaccio, Lorenzo Ghiberti, Masolino da Panicale, Paolo Uccello, and Filippo Lippi. Not only was perspective a way of showing depth, it was also a new method of creating a composition. Visual art could now depict a single, unified scene, rather than a combination of several. Early examples include Masolino's St. Peter Healing a Cripple and the Raising of Tabitha, Donatello's The Feast of Herod, as well as Ghiberti's Jacob and Esau and other panels from the east doors of the Florence Baptistery. Masaccio (d. 1428) achieved an illusionistic effect by placing the vanishing point at the viewer's eye level in his Holy Trinity, and in The Tribute Money, it is placed behind the face of Jesus. In the late 15th century, Melozzo da Forlì first applied the technique of foreshortening (in Rome, Loreto, Forlì and others).
This overall story is based on qualitative judgments, and would need to be faced against the material evaluations that have been conducted on Renaissance perspective paintings.
Apart from the paintings of Piero della Francesca, which are a model of the genre, the majority of 15th century works show serious errors in their geometric construction. This is true of Masaccio's Trinity fresco and of many works, including those by renowned artists like Leonardo da Vinci.
As shown by the quick proliferation of accurate perspective paintings in Florence, Brunelleschi likely understood (with help from his friend the mathematician Toscanelli), but did not publish, the mathematics behind perspective. Decades later, his friend Leon Battista Alberti wrote De pictura, a treatise on proper methods of showing distance in painting. Alberti's primary breakthrough was not to show the mathematics in terms of conical projections, as it actually appears to the eye. Instead, he formulated the theory based on planar projections, or how the rays of light, passing from the viewer's eye to the landscape, would strike the picture plane (the painting). He was then able to calculate the apparent height of a distant object using two similar triangles. The mathematics behind similar triangles is relatively simple, having been long ago formulated by Euclid. Alberti was also trained in the science of optics through the school of Padua and under the influence of Biagio Pelacani da Parma, who studied Alhazen's Book of Optics. This book, translated around 1200 into Latin, had laid the mathematical foundation for perspective in Europe.
Piero della Francesca elaborated on De pictura in his De Prospectiva pingendi in the 1470s, making many references to Euclid. Alberti had limited himself to figures on the ground plane and giving an overall basis for perspective. Della Francesca fleshed it out, explicitly covering solids in any area of the picture plane. Della Francesca also started the now common practice of using illustrated figures to explain the mathematical concepts, making his treatise easier to understand than Alberti's. Della Francesca was also the first to accurately draw the Platonic solids as they would appear in perspective. Luca Pacioli's 1509 Divina proportione (Divine Proportion), illustrated by Leonardo da Vinci, summarizes the use of perspective in painting, including much of Della Francesca's treatise. Leonardo applied one-point perspective as well as shallow focus to some of his works.
Two-point perspective was demonstrated as early as 1525 by Albrecht Dürer, who studied perspective by reading Piero and Pacioli's works, in his Unterweisung der Messung ("Instruction of the Measurement").
Limitations
Perspective images are created with reference to a particular center of vision for the picture plane. In order for the resulting image to appear identical to the original scene, a viewer must view the image from the exact vantage point used in the calculations relative to the image; viewing from that point cancels out what would otherwise appear to be distortions in the image when viewed from a different point. For example, a sphere drawn in perspective will be stretched into an ellipse. These apparent distortions are more pronounced away from the center of the image as the angle between a projected ray (from the scene to the eye) becomes more acute relative to the picture plane. Artists may choose to "correct" perspective distortions, for example by drawing all spheres as perfect circles, or by drawing figures as if centered on the direction of view. In practice, unless the viewer observes the image from an extreme angle, like standing far to the side of a painting, the perspective normally looks more or less correct. This is referred to as "Zeeman's Paradox".
See also
Anamorphosis
Camera angle
Cutaway drawing
Perspective control
Trompe-l'œil
Uki-e
Zograscope
Notes
References
Sources
Further reading
External links
Teaching Perspective in Art and Mathematics through Leonardo da Vinci's Work at Mathematical Association of America
Metaphysical Perspective in Ancient Roman-Wall Painting
How to Draw a Two Point Perspective Grid at Creating Comics
Perspective projection
Technical drawing
Functions and mappings
Composition in visual art
Italian inventions | Perspective (graphical) | [
"Mathematics",
"Engineering"
] | 2,580 | [
"Mathematical analysis",
"Functions and mappings",
"Design engineering",
"Mathematical objects",
"Civil engineering",
"Mathematical relations",
"Technical drawing"
] |
381,750 | https://en.wikipedia.org/wiki/Hilbert%27s%20sixteenth%20problem | Hilbert's 16th problem was posed by David Hilbert at the Paris conference of the International Congress of Mathematicians in 1900, as part of his list of 23 problems in mathematics.
The original problem was posed as the Problem of the topology of algebraic curves and surfaces (Problem der Topologie algebraischer Kurven und Flächen).
Actually the problem consists of two similar problems in different branches of mathematics:
An investigation of the relative positions of the branches of real algebraic curves of degree n (and similarly for algebraic surfaces).
The determination of the upper bound for the number of limit cycles in two-dimensional polynomial vector fields of degree n and an investigation of their relative positions.
The first problem is yet unsolved for n = 8. Therefore, this problem is what usually is meant when talking about Hilbert's sixteenth problem in real algebraic geometry. The second problem also remains unsolved: no upper bound for the number of limit cycles is known for any n > 1, and this is what usually is meant by Hilbert's sixteenth problem in the field of dynamical systems.
The Spanish Royal Society for Mathematics published an explanation of Hilbert's sixteenth problem.
The first part of Hilbert's 16th problem
In 1876, Harnack investigated algebraic curves in the real projective plane and found that curves of degree n could have no more than
(n − 1)(n − 2)/2 + 1
separate connected components. Furthermore, he showed how to construct curves that attained that upper bound, and thus that it was the best possible bound. Curves with that number of components are called M-curves.
Hilbert had investigated the M-curves of degree 6, and found that the 11 components always were grouped in a certain way. His challenge to the mathematical community now was to completely investigate the possible configurations of the components of the M-curves.
Furthermore, he requested a generalization of Harnack's curve theorem to algebraic surfaces and a similar investigation of surfaces with the maximum number of components.
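As a quick sanity check of Harnack's bound quoted above, the short Python sketch below evaluates it for small degrees: degree 6 gives the 11 components of the M-curves Hilbert studied, and degree 8, the first open case of the configuration problem, gives 22.

```python
# Harnack's bound on the number of connected components of a real plane
# algebraic curve of degree n: (n - 1)(n - 2)/2 + 1.

def harnack_bound(n: int) -> int:
    # (n - 1)(n - 2) is a product of consecutive integers, hence always even.
    return (n - 1) * (n - 2) // 2 + 1

for n in range(1, 9):
    print(n, harnack_bound(n))
# n = 6 -> 11 (Hilbert's sextic M-curves); n = 8 -> 22 (first unsolved case).
```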
The second part of Hilbert's 16th problem
Here we are going to consider polynomial vector fields in the real plane, that is, a system of differential equations of the form:
dx/dt = P(x, y),   dy/dt = Q(x, y)
where both P and Q are real polynomials of degree n.
These polynomial vector fields were studied by Poincaré, who had the idea of abandoning the search for finding exact solutions to the system, and instead attempted to study the qualitative features of the collection of all possible solutions.
Among many important discoveries, he found that the limit sets of such solutions need not be a stationary point, but could rather be a periodic solution. Such solutions are called limit cycles.
The second part of Hilbert's 16th problem is to decide an upper bound for the number of limit cycles in polynomial vector fields of degree n and, similar to the first part, investigate their relative positions.
Results
It was shown in 1991/1992 by Yulii Ilyashenko and Jean Écalle that every polynomial vector field in the plane has only finitely many limit cycles (a 1923 article by Henri Dulac claiming a proof of this statement had been shown to contain a gap in 1981). This statement is not obvious, since it is easy to construct smooth (C∞) vector fields in the plane with infinitely many concentric limit cycles.
The question whether there exists a finite upper bound H(n) for the number of limit cycles of planar polynomial vector fields of degree n remains unsolved for any n > 1. (H(1) = 0 since linear vector fields do not have limit cycles.) Evgenii Landis and Ivan Petrovsky claimed a solution in the 1950s, but it was shown to be wrong in the early 1960s. Quadratic plane vector fields with four limit cycles are known, and examples of numerical visualization of four limit cycles in a quadratic plane vector field can be found in the literature. In general, the difficulties in estimating the number of limit cycles by numerical integration are due to nested limit cycles with very narrow regions of attraction, which are hidden attractors, and to semi-stable limit cycles.
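The following minimal sketch (Python, requiring SciPy) illustrates what a limit cycle looks like numerically. It integrates the van der Pol system, a cubic polynomial vector field chosen only because it has a single, well-known limit cycle; it is not one of the quadratic systems discussed above.

```python
# Numerical illustration of a limit cycle in a polynomial vector field.
from scipy.integrate import solve_ivp

MU = 1.0

def van_der_pol(t, state):
    x, y = state
    # P(x, y) = y,  Q(x, y) = mu*(1 - x^2)*y - x  (a degree-3 vector field)
    return [y, MU * (1.0 - x**2) * y - x]

# Two very different initial conditions converge to the same closed orbit.
for x0 in ([0.1, 0.0], [4.0, 4.0]):
    sol = solve_ivp(van_der_pol, (0.0, 100.0), x0, max_step=0.01)
    x_late = sol.y[0, sol.t > 90.0]          # discard the initial transient
    print(f"start {x0}: late-time x stays within "
          f"[{x_late.min():.2f}, {x_late.max():.2f}]")
# Both trajectories settle onto the same periodic solution (|x| up to about 2),
# which is the limit cycle.
```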
The original formulation of the problems
In his speech, Hilbert presented the problems as:
Hilbert continues:
See also
Hilbert–Arnold problem
Hilbert's problems
References
External links
16th Hilbert problem: computation of Lyapunov quantities and limit cycles in two-dimensional dynamical systems
16
Unsolved problems in geometry
Real algebraic geometry
Dynamical systems
Hidden oscillation | Hilbert's sixteenth problem | [
"Physics",
"Mathematics"
] | 879 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Hilbert's problems",
"Mechanics",
"Mathematical problems",
"Hidden oscillation",
"Dynamical systems"
] |
381,805 | https://en.wikipedia.org/wiki/Franck%E2%80%93Hertz%20experiment | The Franck–Hertz experiment was the first electrical measurement to clearly show the quantum nature of atoms. It was presented on April 24, 1914, to the German Physical Society in a paper by James Franck and Gustav Hertz. Franck and Hertz had designed a vacuum tube for studying energetic electrons that flew through a thin vapor of mercury atoms. They discovered that, when an electron collided with a mercury atom, it could lose only a specific quantity (4.9 electron volts) of its kinetic energy before flying away. This energy loss corresponds to decelerating the electron from a speed of about 1.3 million metres per second to zero. A faster electron does not decelerate completely after a collision, but loses precisely the same amount of its kinetic energy. Slower electrons merely bounce off mercury atoms without losing any significant speed or kinetic energy.
These experimental results proved to be consistent with the Bohr model for atoms that had been proposed the previous year by Niels Bohr. The Bohr model was a precursor of quantum mechanics and of the electron shell model of atoms. Its key feature was that an electron inside an atom occupies one of the atom's "quantum energy levels". Before the collision, an electron inside the mercury atom occupies its lowest available energy level. After the collision, the electron inside occupies a higher energy level with 4.9 electronvolts (eV) more energy. This means that the electron is more loosely bound to the mercury atom. There were no intermediate levels or possibilities in Bohr's quantum model. This feature was "revolutionary" because it was inconsistent with the expectation that an electron could be bound to an atom's nucleus by any amount of energy.
In a second paper presented in May 1914, Franck and Hertz reported on the light emission by the mercury atoms that had absorbed energy from collisions. They showed that the wavelength of this ultraviolet light corresponded exactly to the 4.9 eV of energy that the flying electron had lost. The relationship of energy and wavelength had also been predicted by Bohr because he had followed the structure laid out by Hendrik Lorentz at the 1911 Solvay Congress. At Solvay, Hendrik Lorentz suggested after Einstein's talk on quantum structure that the energy of a rotator be set equal to nhν. Therefore, Bohr had followed the instructions given in 1911 and copied the formula proposed by Lorentz and others into his 1913 atomic model. Lorentz had been correct. The quantization of the atoms matched his formula incorporated into the Bohr model. After a presentation of these results by Franck a few years later, Albert Einstein is said to have remarked, "It's so lovely it makes you cry."
On December 10, 1926, Franck and Hertz were awarded the 1925 Nobel Prize in Physics "for their discovery of the laws governing the impact of an electron upon an atom".
Experiment
Franck and Hertz's original experiment used a heated vacuum tube containing a drop of mercury; they reported a tube temperature of 115 °C, at which the vapor pressure of mercury is about 100 pascals (about a thousandth of the atmospheric pressure). A contemporary Franck–Hertz tube is shown in the photograph. It is fitted with three electrodes: an electron-emitting, hot cathode; a metal mesh grid; and an anode. The grid's voltage is positive relative to the cathode, so that electrons emitted from the hot cathode are drawn to it. The electric current measured in the experiment is due to electrons that pass through the grid and reach the anode. The anode's electric potential is slightly negative relative to the grid, so that electrons that reach the anode have at least a corresponding amount of kinetic energy after passing the grid.
The graphs published by Franck and Hertz (see figure) show the dependence of the electric current flowing out of the anode upon the electric potential between the grid and the cathode.
At low potential differences—up to 4.9 volts—the current through the tube increased steadily with increasing potential difference. This behavior is typical of true vacuum tubes that don't contain mercury vapor; larger voltages lead to larger "space-charge limited current".
At 4.9 volts the current drops sharply, almost back to zero.
The current then increases steadily once again as the voltage is increased further, until 9.8 volts is reached (exactly 4.9+4.9 volts).
At 9.8 volts a similar sharp drop is observed.
While it isn't evident in the original measurements of the figure, this series of dips in current at approximately 4.9 volt increments continues to potentials of at least 70 volts.
Franck and Hertz noted in their first paper that the 4.9 eV characteristic energy of their experiment corresponded well to one of the wavelengths of light emitted by mercury atoms in gas discharges. They were using a quantum relationship between the energy of excitation and the corresponding wavelength of light, which they broadly attributed to Johannes Stark and to Arnold Sommerfeld; it predicts that 4.9 eV corresponds to light with a 254 nm wavelength. The same relationship was also incorporated in Einstein's 1905 photon theory of the photoelectric effect. In a second paper, Franck and Hertz reported the optical emission from their tubes, which emitted light with a single prominent wavelength 254 nm. The figure at the right shows the spectrum of a Franck–Hertz tube; nearly all of the light emitted has a single wavelength. For reference, the figure also shows the spectrum for a mercury gas discharge light, which emits light at several wavelengths besides 254 nm. The figure is based on the original spectra published by Franck and Hertz in 1914. The fact that the Franck–Hertz tube emitted just the single wavelength, corresponding nearly exactly to the voltage period they had measured, was very important.
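The arithmetic behind these numbers can be checked directly; the short Python sketch below uses rounded physical constants and is only an illustration of the energy–wavelength relation E = hc/λ and of the kinetic-energy formula quoted earlier.

```python
# Quick check of the numbers quoted above (constants rounded).
import math

h  = 6.626e-34      # Planck constant, J*s
c  = 2.998e8        # speed of light, m/s
e  = 1.602e-19      # joules per electronvolt
me = 9.109e-31      # electron mass, kg

E = 4.9 * e                          # 4.9 eV in joules

wavelength = h * c / E               # E = h*c / lambda
speed      = math.sqrt(2 * E / me)   # non-relativistic E = m*v^2 / 2

print(f"wavelength: {wavelength*1e9:.0f} nm")         # ~253 nm, the observed UV line
print(f"electron speed for 4.9 eV: {speed:.2e} m/s")  # ~1.3e6 m/s, as quoted above
```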
Modeling of electron collisions with atoms
Franck and Hertz explained their experiment in terms of elastic and inelastic collisions between the electrons and the mercury atoms. Slowly moving electrons collide elastically with the mercury atoms. This means that the direction in which the electron is moving is altered by the collision, but its speed is unchanged. An elastic collision is illustrated in the figure, where the length of the arrow indicates the electron's speed. The mercury atom is unaffected by the collision, mostly because it is about four hundred thousand times more massive than an electron.
When the speed of the electron exceeds about 1.3 million metres per second, collisions with a mercury atom become inelastic. This speed corresponds to a kinetic energy of 4.9 eV, which is deposited into the mercury atom. As shown in the figure, the electron's speed is reduced, and the mercury atom becomes "excited". A short time later, the 4.9 eV of energy that was deposited into the mercury atom is released as ultraviolet light that has a wavelength of precisely 254 nm. Following light emission, the mercury atom returns to its original, unexcited state.
If electrons emitted from the cathode flew freely until they arrived at the grid, they would acquire a kinetic energy that's proportional to the voltage applied to the grid. 1 eV of kinetic energy corresponds to a potential difference of 1 volt between the grid and the cathode. Elastic collisions with the mercury atoms increase the time it takes for an electron to arrive at the grid, but the average kinetic energy of electrons arriving there isn't much affected.
When the grid voltage reaches 4.9 V, electron collisions near the grid become inelastic, and the electrons are greatly slowed. The kinetic energy of a typical electron arriving at the grid is reduced so much that it cannot travel further to reach the anode, whose voltage is set to slightly repel electrons. The current of electrons reaching the anode falls, as seen in the graph. Further increases in the grid voltage restore enough energy to the electrons that suffered inelastic collisions that they can again reach the anode. The current rises again as the grid potential rises beyond 4.9 V. At 9.8 V, the situation changes again. Electrons that have traveled roughly halfway from the cathode to the grid have already acquired enough energy to suffer a first inelastic collision. As they continue slowly towards the grid from the midway point, their kinetic energy builds up again, but as they reach the grid they can suffer a second inelastic collision. Once again, the current to the anode drops. At intervals of 4.9 volts this process will repeat; each time the electrons will undergo one additional inelastic collision.
Early quantum theory
While Franck and Hertz were unaware of it when they published their experiments in 1914, in 1913 Niels Bohr had published a model for atoms that was very successful in accounting for the optical properties of atomic hydrogen. These were usually observed in gas discharges, which emitted light at a series of wavelengths. Ordinary light sources like incandescent light bulbs emit light at all wavelengths. Bohr had calculated the wavelengths emitted by hydrogen very accurately.
The fundamental assumption of the Bohr model concerns the possible binding energies of an electron to the nucleus of an atom. The atom can be ionized if a collision with another particle supplies at least this binding energy. This frees the electron from the atom, and leaves a positively charged ion behind. There is an analogy with satellites orbiting the Earth. Every satellite has its own orbit, and practically any orbital distance, and any satellite binding energy, is possible. Since an electron is attracted to the positive charge of the atomic nucleus by a similar force, so-called "classical" calculations suggest that any binding energy should also be possible for electrons. However, Bohr assumed that only a specific series of binding energies occur, which correspond to the "quantum energy levels" for the electron. An electron is normally found in the lowest energy level, with the largest binding energy. Additional levels lie higher, with smaller binding energies. Intermediate binding energies lying between these levels are not permitted. This was a revolutionary assumption.
Franck and Hertz had proposed that the 4.9 V characteristic of their experiments was due to ionization of mercury atoms by collisions with the flying electrons emitted at the cathode. In 1915 Bohr published a paper noting that the measurements of Franck and Hertz were more consistent with the assumption of quantum levels in his own model for atoms. In the Bohr model, the collision excited an internal electron within the atom from its lowest level to the first quantum level above it. The Bohr model also predicted that light would be emitted as the internal electron returned from its excited quantum level to the lowest one; its wavelength corresponded to the energy difference of the atom's internal levels, which has been called the Bohr relation. Franck and Hertz's observation of emission from their tube at 254 nm was also consistent with Bohr's perspective. Writing following the end of World War I in 1918, Franck and Hertz had largely adopted the Bohr perspective for interpreting their experiment, which has become one of the experimental pillars of quantum mechanics. As Abraham Pais described it, "Now the beauty of Franck and Hertz's work lies not only in the measurement of the energy loss E2-E1 of the impinging electron, but they also observed that, when the energy of that electron exceeds 4.9 eV, mercury begins to emit ultraviolet light of a definite frequency ν as defined in the above formula. Thereby they gave (unwittingly at first) the first direct experimental proof of the Bohr relation!" Franck himself emphasized the importance of the ultraviolet emission experiment in an epilogue to the 1960 Physical Science Study Committee (PSSC) film about the Franck–Hertz experiment.
Experiment with neon
In instructional laboratories, the Franck–Hertz experiment is often done using neon gas, which shows the onset of inelastic collisions with a visible orange glow in the vacuum tube, and which also is non-toxic, should the tube be broken. With mercury tubes, the model for elastic and inelastic collisions predicts that there should be narrow bands between the cathode and the grid where the mercury emits light, but the light is ultraviolet and invisible. With neon, the Franck–Hertz voltage interval is 18.7 volts, and an orange glow appears near the grid when 18.7 volts is applied. This glow will move closer to the cathode with increasing accelerating potential, and indicates the locations where electrons have acquired the 18.7 eV required to excite a neon atom. At 37.4 volts two distinct glows will be visible: one midway between the cathode and grid, and one right at the accelerating grid. Higher potentials, spaced at 18.7 volt intervals, will result in additional glowing regions in the tube.
An additional advantage of neon for instructional laboratories is that the tube can be used at room temperature. However, the wavelength of the visible emission is much longer than predicted by the Bohr relation and the 18.7 V interval. A partial explanation for the orange light involves two atomic levels lying 16.6 eV and 18.7 eV above the lowest level. Electrons excited to the 18.7 eV level fall to the 16.6 eV level, with concomitant orange light emission.
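A one-line check (Python, rounded constants) confirms that the 2.1 eV gap between these two levels corresponds to light near 590 nm, which is indeed orange.

```python
# Wavelength corresponding to the 18.7 eV - 16.6 eV = 2.1 eV transition.
h, c, e = 6.626e-34, 2.998e8, 1.602e-19
wavelength = h * c / (2.1 * e)
print(f"{wavelength*1e9:.0f} nm")   # ~590 nm, consistent with the orange glow
```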
References
Further reading
Selection of images of a vacuum tube used for the Franck–Hertz experiment in instructional laboratories.
Translation of Franck's Nobel lecture that he gave December 11, 1926.
Translation of Hertz's Nobel lecture that he gave December 11, 1926.
See also Nicoletopoulos, who died in 2013, had authored and co-authored several papers related to the Franck–Hertz experiment; these papers challenge the conventional interpretations of the experiment. See
Franck and Hertz's original paper reported anode currents for accelerating voltages up to about 15 V, as illustrated in the figure above. Additional maxima and minima occur when the current is measured at higher voltages. This paper notes that the spacing between the minima and maxima isn't exactly 4.9 V, but increases for higher voltages and varies with temperature, and provides a model for this effect.
External links
Physics experiments
Foundational quantum physics
1914 in science | Franck–Hertz experiment | [
"Physics"
] | 2,928 | [
"Physics experiments",
"Foundational quantum physics",
"Experimental physics",
"Quantum mechanics"
] |
11,271,178 | https://en.wikipedia.org/wiki/Transient%20state | In systems theory, a system is said to be transient or in a transient state when a process variable or variables have been changed and the system has not yet reached a steady state. In electrical engineering, the time taken for an electronic circuit to change from one steady state to another steady state is called the transient time.
Examples
Chemical Engineering
When a chemical reactor is being brought into operation, the concentrations, temperatures, species compositions, and reaction rates are changing with time until operation reaches its nominal process variables.
Electrical engineering
When a switch is closed in an electrical circuit containing a capacitor or inductor, the component slows the resulting change in voltage or current, so the system takes a substantial amount of time to reach its new steady state. This period of time is known as the transient state.
A capacitor acts as a short circuit immediately after the switch is closed, increasing its impedance during the transient state until it acts as an open circuit in its steady state.
An inductor is the opposite, behaving as an open circuit until reaching a short circuit steady state.
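A minimal sketch of such a transient, assuming a simple RC charging circuit with illustrative component values (Python):

```python
# RC charging transient: the capacitor voltage approaches its new steady state
# exponentially, v(t) = V_source * (1 - exp(-t / (R*C))).  Values are illustrative.
import math

V_source, R, C = 5.0, 10e3, 1e-6     # volts, ohms, farads -> tau = 10 ms
tau = R * C

for n in range(1, 6):
    t = n * tau
    v = V_source * (1.0 - math.exp(-t / tau))
    print(f"t = {n} tau = {t*1e3:4.0f} ms : v = {v:.3f} V "
          f"({100 * v / V_source:.1f}% of steady state)")
# After roughly 5 time constants (~50 ms here) the transient is usually
# considered over and the circuit treated as being in steady state.
```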
See also
Attractor
Carrying capacity
Control theory
Dynamical system
Ecological footprint
Economic growth
Engine test stand
Equilibrium point
List of types of equilibrium
Evolutionary economics
Growth curve
Herman Daly
Homeostasis
Lead-lag compensator
Limit cycle
Limits to Growth
Population dynamics
Race condition
Simulation
State function
Steady state
Steady state economy
Steady State theory
Systems theory
Thermodynamic equilibrium
Transient modelling
Transient response
References
Chemical process engineering
Electrical engineering
Systems theory
Control theory | Transient state | [
"Chemistry",
"Mathematics",
"Engineering"
] | 305 | [
"Applied mathematics",
"Control theory",
"Chemical engineering",
"Chemical process engineering",
"Chemical reaction stubs",
"Electrical engineering",
"Dynamical systems"
] |
11,271,197 | https://en.wikipedia.org/wiki/Transient%20%28civil%20engineering%29 | In civil engineering, a transient is a short-lived pressure wave. A common example is water hammer.
Transients are often misunderstood and not accounted for in the design of water distribution systems, thus contributing to hydraulic element failures, such as pipe breaks and pump/valve failures.
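A first-cut estimate of the surge pressure produced by water hammer can be made with the Joukowsky equation; the sketch below (Python) uses illustrative values for the pressure-wave speed and the arrested flow velocity.

```python
# Joukowsky estimate of a water-hammer surge: delta_p = rho * a * delta_v,
# where a is the pressure-wave speed in the pipe.  Values are illustrative.
rho     = 1000.0    # water density, kg/m^3
a       = 1200.0    # assumed wave speed in a fairly rigid pipe, m/s
delta_v = 2.0       # flow velocity stopped suddenly by a valve closure, m/s

delta_p = rho * a * delta_v                       # pascals
print(f"pressure surge ≈ {delta_p/1e5:.0f} bar")  # ~24 bar above the steady head
```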
Viscoelastic transient flow involves sudden changes in flow properties in viscoelastic (VE) pipes, which can lead to damage.
Transients in electrical circuits are a different phenomenon.
External links
Journal of Applied Fluid Transients (JAFT)
References
Hydraulic engineering | Transient (civil engineering) | [
"Physics",
"Engineering",
"Environmental_science"
] | 100 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Civil engineering stubs",
"Hydraulic engineering"
] |
11,273,068 | https://en.wikipedia.org/wiki/Growing%20block%20universe | The growing block universe, or the growing block view, is a theory of time arguing that the past and present both exist, and the future as yet does not. The present is an objective property, to be compared with a moving spotlight. By the passage of time more of the world comes into being; therefore, the block universe is said to be growing. The growth of the block is supposed to happen in the present, a very thin slice of spacetime, where more of spacetime is continually coming into being. Growing block theory should not be confused with block universe theory, also known as eternalism.
The growing block view is an alternative to both eternalism (according to which past, present, and future all exist) and presentism (according to which only the present exists). It is held to be closer to common-sense intuitions than the alternatives. C. D. Broad was a proponent of the theory (1923). Some modern defenders are Michael Tooley (in 1997) and Peter Forrest (in 2004). Fabrice Correia and Sven Rosenkranz (2015) have developed their own distinctive view of this theory.
Overview
Broad first proposed the theory in 1923. He described the theory as follows:
It will be observed that such a theory as this accepts the reality of the present and the past, but holds that the future is simply nothing at all. Nothing has happened to the present by becoming past except that fresh slices of existence have been added to the total history of the world. The past is thus as real as the present. On the other hand, the essence of a present event is, not that it precedes future events, but that there is quite literally nothing to which it has the relation of precedence. The sum total of existence is always increasing, and it is this which gives the time-series a sense as well as an order. A moment t is later than a moment t' if the sum total of existence at t includes the sum total of existence at t' together with something more.
This dynamic theory of time conforms with the common-sense intuition that the past is fixed, the future is unreal, and the present is constantly changing. The theory resolves the paradox that time has a beginning but does not seem to have an end. There are also other reasons for supporting the growing block view of time that go beyond the common-sense. For example, Tooley bases his argument on the causal relation. His main argument as outlined by Dainton is as follows:
Events in our world are causally related.
The causal relation is inherently asymmetrical. Effects depend on their causes in a way that causes do not depend on their effects.
This asymmetry is only possible if a cause's effects are not real as of the time of their cause.
Causes occur before their effects. "X is earlier than Y" means roughly that some event simultaneous with X causes some event simultaneous with Y.
Our universe must therefore be a growing block.
Criticism
In the 21st century, several philosophers, such as David Braddon-Mitchell (2004), Craig Bourne, and Trenton Merricks, observed that if the growing block view is correct then it must be concluded that we cannot know whether now is now. The first occurrence of "now" is an indexical and the second occurrence of "now" refers to the objective tensed property; the thought at issue can be expressed by the sentence "This part of spacetime has the property of being present." Consider, for example, Socrates discussing with Gorgias in the past and at the same time thinking that the discussion is occurring now. According to the growing block view, tense is a real property of the world, so his thought is about now, the objective present. He thinks tenselessly that his thought is occurring on the edge of being, but he is wrong because he is in the past; he does not know that now is now, and one cannot be sure that one is not in the same position. As there is nothing special about Socrates, it cannot be known whether now is now. Some have responded that there is an ontological distinction between the past and the present. For instance, Forrest (2004) argues that although the past exists, it is lifeless and inactive; consciousness, as well as the flow of time, is not active within the past and can only occur at the boundary of the block universe, where the present exists.
See also
An Experiment with Time, which proposes a similar concept
Eternity
Philosophy of space and time
References
Bibliography
External links
Concepts in metaphysics
Conceptual models
Theories of time | Growing block universe | [
"Physics",
"Mathematics"
] | 928 | [
"Physical quantities",
"Time",
"Quantity",
"Philosophy of time",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
11,273,929 | https://en.wikipedia.org/wiki/Energy%20service%20company | An energy service company (ESCO) is a company that provides a broad range of energy solutions including designs and implementation of energy savings projects, retrofitting, energy conservation, energy infrastructure outsourcing, power generation, energy supply, and risk management.
A newer breed of ESCO includes innovative financing methods, such as off-balance sheet mechanisms, a range of applicable equipment configured in such a way that reduces the energy cost of a building. The ESCO starts by performing an analysis of the property, designs an energy efficient solution, installs the required elements, and maintains the system to ensure energy savings during the payback period. The savings in energy costs are often used to pay back the capital investment of the project over a five to twenty years period or reinvested into the building to allow the capital upgrades that may otherwise be unfeasible. If the project does not provide returns on the investment, the ESCO is often responsible to pay the difference.
History
The beginning
The start of the energy services business can be attributed to the energy crisis of the late 1970s, as entrepreneurs developed ways to combat the rise in energy costs. One of the earliest examples was a company in Texas, Time Energy, which introduced a device to automate the switching of lights and other equipment to regulate energy use. The primary reason the product did not initially sell was that potential users doubted the promised savings would actually materialize. To combat this doubt, the company decided to install the device upfront and ask for a percentage of the savings that accumulated. The result was the basis for the ESCO model. Through this process, the company achieved higher sales and greater returns, since the savings were large.
Industry growth through the 1970s and 1980s
As more entrepreneurs saw this market grow, more companies came into creation. The first wave of ESCOs were often small divisions of large energy companies or small, upstart, independent companies. However, after the energy crisis came to an end, the companies had little leverage on potential clients to perform energy-saving projects, given the lower cost of energy. This prevented the growth experienced in the late 1970s from continuing. The industry grew slowly through the 1970s and 1980s, spurred by specialist firms such as Hospital Efficiency Corporation (HEC Inc.), established in 1982 to focus on the energy intensive medical sector. HEC Inc., later renamed Select Energy Services, was acquired in 1990 by Northeast Utilities, and sold in 2006 to Ameresco.
The 1990s: Utilities and consolidated energy companies become the major players
With the rising cost of energy and the availability of efficiency technologies in lighting, HVAC (heating, ventilation and air conditioning), and building energy management, ESCO projects became much more commonplace. The term ESCO has also become more widely known among potential clients looking to upgrade their building systems that are either outdated and need to be replaced, or for campus and district energy plant upgrades.
With deregulation in the U.S. energy markets in the 1990s, the energy services business experienced a rapid rise. Utilities, which for decades enjoyed the shelter of monopolies with guaranteed returns on power plant investments, now had to compete to supply power to many of their largest customers. They now looked to energy services as a potential new business line to retain their existing large customers. Also, with the new opportunities on the supply side, many energy services companies (ESCOs) started to expand into the generation market, building district power plants or including cogeneration facilities within efficiency projects. For example, in November 1996 BGA, Inc., formerly a privately held, regional energy performance contracting and consulting company was acquired by TECO Energy, and in 2004 was acquired by Chevron Corporation. In 1998, BGA entered the District Energy Plant business, completing construction on the first 3rd-party owned and operated district cooling plant in Florida.
Decade of the 2000s: Consolidation, exit of many utilities
In the wake of the Enron collapse in 2001, and the sputtering or reverse of deregulation efforts, many utilities shut down or sold their energy services businesses. There was a significant consolidation among the remaining independent firms. According to the industry group NAESCO, revenues of ESCOs in the U.S. grew by 22% in 2006, reaching $3.6 billion.
ESCO operating principles
Introduction
An energy service company (ESCO) is a company that provides comprehensive energy solutions to its customers, including auditing, redesigning and implementing changes to the ways the customer consumes energy, the main goal being improved efficiency. Other possible services provided include energy infrastructure outsourcing, energy supply, financing and risk management. It is this comprehensiveness of services that differentiates an ESCO from a common energy company, whose main business is solely providing energy to its customers. Typically compensation to the ESCO is performance based so that the benefits of improved energy efficiency are shared between the client and the ESCO.
ESCOs often use performance contracting, meaning that if the project does not provide returns on the investment, the ESCO is responsible to pay the difference, thus assuring their clients of the energy and cost savings. Therefore, ESCOs are fundamentally different from consulting engineers and equipment contractors: the former are typically paid for their advice, whereas the latter are paid for the equipment, and neither accept any project risk. The risk-free nature of the service the ESCOs provide offers a convincing incentive for their clients to invest.
Some typical characteristic of ESCOs are as follows:
Ownership – ESCOs may be privately owned companies, either independent or part of a large conglomerate, state-owned, nonprofits, joint ventures, manufacturers or manufacturers' subsidiaries.
Clients – ESCOs typically specialize on market niches by sector (industries, utilities, real estate, etc.) and by size (large or small projects).
Technology – Some ESCOs have a technological specialization (e.g. lighting, HVAC, a particular industrial process) whereas others aim for a holistic approach.
Project financing – Financing capabilities vary with the financial situation of the ESCO. Some have large parent companies, which allows them to self-finance projects. However, all ESCOs rely to some extent on third-party financing.
Developing a project
The energy savings project often begins with the development of ideas that would generate energy savings, and in turn, cost savings. This task is usually the responsibility of the ESCO. The ESCO often approaches a potential client with a proposal of an energy savings project and a performance contract. This ESCO is said to “drive” the project. Once the owner is aware of the possibility of an energy savings project, he or she may choose to place it out for bid, or just stick with the original ESCO. During the initial period of research and investigation, an energy auditor from the ESCO surveys the site and reviews the project's systems to determine areas where cost savings are feasible, usually free of charge to the client. This is the energy audit, and the phase is often referred to as the feasibility study. A hypothesis of the potential project is developed by the client and the auditor, and then the ESCO's engineering development team expands upon and compiles solutions.
This next phase is referred to as the engineering and design phase, which further defines the project and can provide more firm cost and savings estimates. The engineers are responsible for creating cost-effective measures to obtain the highest potential of energy savings. These measures can range from highly efficient lighting and heating/air conditioning upgrades, to more productive motors with variable speed drives and centralized energy management systems. There is a wide array of measures that can produce large energy savings.
Once the project has been developed and a performance contract signed, the construction or implementation phase begins. Following the completion of this phase, the monitoring and maintenance or Measurement and Verification (M & V) phase begins. This phase is the verification of the pre-construction calculations and is used to determine the actual cost savings. This phase is not always included in the performance contract. In fact, there are three options the owner must consider during the performance contract review. These options are, from least to most expensive:
No warranty other than that provided on the equipment.
ESCO provided M & V to show the projected energy savings during the short term following completion.
ESCO provided M & V to show the projected energy savings during the entire payback period.
A typical transaction involves the ESCO borrowing cash to purchase equipment or to implement energy-saving measures for its clients. The client pays the ESCO its regular energy cost (or a large fraction of it), but the energy savings enable the ESCO to pay only a fraction of that to the energy supplier. The difference goes to pay the interest on the loan and provides the ESCO's profit. Typically, ESCOs are able to implement and finance the efficiency improvements better than their client company could by itself.
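As a rough illustration of this cash flow, the following sketch uses entirely hypothetical numbers (baseline bill, savings fraction, client payment share, loan payment); none of these figures come from the article, and real contracts allocate the savings in many different ways.

```python
# Hypothetical shared-savings cash flow for an ESCO project; all figures are
# illustrative assumptions, not data from any real contract.
baseline_energy_cost = 1_000_000   # client's annual energy bill before the retrofit ($)
savings_fraction = 0.25            # efficiency measures cut consumption by 25%
client_payment_fraction = 0.95     # client keeps paying 95% of the old bill to the ESCO
annual_debt_service = 150_000      # ESCO's loan repayment for the installed equipment ($)

new_supplier_cost = baseline_energy_cost * (1 - savings_fraction)  # what the ESCO now pays for energy
client_payment = baseline_energy_cost * client_payment_fraction    # what the client pays the ESCO

esco_margin = client_payment - new_supplier_cost - annual_debt_service
client_saving = baseline_energy_cost - client_payment

print(f"ESCO annual margin:   ${esco_margin:,.0f}")    # $50,000
print(f"Client annual saving: ${client_saving:,.0f}")  # $50,000
```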
Choosing an ESCO
Once the project has been defined, but before much of the engineering work has been completed, it may be necessary to choose an ESCO by putting the project “out to bid”. This is usually the case when the client has developed the project on its own or is required by the government to allow others to bid on the work, as on any state or federally funded project. The typical process includes a Request for Qualifications (RFQ), in which the interested ESCOs submit their corporate resumes, business profiles, experience, and initial plan. Once these are received, the client creates a “short list” of 3–5 companies whose profiles for the project best match the owner's ideas in the RFQ. The client then issues a Request for Proposal (RFP), which asks for a much more detailed explanation of the project; the resulting document contains all cost savings measures, products, M & V plans, and the performance contract. The client often allows a minimum of six weeks for this information to be compiled and submitted. Once submitted, the proposals are reviewed by the client, who may conduct interviews with the applicants. The client then selects the ESCO that, in the client's judgment, presents the best possible solution to the energy project. A good ESCO will help the owner put all the pieces together from start to finish. According to the Energy Services Coalition,
“A qualified ESCO can help you put the pieces together:
Identify and evaluate energy-saving opportunities;
Develop engineering designs and specifications;
Manage the project from design to installation to monitoring;
Arrange for financing;
Train your staff and provide ongoing maintenance services; and
Guarantee that savings will cover all project costs.”
Energy savings tracking methods
After installing energy conservation measures (ECMs), ESCOs often determine the energy savings resulting from the project and present the savings results to their customers. A common way to calculate energy savings is to measure the flows of energy associated with the ECM, and then to apply spreadsheet calculations to determine savings. For example, a chiller retrofit would require measurements of chilled water supply and return temperatures and kW. The benefit of this approach is that the ECM is isolated, and that only energy flows associated with the ECM itself are considered.
This method is described as Option A or Option B in the International Performance Measurement and Verification Protocol (IPMVP); the different options are described below. Option A requires some measurement and allows for estimation of some parameters. Option B requires measurement of all parameters. In both options, calculations are done (typically in spreadsheets) to determine energy savings. Option C uses utility bills to determine energy savings.
There are many situations where Option A or Option B (metering and calculating) is the best approach to measuring energy savings; however, some ESCOs insist on using only Option A or Option B even when Option C would clearly be more appropriate. If the ESCO is a lighting contractor, Option A should work in all cases: spot measurements of fixtures before and after, agreed-upon hours of operation, and simple calculations can be entered into a spreadsheet that calculates savings, and the same spreadsheet can be used over and over. However, for ESCOs that offer a variety of different retrofits, it is necessary to be able to employ all options so that the best option can be selected for each individual job. Controls retrofits, or retrofits to HVAC systems, are typically excellent candidates for Option C.
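For the lighting example above, an Option A style calculation reduces to a few lines of arithmetic. The fixture count, wattages, operating hours and tariff below are illustrative assumptions only; in practice the measured and stipulated values are fixed in the project's M&V plan.

```python
# Minimal sketch of an IPMVP Option A style calculation for a lighting retrofit.
fixtures = 400          # number of retrofitted fixtures (assumed)
kw_before = 0.144       # spot-measured demand per fixture before retrofit (kW)
kw_after = 0.052        # spot-measured demand per fixture after retrofit (kW)
hours_per_year = 3_500  # agreed (stipulated) annual operating hours
tariff = 0.11           # blended electricity price ($/kWh)

kwh_saved = fixtures * (kw_before - kw_after) * hours_per_year
dollars_saved = kwh_saved * tariff

print(f"Energy saved: {kwh_saved:,.0f} kWh/year")   # 128,800 kWh/year
print(f"Cost avoided: ${dollars_saved:,.0f}/year")  # about $14,200/year
```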
After installing the energy conservation measures (ECMs), the savings created from the project must be determined. This process, termed Measurement and Verification (M&V), is frequently performed by the ESCO, but may also be performed by the customer or a third party. The International Performance Measurement and Verification Protocol (IPMVP) is the standard M&V guideline for determining actual savings created by an energy management program. Because savings are the absence of energy use, they cannot be directly measured. IPMVP provides 4 methods for using measurement to reliably determine actual savings. A plan for applying the most appropriate of the 4 general methods to a specific project is typically created and agreed upon by all parties before implementation of the ECMs.
IPMVP Option A – Retrofit Isolation: Key Parameter Measurement
Savings are determined by field measurement of the key performance parameter(s) which define the energy use of the ECM's affected system(s). Parameters not selected for field measurement are estimated.
IPMVP Option B – Retrofit Isolation: All Parameter Measurement
Savings are determined by field measurement of the energy use of the ECM-affected system.
IPMVP Option C – Whole Facility
Savings are determined by measuring energy use at the whole facility or sub-facility level.
IPMVP Option D – Calibrated Simulation
Savings are determined through simulation of the energy use of the facility, or of a sub-facility. The simulation model must be calibrated so that it predicts an energy pattern that approximately matches actual metered data.
For each project, an M&V approach which balances the uncertainty in achieved savings against the cost of the M&V plan should be selected; the choice among the IPMVP options depends on the characteristics of the project. Some plans include only short term verification approaches and others include repeated measurements for an extended period. Because the expense of determining the amount of savings achieved erodes the benefit of the savings themselves, IPMVP suggests not spending more than 10% of the expected savings on M&V. Often M&V approaches are bundled with other monitoring, support, or maintenance services that help achieve or ensure the savings performance. These costs should not be considered M&V expenses and, depending on the project and service details, may greatly exceed 10% of the savings.
Using the savings
Once the project is completed, the immediate energy savings (often between 15 and 35 percent) and the long-term maintenance cost savings can be put towards the capital investment of upgrading the energy system. This is often how ESCOs and performance contracts work: the initial implementation is done, in a sense, free of charge, with the payment coming from the percentage of the energy savings collected by a financing company or the ESCO. The client may also wish to use some capital investment money to lower that percentage during the payback period. The payback period can range from five to twenty years, depending on the negotiated contract; most state or federally funded projects have a maximum payback period of 15 years. Once the equipment and project have been paid for, the client may be entitled to the full amount of savings to use at will. It is also common to see large capital improvements financed through energy savings projects. Upgrades to the mechanical/electrical system, new building envelope components, or even restorations and retrofits may be included in the contract even though they have no effect on the amount of energy savings. By using the energy savings, the client may be able to put the funds once used to pay for energy towards capital improvements that would otherwise be unfeasible with the currently allotted funding.
U. S. Federal Program: "Super-ESPC"
Since its creation in the 1990s, a single U. S. government program known as "Super-ESPC" (ESPC stands for Energy Savings Performance Contracts) has been responsible for $2.9B in ESCO contracts. The program was modified and reauthorized in December 2008, and sixteen firms were awarded Indefinite delivery/indefinite quantity (IDIQ) contracts for up to $5B each, for total potential energy-savings projects worth $80B.
Grouping the sixteen firms provides a convenient illustration of the industry structure and the ways that each firm generates value through projects that use the ESCO model of energy-savings performance contracts. Equipment-affiliated firms use performance contracting as a sales channel for their products. Utility-affiliated firms offer ESCO projects as a value-added service to attract and retain large customers and generally focus only on their utility footprint. Non-utility energy services companies are product neutral, tend to have a larger geographic footprint, and typically offer a wide range of services from energy retrofits to renewable energy development.
Equipment affiliated
NORESCO (Carrier)
Honeywell Building Solutions SES
Johnson Controls Government Systems, L.L.C. (York)
Schneider Electric
Siemens Government Services, Inc.
Trane
Utility affiliated
ConEdison
Constellation
FPL Energy Services
Pepco Energy Services
Energy Systems Group
Non-utility energy services
Ameresco (Ennovate, Exelon Services Federal Group, E3, APS...Acquired)
The Benham Companies, LLC (SAIC Acquired)
CEG Solutions LLC (formerly Clark Energy Group LLC)
Lockheed Martin Services, Inc.
McKinstry
Brewer Garrett
ESCO 2.0
In June 2005, the GAO released a report, “Energy Savings: Performance Contracts Offer Benefits, But Vigilance Is Needed To Protect Government Interests.” The Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics agreed with the GAO findings. “While these complicated contracts are structured to ensure that savings will exceed costs,” the DOD noted, “we recognize that our measurement and verification procedures must be improved to confirm estimates with actual data.” Unverified savings, often stipulated rather than proven, do not put more oil in the ground, take CO2 out of the air, or reduce operating budgets.
The GAO ESPC study brings into question whether or not there is sufficient data to prove that the gains delivered by ESCOs are sustainable over time. The study further questions the practice of having ESCOs monitoring and validating the performance of their own projects.
In fact, most buildings and facilities exhibit the same basic limitations with respect to energy conservation and optimum maintenance. US Federal studies show that major and minor building systems routinely fail to meet performance expectations, and these faults often go unnoticed over time. The functions of a building, the number of tenants, and the configuration of the space change over time in unanticipated manners that adversely affect the systems that control building performance.
Surprisingly, almost all buildings, building complexes, and systems inside buildings still operate in a disconnected, stand-alone manner. Proprietary systems result in buildings that needlessly waste energy. Recent studies have found that roughly 30% of LEED certified buildings perform substantially better than anticipated, while 25% perform substantially worse than anticipated. In general, LEED certified buildings perform 25-30% better than non-LEED certified buildings with regards to energy use. It is ultimately difficult or impossible for customers to construct a single integrated picture that correlates energy usage and maintenance costs to control system performance, space usage, conservation measures, and the behavior of those using the facility space.
A more recent phenomenon is the concept of combining the benefits of performance contracting with the benefits of green buildings, sometimes described as green performance contracting. The concept makes sense because, for green buildings, the costliest prerequisites to meet are usually the energy efficiency requirements. The LEED rating system requires buildings to be benchmarked using the EPA EnergyStar system; the minimum to meet the LEED prerequisite is a score of 75 or greater (meaning the building performs better than 75 percent of benchmarked buildings). Since performance contracting attempts to find all the sources of energy waste, a building that has gone through the performance contracting process should meet the LEED prerequisite.
Green performance contracting can be used to achieve sustainability goals in new building design and construction as well as in existing buildings.
New Buildings: Higher-efficiency choices are compared to the modeled performance of the as-designed less-efficient building. Applying performance contracting to buildings being designed and built is the perfect cure for pressure to “value engineer” the efficiency and sustainability out of new buildings as they are designed. In new buildings, performance contracting bridges the gap between the first-cost and life-cycle-cost perspectives by using long-term energy savings to pay for the incremental first-cost of high-efficiency measures.
Existing buildings: Green performance contracting provides a mechanism for implementing and financing the building's efficiency and sustainability upgrades, including improved operations. Achieving sustainable building performance in existing buildings can be done at reasonable costs. If needed, system or building upgrades can be spread out over time and implemented when capital dollars become available.
Green performance contracting provides comprehensive integrated solutions to a wide variety of building, site and infrastructure improvements, and it allows building owners to pay for these building sustainability improvements, including capital improvements or renewable energy, with funds in the organization's expense budget.
The result is a better performing building along with all the public relations and marketing benefits of green buildings.
Retro-commissioning
Studies show that virtually every building suffers from incompletely installed controls systems, excessive chilling and heating capacity, and an inability to obtain the data needed to let senior decision makers understand how a building is really performing. The National Institute of Standards and Technology (NIST) found that an average building lasts only two-thirds of its forecast life before it needs to be replaced or substantially retrofitted. Often the explanation for this cluster of problems is incomplete or improper building commissioning at the beginning of the building's life cycle. (Building commissioning is the start-up process by which every new building's systems are initially configured and calibrated to its occupancy loads to get it up and running.)
According to NIST, the time needed to do building commissioning right is rarely available, defects and opportunities are overlooked, and system potential goes unrealized. Over time equipment performance and control sequences naturally degrade, and substandard performance or even failures of systems and components go unrecognized. The ultimate result is almost universal waste of various kinds, including substantial energy and maintenance cost.
Independent Measurement and Verification
Few, if any, of these factors are addressable by the energy services companies or through ESPCs, because the information needed to define the real problems is not captured. There is a clear need for integrated solutions that offer the kind of accountability and transparency — and plenty of the “actual data” — that is currently lacking in the ESPC process. What is needed, in fact, is an independent means of continuously monitoring performance so that buildings reach peak performance sooner and maintain peak performance over time, despite changes in use, maintenance, energy cost, and user behavior.
Key components of ESCO 2.0
Real-time integration and visibility of building management systems, metering subsystems, and asset management applications.
Automated, real-time analysis and reporting of key performance indicators associated with subsystem operations, energy use, and equipment maintenance management.
Recommendations for results-oriented energy usage and maintenance program refinements that will enable energy reduction targets to be met or exceeded.
On-going monitoring of subsystems to continually expand energy conservation efforts and maintenance management improvements for further cost reductions.
Independent verification of ESCO and other Energy Conservation Measures (ECM) programs.
US Federal reporting into OMB Scorecard
UK and European based ESCOs
A number of firms have started offering ESCO services in Europe. As in the US, some belong to utilities, some belong to manufacturers and others are independent.
See also
Efficient energy use
Industrial Assessment Center
RESCO – renewable energy service company
References
External links
AssoESCo - Associazione italiana delle Energy Service Company e degli Operatori dell'Efficienza Energetica
ESCO Europe conference, 20-21, Milan, Italy
New York Times, Sept 1, 2008 Ambit and other ESCOs for consumers
Energy companies
Energy conservation | Energy service company | [
"Engineering"
] | 4,976 | [
"Energy companies",
"Energy organizations"
] |
11,274,275 | https://en.wikipedia.org/wiki/Carbamoyl%20phosphate%20synthetase | Carbamoyl phosphate synthetase catalyzes the ATP-dependent synthesis
of carbamoyl phosphate from glutamine () or ammonia () and bicarbonate. This ATP-grasp enzyme catalyzes the reaction of ATP and bicarbonate to produce carboxy phosphate and ADP. Carboxy phosphate reacts with ammonia to give carbamic acid. In turn, carbamic acid reacts with a second ATP to give carbamoyl phosphate plus ADP.
It represents the first committed step in pyrimidine and arginine biosynthesis in prokaryotes and eukaryotes, and in the urea cycle in most terrestrial vertebrates. Most prokaryotes carry one form of CPSase that participates in both arginine and pyrimidine biosynthesis; however, certain bacteria can have separate forms.
There are three different forms that serve very different functions:
Carbamoyl phosphate synthetase I (mitochondria, urea cycle)
Carbamoyl phosphate synthetase II (cytosol, pyrimidine metabolism).
Carbamoyl phosphate synthetase III (found in fish).
Mechanism
Carbamoyl phosphate synthetase has three main steps in its mechanism and is, in essence, irreversible.
Bicarbonate ion is phosphorylated with ATP to create carboxy phosphate.
The carboxy phosphate then reacts with ammonia to form carbamic acid, releasing inorganic phosphate.
A second molecule of ATP then phosphorylates carbamic acid, creating carbamoyl phosphate.
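Summing the three steps gives the overall stoichiometry sketched below for the ammonia-utilizing reaction (the glutamine-dependent forms first hydrolyze glutamine to supply the ammonia). The equation is written out here for illustration; it is not quoted from the source text.

```latex
% Overall reaction of carbamoyl phosphate synthetase (ammonia-dependent form)
2\,\mathrm{ATP} + \mathrm{HCO_3^-} + \mathrm{NH_3}
  \longrightarrow \text{carbamoyl phosphate} + 2\,\mathrm{ADP} + \mathrm{P_i}
```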
The activity of the enzyme is known to be inhibited by both Tris and HEPES buffers.
Structure
Carbamoyl phosphate synthase (CPSase) is a heterodimeric enzyme composed of a small and a large subunit (with the exception of CPSase III, which is composed of a single polypeptide that may have arisen from gene fusion of the glutaminase and synthetase domains). CPSase has three active sites, one in the small subunit and two in the large subunit. The small subunit contains the glutamine binding site and catalyses the hydrolysis of glutamine to glutamate and ammonia, which is in turn used by the large chain to synthesize carbamoyl phosphate. The small subunit has a 3-layer beta/beta/alpha structure, and is thought to be mobile in most proteins that carry it. The C-terminal domain of the small subunit of CPSase has glutamine amidotransferase activity. The large subunit has two homologous carboxy phosphate domains, both of which have ATP-binding sites; however, the N-terminal carboxy phosphate domain catalyses the phosphorylation of bicarbonate, while the C-terminal domain catalyses the phosphorylation of the carbamate intermediate. The carboxy phosphate domain found duplicated in the large subunit of CPSase is also present as a single copy in the biotin-dependent enzymes acetyl-CoA carboxylase (ACC), propionyl-CoA carboxylase (PCCase), pyruvate carboxylase (PC) and urea carboxylase.
The large subunit in bacterial CPSase has four structural domains: the carboxy phosphate domain 1, the oligomerisation domain, the carbamoyl phosphate domain 2 and the allosteric domain. CPSase heterodimers from Escherichia coli contain two molecular tunnels: an ammonia tunnel and a carbamate tunnel. These inter-domain tunnels connect the three distinct active sites, and function as conduits for the transport of unstable reaction intermediates (ammonia and carbamate) between successive active sites. The catalytic mechanism of CPSase involves the diffusion of carbamate through the interior of the enzyme from the site of synthesis within the N-terminal domain of the large subunit to the site of phosphorylation within the C-terminal domain.
References
External links
GeneReviews/NCBI/NIH/UW entry on Urea Cycle Disorders Overview
Urea cycle
Protein domains
EC 6.3.4 | Carbamoyl phosphate synthetase | [
"Biology"
] | 883 | [
"Protein domains",
"Protein classification"
] |
11,275,526 | https://en.wikipedia.org/wiki/Audio%20router | An audio router is a device that transports audio signals from inputs to outputs.
Inputs and Outputs
The number of inputs and outputs varies dramatically. Routers are normally described as the number of inputs by the number of outputs, e.g. 2×1 or 256×256.
Signals
The signals transported and switched can be analogue audio or digital audio. Digital audio for broadcast use is usually carried in the AES/EBU format. Broadband routers can route more than one signal type, e.g. analogue together with one or more types of digital.
Crosspoints
Because any of the inputs can be routed to any output, the router is internally organised as a matrix of crosspoints, each of which can be activated to pass the corresponding input signal to the desired output.
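A crosspoint matrix can be modelled very simply in software, as in the sketch below. This is only a conceptual illustration; the class and method names are invented for this example and do not correspond to any manufacturer's control API.

```python
# Conceptual model of an N-input x M-output crosspoint router.
# Each output is fed by at most one input; an input may feed many outputs.
class CrosspointRouter:
    def __init__(self, inputs: int, outputs: int):
        self.inputs = inputs
        self.outputs = outputs
        self.routing = [None] * outputs      # routing[out] = selected input, or None

    def route(self, source: int, destination: int) -> None:
        """Activate the crosspoint connecting `source` to `destination`."""
        if not (0 <= source < self.inputs and 0 <= destination < self.outputs):
            raise ValueError("crosspoint out of range")
        self.routing[destination] = source   # selecting a new source replaces the previous crosspoint

# A 256x256 router sending input 12 to outputs 3 and 7:
router = CrosspointRouter(256, 256)
router.route(12, 3)
router.route(12, 7)
print(router.routing[3], router.routing[7])  # -> 12 12
```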
Some manufacturers of audio routers
Lawo
Datavideo
Imagine Communications
AEQ
FOR-A
Klotz Digital
NVISION
Panasonic
Philips
Ross Video
Snell & Wilcox
Sony
Thomson Grass Valley
Utah Scientific
Matrix Switch Corporation
See also
Video router
Vision mixer
Television technology
Television terminology | Audio router | [
"Technology"
] | 213 | [
"Information and communications technology",
"Television technology"
] |
11,278,410 | https://en.wikipedia.org/wiki/Booster%20%28electric%20power%29 | A booster was a motor–generator (MG) set used for voltage regulation in direct current (DC) electrical power circuits. The development of alternating current and solid-state devices has rendered it obsolete. Boosters were made in various configurations to suit different applications.
Line booster
In the days of direct current mains, voltage drop along the line was a problem so line boosters were used to correct it. Suppose that the mains voltage was 110 V. Houses near the power station would receive 110 volts but those remote from the power station might receive only 100 V so a line booster would be inserted at an appropriate point to "boost" the voltage. It consisted of a motor, connected in parallel with the mains, driving a generator, in series with the mains. The motor ran at the depleted mains voltage of 100 V and the generator added another 10 V to restore the voltage to 110 V. This was an inefficient system and was made obsolete by the development of alternating current mains, which allowed for high-voltage distribution and voltage regulation by transformers.
Milking booster
Again in the days of direct current mains, power stations often had large lead-acid batteries for load balancing. These supplemented the steam-powered generators during peak periods and were re-charged off-peak. Sometimes one cell in the battery would become "sick" (faulty, reduced capacity) and a "milking booster" would be used to give it an additional charge and restore it to health. The milking booster was so-called because it "milked" the healthy cells in the battery to give an extra charge to the faulty one. The motor side of the booster was connected across the whole battery but the generator side was connected only across the faulty cell. During discharge periods the booster supplemented the output of the faulty cell.
Reversible booster
Before solid-state technology became available, reversible boosters were sometimes used for speed control in DC electric locomotives. The boosters were called reversible, because they could either increase or decrease the speed of the locomotive.
The motor of the MG set was connected in parallel with the supply, usually at 600 volts, and was mechanically coupled, via a shaft with a heavy flywheel, to the generator. The generator was connected in series with the supply and the traction motors, and its output could be varied between +600 volts, through zero, to -600 volts by adjusting switches and resistors in the field circuit. This allowed the generator voltage to either oppose, or supplement, the line voltage. The net output voltage could therefore be varied smoothly between zero and 1,200 volts as follows:
Generator producing maximum opposing voltage, net output zero volts
Generator producing zero volts, net output 600 volts
Generator producing maximum supplementary voltage, net output 1,200 volts
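The three cases above reduce to adding the generator's output to the 600-volt supply. The short sketch below simply restates that arithmetic; treating the generator voltage as a signed number added to the line voltage is an illustrative simplification.

```python
# Net traction voltage for a reversible booster: line voltage plus the booster
# generator's output, which ranges from fully opposing to fully supplementing.
def net_traction_voltage(supply_v: float, generator_v: float) -> float:
    return supply_v + generator_v

for generator_v in (-600, 0, 600):
    print(f"generator {generator_v:+} V -> net {net_traction_voltage(600, generator_v)} V")
# generator -600 V -> net 0 V;  0 V -> net 600 V;  +600 V -> net 1200 V
```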
To match the 1,200 volt output, the locomotive would have three 400 volt traction motors connected in series. Later locomotives had two 600 volt motors in series.
When the locomotive was working at full power, half the energy came through the MG set and the other half came directly from the supply. This meant that the power rating of the MG set needed to be only half the rating of the traction motors. Thus there was a saving in weight and cost compared to the Ward Leonard system, in which the MG set had to be equal in power rating to the traction motors.
If the power supply to the locomotive was interrupted (e.g. because of a gap in the third rail at a junction) the flywheel would power the MG set for a short period to bridge the gap. During this period, the motor of the MG set would temporarily run as a generator. It was this system that was used in the design of British Rail classes 70, 71 and 74 (Class 73 does not utilise booster equipment).
Metadyne
Some types of London Underground stock (e.g. London Underground O Stock) were fitted with Metadynes. These were four-brush electrical machines which differed from the reversible boosters described above.
Television receivers
When cathode ray tubes were the standard for television receivers, after many years of service the tube would lose brightness, due to low electron emission in the electron gun assembly of each tube. A small "booster" transformer could be added to a set experiencing such symptoms; it would raise the voltage applied to the filament slightly, which would increase emission and restore brightness. Sometimes this step would extend the life of the expensive CRT by years, making it more economical than a replacement.
See also
Boost converter
Repeater
References
Electric motors
Electric power conversion
Energy conversion | Booster (electric power) | [
"Technology",
"Engineering"
] | 944 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
11,278,784 | https://en.wikipedia.org/wiki/Solenopsin | Solenopsin is a lipophilic alkaloid with the molecular formula C17H35N found in the venom of fire ants (Solenopsis). It is considered the primary toxin in the venom and may be the component responsible for the cardiorespiratory failure in people who experience excessive fire ant stings.
Structurally, solenopsins consist of a piperidine ring with a methyl group at position 2 and a long hydrophobic chain at position 6. They are typically oily at room temperature, water-insoluble, and show an absorbance peak at 232 nanometers. Fire ant venom contains other chemically related piperidines, which makes purification of solenopsin from ants difficult. Therefore, solenopsin and related compounds have been the target of organic synthesis, from which pure compounds can be produced for individual study. Solenopsin was first synthesized in 1993, and several groups have since designed novel and creative methods of synthesizing enantiopure solenopsin and other alkaloidal components of ant venom.
Total synthesis
The total synthesis of solenopsin has been described by several methods. One proposed route starts with alkylation of 4-chloropyridine with a Grignard reagent derived from 1-bromoundecane, followed by reaction with phenyl chloroformate to form 4-chloro-1-(phenoxycarbonyl)-2-n-undecyl-1,2-dihydropyridine. The phenyl carbamate is converted to the BOC protecting group, and the pyridine is then methylated at the 6-position. The pyridine ring is reduced to a tetrahydropyridine via catalytic hydrogenation with Pd/C and then further reduced with sodium cyanoborohydride to a piperidine ring. The BOC group is finally removed to yield solenopsin. A number of analogs have been synthesized using modifications of this procedure.
A shorter method of synthesis stemming from commercially-available lutidine has been more recently proposed.
Biological activities
Solenopsins are described as toxic against vertebrates and invertebrates. For example, the compound known as isosolenopsin A has been demonstrated to have strong insecticidal effects which may play a central role in the biology of fire ants.
In addition to its toxicity, solenopsin has a number of other biological activities. It inhibits angiogenesis in vitro via the phosphoinositide 3-kinase (PI3K) signaling pathway, inhibits neuronal nitric oxide synthase (nNOS) in a manner that appears to be non-competitive with L-arginine, and inhibits quorum-sensing signaling in some bacteria. The biological activities of solenopsins have led researchers to propose a number of biotechnological and biomedical applications for these compounds. For instance, the anti-bacterial activity and interference with quorum-sensing signalling mentioned above apparently give solenopsins considerable anti-biofilm activity, which suggests the potential of analogs as new disinfectants and surface-conditioning agents. Also, solenopsins have been demonstrated to inhibit cell division and viability of Trypanosoma cruzi, the cause of Chagas disease, which suggests these alkaloids as potential chemotherapeutic drugs.
Solenopsin and analogs share structural and biological properties with the sphingolipid ceramide, a major endogenous regulator of cell signaling, inducing mitophagy and anti-proliferative effects in different tumor cell lines.
Synthetic analogs of solenopsin are being studied for the potential treatment of psoriasis.
References
Further reading
Piperidine alkaloids
Total synthesis
Toxins | Solenopsin | [
"Chemistry",
"Environmental_science"
] | 790 | [
"Toxicology",
"Piperidine alkaloids",
"Alkaloids by chemical classification",
"Chemical synthesis",
"Total synthesis",
"Toxins"
] |
14,925,389 | https://en.wikipedia.org/wiki/Ras%20superfamily | The Ras superfamily, derived from "Rat sarcoma virus", is a protein superfamily of small GTPases. Members of the superfamily are divided into families and subfamilies based on their structure, sequence and function. The five main families are Ras, Rho, Ran, Rab and Arf GTPases. The Ras family itself is further divided into 6 subfamilies: Ras, Ral, Rap, Rheb, Rad and Rit. Miro is a recent contributor to the superfamily. Each subfamily shares the common core G domain, which provides essential GTPase and nucleotide exchange activity.
The surrounding sequence helps determine the functional specificity of the small GTPase, for example the 'Insert Loop', common to the Rho subfamily, specifically contributes to binding to effector proteins such as WASP.
In general, the Ras family is responsible for cell proliferation, the Rho family for cell morphology, Ran for nuclear transport, and Rab and Arf for vesicle transport.
Subfamilies and members
The following is a list of human proteins belonging to the Ras superfamily:
Unclassified:
ARHGAP5
DNAJC27
GRLF1
RASEF
See also
Ras subfamily
References
G proteins
Protein superfamilies | Ras superfamily | [
"Chemistry",
"Biology"
] | 259 | [
"G proteins",
"Protein superfamilies",
"Protein classification",
"Signal transduction"
] |
14,932,197 | https://en.wikipedia.org/wiki/Gliese%20682 | Gliese 682 or GJ 682 is a red dwarf. It is listed as the 53rd-nearest known star system to the Sun, being 16.3 light years away from the Earth. Even though it is close by, it is dim with a magnitude of 10.95 and thus requires a telescope to be seen. It is located in the constellation of Scorpius, near the bright star Theta Scorpii.
The star is in a crowded region of sky near the Galactic Center, and so appears to be near a number of deep-sky objects from the Solar System's perspective. The star is only 0.5 degrees from the much more distant globular cluster NGC 6388.
Search for planets
Two candidate planets were detected orbiting Gliese 682 in 2014, one of which would be in the habitable zone. However, a 2020 study did not find these planets and concluded that the radial velocity signals were probably caused by stellar activity.
See also
List of nearest stars and brown dwarfs
References
0682
M-type main-sequence stars
Scorpius
086214
CD-44 11909
TIC objects | Gliese 682 | [
"Astronomy"
] | 238 | [
"Scorpius",
"Constellations"
] |
1,665,878 | https://en.wikipedia.org/wiki/Biocompatibility | Biocompatibility is related to the behavior of biomaterials in various contexts. The term refers to the ability of a material to perform with an appropriate host response in a specific situation. The ambiguity of the term reflects the ongoing development of insights into how biomaterials interact with the human body and eventually how those interactions determine the clinical success of a medical device (such as pacemaker, hip replacement or stent). Modern medical devices and prostheses are often made of more than one material so it might not always be sufficient to talk about the biocompatibility of a specific material.
Since the immune response and repair functions in the body are so complicated, it is not adequate to describe the biocompatibility of a single material in relation to a single cell type or tissue. Sometimes one hears of biocompatibility testing, a large battery of in vitro tests used in accordance with ISO 10993 (or other similar standards) to determine whether a certain material (or rather biomedical product) is biocompatible. These tests do not determine the biocompatibility of a material, but they constitute an important step towards the animal testing and finally the clinical trials that will determine the biocompatibility of the material in a given application, and thus of medical devices such as implants or drug delivery devices. Research results have concluded that when performing in vitro cytotoxicity testing of biomaterials, "the authors should carefully specify the conditions of the test and comparison of different studies should be carried out with caution".
History
The word biocompatibility seems to have been mentioned for the first time in peer-reviewed journals and meetings in 1970 by RJ Hegyeli (Amer Chem Soc Annual Meeting abstract) and CA Homsy. It took almost two decades before it began to be commonly used in scientific literature.
More recently, Williams has again attempted to reevaluate the current state of knowledge regarding which factors determine clinical success. In doing so, he notes that an implant may not always have to be positively bioactive, but it must not do any harm (either locally or systemically).
Five definitions of biocompatibility
"The quality of not having toxic or injurious effects on biological systems".
"The ability of a material to perform with an appropriate host response in a specific application", Williams' definition.
"Comparison of the tissue response produced through the close association of the implanted candidate material to its implant site within the host animal to that tissue response recognised and established as suitable with control materials" - ASTM
"Refers to the ability of a biomaterial to perform its desired function with respect to a medical therapy, without eliciting any undesirable local or systemic effects in the recipient or beneficiary of that therapy, but generating the most appropriate beneficial cellular or tissue response in that specific situation, and optimising the clinically relevant performance of that therapy".
"Biocompatibility is the capability of a prosthesis implanted in the body to exist in harmony with tissue without causing deleterious changes".
Comments on the above five definitions
The Dorland Medical definition is not recommended according to the Williams Dictionary, since it only defines biocompatibility as the absence of a host response and does not include any desired or positive interactions between the host tissue and the biomaterials.
This is also called the “Williams definition” or “William's definition”. It was defined in the European Society for Biomaterials Consensus Conference I and can more easily be found in ‘The Williams Dictionary of Biomaterials’.
The ASTM definition is not recommended according to the Williams Dictionary, since it only refers to local tissue responses in animal models.
The fourth is an expansion, or rather a more precise version, of the first definition, noting both the need for low toxicity and that one should be aware of the different demands that various medical applications place on the same material.
All these definitions deal with materials and not with devices. This is a drawback since many medical devices are made of more than one material. Much of the pre-clinical testing of the materials is not conducted on the devices but rather the material itself. But at some stage the testing will have to include the device since the shape, geometry and surface treatment etc. of the device will also affect its biocompatibility.
‘Biocompatible’
In the literature, one quite often stumbles upon the adjective form, ‘biocompatible’. However, according to Williams’ definition, this does not make any sense because biocompatibility is contextual, i.e. much more than just the material itself will determine the clinical outcome of the medical device of which the biomaterial is a part. This also points to one of the weaknesses with the current definition because a medical device usually is made of more than one material.
Metallic glasses based on magnesium with zinc and calcium additions are being tested as potential biocompatible metallic biomaterials for biodegradable medical implants.
Biocompatibility (or tissue compatibility) describes the ability of a material to perform with an appropriate host response when applied as intended. A biocompatible material may not be completely "inert"; in fact, the appropriateness of the host response is decisive.
Suggested sub-definitions
The scope of the first definition is so wide that D Williams tried to find suitable subgroups of applications in order to be able to make narrower definitions. In the MDT article from 2003, the chosen subgroups and their definitions were:
Biocompatibility of long-term implanted devices
The biocompatibility of a long-term implantable medical device refers to the ability of the device to perform its intended function, with the desired degree of incorporation in the host, without eliciting any undesirable local or systemic effects in that host.
Biocompatibility of short-term implantable devices
The biocompatibility of a medical device that is intentionally placed within the cardiovascular system for transient diagnostic or therapeutic purposes refers to the ability of the device to carry out its intended function within flowing blood, with minimal interaction between device and blood that adversely affects device performance, and without inducing uncontrolled activation of cellular or plasma protein cascades.
Biocompatibility of tissue-engineering products
The biocompatibility of a scaffold or matrix for a tissue-engineering product refers to the ability to perform as a substrate that will support the appropriate cellular activity, including the facilitation of molecular and mechanical signalling systems, in order to optimise tissue regeneration, without eliciting any undesirable effects in those cells, or inducing any undesirable local or systemic responses in the eventual host.
In these definitions the notion of biocompatibility is related to devices rather than to materials, in contrast with the top three definitions. There was a consensus conference on biomaterial definitions in Sorrento on September 15–16, 2005.
See also
Biocompatible material
Biomaterial
Medical device
ISO 10993
Medical implant
Medical grade silicone
Bovine submaxillary mucin coatings
Titanium biocompatibility
References
Footnotes
Notes
Biomaterials
Surgery | Biocompatibility | [
"Physics",
"Biology"
] | 1,462 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
1,666,522 | https://en.wikipedia.org/wiki/Madden%E2%80%93Julian%20oscillation | The Madden–Julian oscillation (MJO) is the largest element of the intraseasonal (30- to 90-day) variability in the tropical atmosphere. It was discovered in 1971 by Roland Madden and Paul Julian of the American National Center for Atmospheric Research (NCAR). It is a large-scale coupling between atmospheric circulation and tropical deep atmospheric convection. Unlike a standing pattern like the El Niño–Southern Oscillation (ENSO), the Madden–Julian oscillation is a traveling pattern that propagates eastward, at approximately , through the atmosphere above the warm parts of the Indian and Pacific oceans. This overall circulation pattern manifests itself most clearly as anomalous rainfall.
The Madden–Julian oscillation is characterized by an eastward progression of large regions of both enhanced and suppressed tropical rainfall, observed mainly over the Indian and Pacific Ocean. The anomalous rainfall is usually first evident over the western Indian Ocean, and remains evident as it propagates over the very warm ocean waters of the western and central tropical Pacific. This pattern of tropical rainfall generally becomes nondescript as it moves over the primarily cooler ocean waters of the eastern Pacific, but reappears when passing over the warmer waters over the Pacific Coast of Central America. The pattern may also occasionally reappear at low amplitude over the tropical Atlantic and higher amplitude over the Indian Ocean. The wet phase of enhanced convection and precipitation is followed by a dry phase where thunderstorm activity is suppressed. Each cycle lasts approximately 30–60 days. Because of this pattern, the Madden–Julian oscillation is also known as the 30- to 60-day oscillation, 30- to 60-day wave, or intraseasonal oscillation.
Behavior
Distinct patterns of lower-level and upper-level atmospheric circulation anomalies accompany the MJO-related pattern of enhanced or decreased tropical rainfall across the tropics. These circulation features extend around the globe and are not confined to only the eastern hemisphere. The Madden–Julian oscillation moves eastward at between 4 m/s (14 km/h, 9 mph) and 8 m/s (29 km/h, 18 mph) across the tropics, crossing the Earth's tropics in 30 to 60 days—with the active phase of the MJO tracked by the degree of outgoing long wave radiation, which is measured by infrared-sensing geostationary weather satellites. The lower the amount of outgoing long wave radiation, the stronger the thunderstorm complexes, or convection, is within that region.
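The quoted phase speed and cycle length are mutually consistent, as the small check below shows. The constant-speed treatment is of course a simplification, and the numbers are just the ranges quoted in this section.

```python
# Distance covered by an MJO event drifting eastward at a constant phase speed.
seconds_per_day = 86_400

def distance_km(speed_m_s: float, days: float) -> float:
    return speed_m_s * days * seconds_per_day / 1_000

for speed in (4, 8):        # m/s, quoted range of phase speeds
    for days in (30, 60):   # days, quoted range of cycle lengths
        print(f"{speed} m/s over {days} days -> {distance_km(speed, days):,.0f} km")
# 4 m/s over 60 days and 8 m/s over 30 days each give roughly 20,700 km,
# the same order as the 12,000-20,000 km track mentioned later in the article.
```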
Enhanced surface (upper level) westerly winds occur near the west (east) side of the active convection. Ocean currents in the upper ocean follow in phase with the east-wind component of the surface winds. In advance, or to the east, of the MJO enhanced activity, winds aloft are westerly. In its wake, or to the west of the enhanced rainfall area, winds aloft are easterly. These wind changes aloft are due to the divergence present over the active thunderstorms during the enhanced phase. Its direct influence can be tracked poleward as far as 30 degrees latitude from the equator in both the northern and southern hemispheres, propagating outward from its origin near the equator at around 1 degree of latitude (roughly 111 km) per day.
Irregularities
The MJO's movement around the globe can occasionally slow or stall during the Northern Hemisphere summer and early autumn, leading to consistently enhanced rainfall for one side of the globe and consistently depressed rainfall for the other side. This can also happen early in the year. The MJO can also go quiet for a period of time, which leads to non-anomalous storm activity in each region of the globe.
Local effects
Connection to the monsoon
During the Northern Hemisphere summer season the MJO-related effects on the Indian and West African summer monsoon are well documented. MJO-related effects on the North American summer monsoon also occur, though they are relatively weaker. MJO-related impacts on the North American summer precipitation patterns are strongly linked to meridional (i.e. north–south) adjustments of the precipitation pattern in the eastern tropical Pacific. A strong relationship between the leading mode of intraseasonal variability of the North American Monsoon System, the MJO and the points of origin of tropical cyclones is also present.
A period of warming sea surface temperatures is found five to ten days prior to a strengthening of MJO-related precipitation across southern Asia. A break in the Asian monsoon, normally during the month of July, has been attributed to the Madden–Julian oscillation after its enhanced phase moves off to the east of the region into the open tropical Pacific Ocean.
Influence on tropical cyclogenesis
Tropical cyclones occur throughout the boreal warm season (typically May–November) in both the north Pacific and the north Atlantic basins—but any given year has periods of enhanced or suppressed activity within the season. Evidence suggests that the Madden–Julian oscillation modulates this activity (particularly for the strongest storms) by providing a large-scale environment that is favorable (or unfavorable) for development. MJO-related descending motion is not favorable for tropical storm development. However, MJO-related ascending motion is a favorable pattern for thunderstorm formation within the tropics, which is quite favorable for tropical storm development. As the MJO progresses eastward, the favored region for tropical cyclone activity also shifts eastward from the western Pacific to the eastern Pacific and finally to the Atlantic basin.
An inverse relationship exists between tropical cyclone activity in the western north Pacific basin and the north Atlantic basin, however. When one basin is active, the other is normally quiet, and vice versa. The main reason for this appears to be the phase of the MJO, which is normally in opposite modes between the two basins at any given time. While this relationship appears robust, the MJO is one of many factors that contribute to the development of tropical cyclones. For example, sea surface temperatures must be sufficiently warm and vertical wind shear must be sufficiently weak for tropical disturbances to form and persist. However, the MJO also influences these conditions that facilitate or suppress tropical cyclone formation. The MJO is monitored routinely by both the USA National Hurricane Center and the USA Climate Prediction Center during the Atlantic hurricane (tropical cyclone) season to aid in anticipating periods of relative activity or inactivity.
Influence on African rainfall
The MJO signal is well defined in parts of Africa, including the Congo Basin and East Africa. During the major rainy seasons in East Africa (March to May and October to December), rainfall tends to be lower when the MJO convective core is over the eastern Pacific, and higher when convection peaks over the Indian Ocean. During 'wet' phases, the normal easterly winds weaken, while during 'dry' phases, the easterly winds strengthen.
An increase in frequency of MJO phases with convective activity over the eastern Pacific might have contributed to the drying trend seen in the Congo Basin in the last few decades.
Downstream effects
Link to the El Niño-Southern oscillation
There is strong year-to-year (interannual) variability in Madden–Julian oscillation activity, with long periods of strong activity followed by periods in which the oscillation is weak or absent. This interannual variability of the MJO is partly linked to the El Niño–Southern Oscillation (ENSO) cycle. In the Pacific, strong MJO activity is often observed 6 to 12 months prior to the onset of an El Niño episode, but is virtually absent during the maxima of some El Niño episodes, while MJO activity is typically greater during a La Niña episode. Strong events in the Madden–Julian oscillation over a series of months in the western Pacific can speed the development of an El Niño or La Niña but usually do not in themselves lead to the onset of a warm or cold ENSO event. However, observations suggest that the 1982-1983 El Niño developed rapidly during July 1982 in direct response to a Kelvin wave triggered by an MJO event during late May. Further, changes in the structure of the MJO with the seasonal cycle and ENSO might facilitate more substantial impacts of the MJO on ENSO. For example, the surface westerly winds associated with active MJO convection are stronger during advancement toward El Niño and the surface easterly winds associated with the suppressed convective phase are stronger during advancement toward La Niña. Globally, the interannual variability of the MJO is most determined by atmospheric internal dynamics, rather than surface conditions.
North American winter precipitation
The strongest impacts of intraseasonal variability on the United States occur during the winter months over the western U.S. During the winter this region receives the bulk of its annual precipitation. Storms in this region can last for several days or more and are often accompanied by persistent atmospheric circulation features. Of particular concern are extreme precipitation events linked to flooding. Strong evidence suggests a link between weather and climate in this region from studies that have related the El Niño Southern Oscillation to regional precipitation variability. In the tropical Pacific, winters with weak-to-moderate cold, or La Niña, episodes or ENSO-neutral conditions are often characterized by enhanced 30- to 60-day Madden–Julian oscillation activity. A recent example is the winter of 1996–1997, which featured heavy flooding in California and in the Pacific Northwest (estimated damage costs of $2.0–3.0 billion at the time of the event) and a very active MJO. Such winters are also characterized by relatively small sea surface temperature anomalies in the tropical Pacific compared to stronger warm and cold episodes. In these winters, there is a stronger link between the MJO events and extreme west coast precipitation events.
Pineapple Express events
The typical scenario linking the pattern of tropical rainfall associated with the MJO to extreme precipitation events in the Pacific Northwest features a progressive (i.e. eastward moving) circulation pattern in the tropics and a retrograding (i.e. westward moving) circulation pattern in the mid latitudes of the North Pacific. Typical wintertime weather anomalies preceding heavy precipitation events in the Pacific Northwest are as follows:
7–10 days prior to the heavy precipitation event: Heavy tropical rainfall associated with the MJO shifts eastward from the eastern Indian Ocean to the western tropical Pacific. A moisture plume extends northeastward from the western tropical Pacific towards the general vicinity of the Hawaiian Islands. A strong blocking anticyclone is located in the Gulf of Alaska with a strong polar jet stream around its northern flank.
3–5 days prior to the heavy precipitation event: Heavy tropical rainfall shifts eastward towards the date line and begins to diminish. The associated moisture plume extends further to the northeast, often traversing the Hawaiian Islands. The strong blocking high weakens and shifts westward. A split in the North Pacific jet stream develops, characterized by an increase in the amplitude and areal extent of the upper tropospheric westerly zonal winds on the southern flank of the block and a decrease on its northern flank. The tropical and extra tropical circulation patterns begin to "phase", allowing a developing mid latitude trough to tap the moisture plume extending from the deep tropics.
The heavy precipitation event: As the pattern of enhanced tropical rainfall continues to shift further to the east and weaken, the deep tropical moisture plume extends from the subtropical central Pacific into the mid latitude trough now located off the west coast of North America. The jet stream at upper levels extends across the North Pacific with the mean jet position entering North America in the northwestern United States. The deep low pressure located near the Pacific Northwest coast can bring up to several days of heavy rain and possible flooding. These events are often referred to as Pineapple Express events, so named because a significant amount of the deep tropical moisture traverses the Hawaiian Islands on its way towards western North America.
Throughout this evolution, retrogression of the large-scale atmospheric circulation features is observed in the eastern Pacific–North American sector. Many of these events are characterized by the progression of the heaviest precipitation from south to north along the Pacific Northwest coast over a period of several days to more than one week. However, it is important to differentiate the individual synoptic-scale storms, which generally move west to east, from the overall large-scale pattern, which exhibits retrogression.
A coherent simultaneous relationship exists between the longitudinal position of maximum MJO-related rainfall and the location of extreme west coast precipitation events. Extreme events in the Pacific Northwest are accompanied by enhanced precipitation over the western tropical Pacific and the region of Southeast Asia called by meteorologists the Maritime Continent, with suppressed precipitation over the Indian Ocean and the central Pacific. As the region of interest shifts from the Pacific Northwest to California, the region of enhanced tropical precipitation shifts further to the east. For example, extreme rainfall events in southern California are typically accompanied by enhanced precipitation near 170°E. However, it is important to note that the overall link between the MJO and extreme west coast precipitation events weakens as the region of interest shifts southward along the west coast of the United States.
There is case-to-case variability in the amplitude and longitudinal extent of the MJO-related precipitation, so this should be viewed as a general relationship only.
Explaining MJO's dynamics with equatorial modons and equatorial adjustment
Eastward propagating structure of barotropic equatorial modon
In 2019, Rostami and Zeitlin reported the discovery of steady, long-lived, slowly eastward-moving, large-scale coherent twin cyclones, so-called equatorial modons, by means of a moist-convective rotating shallow water model. The crudest barotropic features of the MJO, such as its eastward propagation along the equator, slow phase speed, hydrodynamically coherent structure, and convergent zone of moist convection, are captured by Rostami and Zeitlin's modon. Another feature of this structure is an exact solution for the streamlines in the internal and external regions of the asymptotic equatorial modon. It is shown that such eastward-moving coherent dipolar structures can be produced during geostrophic adjustment of localized large-scale pressure anomalies in a diabatic moist-convective environment on the equator.
Generation of MJO-like structure by geostrophic adjustment in the lower troposphere
In 2020, a study showed that the process of relaxation (adjustment) of localized large-scale pressure anomalies in the lower equatorial troposphere generates structures strongly resembling Madden–Julian oscillation (MJO) events, as seen in the vorticity, pressure, and moisture fields. Indeed, it is demonstrated that baroclinicity and moist convection substantially change the scenario of the quasi-barotropic "dry" adjustment, which was established in the framework of the one-layer shallow water model and consists, in the long-wave sector, of the emission of equatorial Rossby waves, with a dipolar meridional structure, to the west, and of equatorial Kelvin waves to the east. If moist convection is strong enough, a dipolar cyclonic structure, which appears in the process of adjustment as a Rossby-wave response to the perturbation, transforms into a coherent modon-like structure in the lower layer, which couples with a baroclinic Kelvin wave through a zone of enhanced convection and produces, at the initial stages of the process, a self-sustained, slowly eastward-propagating, zonally asymmetric quadrupolar vorticity pattern.
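For orientation, the wave speeds that control this "dry" adjustment scenario follow from classical linear equatorial wave theory rather than from the nonlinear moist-convective calculation itself. With equivalent depth H_e and gravity-wave speed c, the Kelvin wave travels eastward at c, while long equatorial Rossby waves travel westward at roughly c/(2n+1) (about c/3 for the gravest, n = 1, mode), which is why the eastward Kelvin response outruns the slowly westward-drifting dipolar Rossby response:

\begin{align}
c = \sqrt{g H_e}, \qquad c_{\text{Kelvin}} = +c, \qquad c_{\text{Rossby},\,n} \approx -\frac{c}{2n+1} \quad (\text{long-wave limit}).
\end{align}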
In 2022, Rostami et al. advanced their theory. By means of a new multi-layer pseudo-spectral moist-convective thermal rotating shallow water (mcTRSW) model on the full sphere, they presented a possible equatorial adjustment beyond Gill's mechanism for the genesis and dynamics of the MJO. According to this theory, an eastward-propagating MJO-like structure can be generated in a self-sustained and self-propelled manner by the nonlinear relaxation (adjustment) of a large-scale positive buoyancy anomaly, a depressed anomaly, or a combination of the two, as soon as this anomaly reaches a critical threshold in the presence of moist convection at the equator. This MJO-like episode possesses a convectively coupled "hybrid structure" that consists of a "quasi-equatorial modon", with an enhanced vortex pair, and a convectively coupled baroclinic Kelvin wave (BKW) with a greater phase speed than that of the dipolar structure on the intraseasonal time scale. Interaction of the BKW, after circumnavigating the equator, with a new large-scale buoyancy anomaly may contribute to exciting a recurrent generation of the next cycle of the MJO-like structure. Overall, the generated "hybrid structure" captures a few of the crudest features of the MJO, including its quadrupolar structure, convective activity, condensation patterns, vorticity field, phase speed, and the westerly and easterly inflows in the lower and upper troposphere. Although moisture-fed convection is a necessary condition for the "hybrid structure" to be excited and maintained, the proposed theory is fundamentally different from moisture-mode theories, because the barotropic equatorial modon and the BKW also exist in "dry" environments, whereas there are no similar "dry" dynamical basic structures in the moisture-mode theories. The proposed theory is a possible mechanism to explain the genesis and backbone structure of the MJO and to reconcile some theories that previously seemed divergent.
Impact of climate change on MJO
The MJO travels a stretch of 12,000–20,000 km over the tropical oceans, mainly over the Indo-Pacific warm pool, which has ocean temperatures generally warmer than 28 °C. This Indo-Pacific warm pool has been warming rapidly, altering the residence time of MJO over the tropical oceans. While the total lifespan of MJO remains in the 30–60 day timescale, its residence time has shortened over the Indian Ocean by 3–4 days (from an average of 19 days to 15 days) and increased by 5–6 days over the West Pacific (from an average of 18 days to 23 days). This change in the residence time of MJO has altered the rainfall patterns across the globe.
References
External links
Tropical meteorology
Atmospheric dynamics
Climate oscillations | Madden–Julian oscillation | [
"Chemistry"
] | 3,771 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |
1,666,662 | https://en.wikipedia.org/wiki/Great%20Stink | The Great Stink was an event in Central London during July and August 1858 in which the hot weather exacerbated the smell of untreated human waste and industrial effluent that was present on the banks of the River Thames. The problem had been mounting for some years, with an ageing and inadequate sewer system that emptied directly into the Thames. The miasma from the effluent was thought to transmit contagious diseases, and three outbreaks of cholera before the Great Stink were blamed on the ongoing problems with the river.
The smell, and fears of its possible effects, prompted action from the national and local administrators who had been considering possible solutions for the problem. The authorities accepted a proposal from the civil engineer Joseph Bazalgette to move the effluent eastwards along a series of interconnecting sewers that sloped towards outfalls beyond the metropolitan area. Work on high-, mid- and low-level systems for the new Northern and Southern Outfall Sewers started at the beginning of 1859 and lasted until 1875. To aid the drainage, pumping stations were placed to lift the sewage from lower levels into higher pipes. Two of the more ornate stations, Abbey Mills in Stratford and Crossness on the Erith Marshes, with architectural designs from the consultant engineer, Charles Driver, are listed for protection by English Heritage. Bazalgette's plan introduced the three embankments to London in which the sewers ran—the Victoria, Chelsea and Albert Embankments.
Bazalgette's work ensured that sewage was no longer dumped onto the shores of the Thames and brought an end to the cholera outbreaks; his actions are thought to have saved more lives than the efforts of any other Victorian official. His sewer system operates into the 21st century, servicing a city that has grown to a population of over eight million. The historian Peter Ackroyd argues that Bazalgette should be considered a hero of London.
Background
Brick sewers had been built in London from the 17th century when sections of the Fleet and Walbrook rivers were covered for that purpose. In the century preceding 1856, over a hundred sewers were constructed in London, and at that date the city had around 200,000 cesspits and 360 sewers. Some cesspits leaked methane and other gases, which often caught fire and exploded, while many of the sewers were in a poor state of repair. During the early 19th century improvements had been undertaken in the supply of water to Londoners, and by 1858 many of the city's medieval wooden water pipes were being replaced with iron ones. This, combined with the introduction of flushing toilets and the rising of the city's population from just under one million to three million, led to more water being flushed into the sewers, along with the associated effluent. The outfalls from factories, slaughterhouses and other industrial activities put further strain on the already failing system. Much of this outflow either overflowed, or discharged directly, into the Thames.
The scientist Michael Faraday described the situation in a letter to The Times in July 1855: shocked at the state of the Thames, he dropped pieces of white paper into the river to "test the degree of opacity". His conclusion was that "Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface, even in water of this kind. ... The smell was very bad, and common to the whole of the water; it was the same as that which now comes up from the gully-holes in the streets; the whole river was for the time a real sewer." The smell from the river was so bad that in 1857 the government poured chalk lime, chloride of lime and carbolic acid into the river to ease the stench.
The prevailing thought in Victorian healthcare concerning the transmission of contagious diseases was the miasma theory, which held that most communicable diseases were caused by the inhalation of contaminated air. This contamination could take the form of the odour of rotting corpses or sewage, but also rotting vegetation, or the exhaled breath of someone already diseased. Miasma was believed by most to be the vector of transmission of cholera, which was on the rise in 19th-century Europe. The disease was deeply feared by all, because of the speed with which it could spread, and its high fatality rates.
London's first major cholera epidemic struck in 1831 when the disease claimed 6,536 victims. In 1848–49 there was a second outbreak in which 14,137 London residents died, and this was followed by a further outbreak in 1853–54 in which 10,738 died. During the second outbreak, John Snow, a London-based physician, noticed that the rates of death were higher in those areas supplied by the Lambeth and the Southwark and Vauxhall water companies. In 1849 he published a paper, On the Mode of Communication of Cholera, which posited the theory of the water-borne transmission of disease, rather than the miasma theory; little attention was paid to the paper. Following the third cholera outbreak in 1854, Snow published an update to his treatise, after he focused on the effects in Broad Street, Soho. Snow had removed the handle from the local water pump, thus preventing access to the contaminated water, with a resulting fall in deaths. It was later established that a leaking sewer ran near the well from which the water was drawn.
Local government
The civic infrastructure overseeing the management of London's sewers had gone through several changes in the 19th century. In 1848 the Metropolitan Commission of Sewers (MCS) was established at the urging of the social reformer Edwin Chadwick and a Royal Commission. The Commission superseded seven of the eight authorities that had managed London's sewers since the time of Henry VIII; it was the first time that a unitary power had full control over the capital's sanitation facilities. The Building Act 1844 had ensured that all new buildings had to be connected to a sewer, not a cesspool, and the commission set about connecting cesspools to sewers, or removing them altogether. Because of the fear that the miasma from the sewers would cause the spread of disease, Chadwick and his successor, the pathologist John Simon, ensured that the sewers were regularly flushed through, a policy that resulted in more sewage being discharged into the Thames.
In August 1849 the MCS appointed Joseph Bazalgette to the position of assistant surveyor. He had been working as a consultant engineer in the railway industry until overwork had brought about a serious breakdown in his health; his appointment to the commission was his first position on his return to employment. Working under the chief engineer, Frank Foster, he began to develop a more systematic plan for the city's sewers. The stress of the position was too much for Foster and he died in 1852; Bazalgette was promoted into his position, and continued refining and developing the plans for the development of the sewerage system. The Metropolis Management Act 1855 replaced the commission with the Metropolitan Board of Works (MBW), which took control of the sewers.
By June 1856 Bazalgette completed his definitive plans, which provided for small, local sewers about in diameter to feed into a series of larger sewers until they drained into main outflow pipes high. A Northern and Southern Outfall Sewer were planned to manage the waste for each side of the river. London was mapped into high-, middle- and low-level areas, with a main sewer servicing each; a series of pumping stations was planned to remove the waste towards the east of the city. Bazalgette's plan was based on that of Foster, but was larger in scale, and allowed for more of a rise in population than Foster's – from 3 to 4.5 million. Bazalgette submitted his plans to Sir Benjamin Hall, the First Commissioner of Works. Hall had reservations about the outfalls—the discharge points of waste outlets into other bodies of water—from the sewers, which he said were still within the bounds of the capital, and were therefore unacceptable. During the ongoing discussions, Bazalgette refined and modified his plans, in line with Hall's demands. In December 1856 Hall submitted the plans to a group of three consultant engineers, Captain Douglas Strutt Galton of the Royal Engineers, James Simpson, an engineer with two water companies, and Thomas Blackwood, the chief engineer on the Kennet and Avon Canal. The trio reported back to Hall in July 1857 with proposed changes to the positions of the outfall, which he passed on to the MBW in October. The new proposed discharge points were to be open sewers, running beyond the positions proposed by the board; the cost of their plans was to be over £5.4 million, considerably more than the maximum estimate of Bazalgette's plan, which was £2.4 million. In February 1858 a general election saw the fall of Lord Palmerston's Whig government, which was replaced by Lord Derby's second Conservative ministry; Lord John Manners replaced Hall, and Benjamin Disraeli was appointed Leader of the House of Commons and Chancellor of the Exchequer.
June to August 1858
By mid-1858 the problems with the Thames had been building for several years. In his novel Little Dorrit—published as a serial between 1855 and 1857—Charles Dickens wrote that the Thames was "a deadly sewer ... in the place of a fine, fresh river". In a letter to a friend, Dickens said: "I can certify that the offensive smells, even in that short whiff, have been of a most head-and-stomach-distending nature", while the social scientist and journalist George Godwin wrote that "in parts the deposit is more than six feet deep" on the Thames foreshore, and that "the whole of this is thickly impregnated with impure matter". In June 1858 the temperatures in the shade in London averaged —rising to in the sun. Combined with an extended spell of dry weather, the level of the Thames dropped and raw effluent from the sewers remained on the banks of the river. Queen Victoria and Prince Albert attempted to take a pleasure cruise on the Thames, but returned to shore within a few minutes because the smell was so terrible. The press soon began calling the event "The Great Stink"; the leading article in the City Press observed that "Gentility of speech is at an end—it stinks, and whoso once inhales the stink can never forget it and can count himself lucky if he lives to remember it". A writer for The Standard concurred with the opinion. One of its reporters described the river as a "pestiferous and typhus breeding abomination", while a second wrote that "the amount of poisonous gases which is thrown off is proportionate to the increase of the sewage which is passed into the stream". The leading article in The Illustrated London News commented that:
By June the stench from the river had become so bad that business in Parliament was affected, and the curtains on the river side of the building were soaked in lime chloride to overcome the smell. The measure was not successful, and discussions were held about possibly moving the business of government to Oxford or St Albans. The Examiner reported that Disraeli, on attending one of the committee rooms, left shortly afterwards with the other members of the committee, "with a mass of papers in one hand, and with his pocket handkerchief applied to his nose" because the smell was so bad. The disruption to its legislative work led to questions being raised in the House of Commons. According to Hansard, the Member of Parliament (MP) John Brady informed Manners that members were unable to use either the Committee Rooms or the Library because of the stench, and asked the minister "if the noble Lord has taken any measures for mitigating the effluvium and discontinuing the nuisance". Manners replied that the Thames was not under his jurisdiction. Four days later a second MP said to Manners that "By a perverse ingenuity, one of the noblest of rivers has been changed into a cesspool, and I wish to ask whether Her Majesty's Government intend to take any steps to remedy the evil?" Manners pointed out "that Her Majesty's Government have nothing whatever to do with the state of the Thames". The satirical magazine Punch commented that "The one absorbing topic in both Houses of Parliament ... was the Conspiracy to Poison question. Of the guilt of that old offender, Father Thames, there was the most ample evidence".
At the height of the stink, of lime were being used near the mouths of the sewers that discharged into the Thames, and men were employed spreading lime onto the Thames foreshore at low tide; the cost was £1,500 per week. On 15 June Disraeli tabled the Metropolis Local Management Amendment Bill, a proposed amendment to the 1855 Act; in the opening debate he called the Thames "a Stygian pool, reeking with ineffable and intolerable horrors". The Bill put the responsibility to clear up the Thames on the MBW, and stated that "as far as may be possible" the sewerage outlets should not be within the boundaries of London; it also allowed the Board to borrow £3 million, which was to be repaid from a three-penny levy on all London households for the next forty years. The terms favoured Bazalgette's original 1856 plan, and overcame Hall's objection to it. The leading article in The Times observed that "Parliament was all but compelled to legislate upon the great London nuisance by the force of sheer stench". The bill was debated in late July and was passed into law on 2 August.
Construction
Bazalgette's plans for the of additional street sewers (collecting both effluent and rainwater), which would feed into of main interconnecting sewers, were put out to tender between 1859 and 1865. Four hundred draftsmen worked on the detailed plans and sectional views for the first phase of the building process. There were several engineering challenges to be overcome, particularly the fact that parts of London—including the area around Lambeth and Pimlico—lie below the high-water mark. Bazalgette's plan for the low-level areas was to lift the sewage from low-lying sewers at key points into the mid- and high-level sewers, which would then drain with the aid of gravity, out towards the eastern outfalls at a gradient of .
Bazalgette was a proponent of the use of Portland cement, a material stronger than standard cement, but with a weakness when over-heated. To overcome the problem he instituted a quality control system to test batches of cement, that is described by the historian Stephen Halliday as both "elaborate" and "draconian". The results were fed back to the manufacturers, who altered their production processes to further improve the product. One of the cement manufacturers commented that the MBW were the first public body to use such testing processes. The progress of Bazalgette's works was reported favourably in the press. Paul Dobraszczyk, the architectural historian, describes the coverage as presenting many of the workers "in a positive, even heroic, light", and in 1861 The Observer described the progress on the sewers as "the most expensive and wonderful work of modern times". Construction costs were so high that in July 1863 an additional £1.2 million was lent to the MBW to cover the cost of the work.
Southern drainage system
The southern system, across the less populated suburbs of London, was the smaller and easier part of the system to build. Three main sewers ran from Putney, Wandsworth and Norwood until they linked together in Deptford. At that point a pumping station lifted the effluent into the main outflow sewer, which ran to the Crossness Pumping Station on the Erith Marshes, where it was discharged into the Thames at high tide. The newly built station at Crossness was designed by Bazalgette and a consultant engineer, Charles Driver, a proponent of the use of cast iron as a building material. The building was in a Romanesque style and the interior contains architectural cast ironwork which English Heritage describe as important. The power for pumping the large amount of sewage was provided by four massive beam engines, named Victoria, Prince Consort, Albert Edward and Alexandra, which were manufactured by James Watt and Co.
The station was opened in April 1865 by the Prince of Wales—the future King Edward VII—who officially started the engines. The ceremony, which was attended by other members of royalty, MPs, the Lord Mayor of London and the Archbishops of Canterbury and York, was followed by a dinner for 500 within the building. The ceremony marked the completion of construction of the Southern Outfall Sewers, and the beginning of their operation.
With the successful completion of the southern outflow, one of the board members of the MBW, an MP named Miller, proposed a bonus for Bazalgette. The board agreed and were prepared to pay the engineer £6,000—three times his annual salary—with an additional £4,000 to be shared among his three assistants. Although the idea was subsequently dropped following criticism, Halliday observes that the large amounts discussed "at a time when parsimony was the dominant characteristic of public expenditure is a firm indication of the depth of public interest and approval that appears to have characterised the work."
Northern drainage system
The northern side of the Thames was the more populous, housing two-thirds of London's population, and the works had to proceed through congested streets and overcome such urban hurdles as canals, bridges and railway lines. Work began on the system on 31 January 1859, but the builders encountered numerous problems in construction, including a labourers' strike in 1859–60, hard frosts in winter, and heavier than normal rainfall. The rain was so heavy in June 1862 that an accident occurred at the works re-building the Fleet sewer. The deep excavations were running parallel to the excavation of a cutting at Clerkenwell for the Metropolitan Railway (now the Metropolitan line), and the wall dividing the two trenches collapsed, spilling the waters of the Fleet onto Victoria Street, damaging the gas and water mains.
The high-level sewer—the most northern of the works—ran from Hampstead Heath to Stoke Newington and across Victoria Park, where it joined with the eastern end of the mid-level sewer. The mid-level sewer began in the west at Bayswater and ran along Oxford Street, through Clerkenwell and Bethnal Green, before the connection. This combined main sewer ran to the Abbey Mills Pumping Station in Stratford, where it was joined by the eastern end of the low-level sewer. The pumps at Abbey Mills lifted the effluent from the low-level sewer into the main sewer. This main sewer ran —along what is now known as the Greenway—to the outfall at Beckton.
Like the Crossness Pumping Station, Abbey Mills was a joint design by Bazalgette and Driver. Above the centre of the engine-house was an ornate dome that, Dobraszczyk considers, gives the building a "superficial resemblance ... to a Byzantine church". The architectural historian Nikolaus Pevsner, in his Buildings of England, thought the building showed "exciting architecture applied to the most foul purposes"; he went on to describe it as "an unorthodox mix, vaguely Italian Gothic in style but with tiers of Byzantine windows and a central octagonal lantern that adds a gracious Russian flavour".
To provide the drainage for the low-level sewers, in February 1864 Bazalgette began building three embankments along the shores of the Thames. On the northern side he built the Victoria Embankment, which runs from Westminster to Blackfriars Bridge, and the Chelsea Embankment, running from Millbank to the Cadogan Pier at Chelsea. The southern side contains the Albert Embankment, from the Lambeth end of Westminster Bridge to Vauxhall. He ran the sewers along the banks of the Thames, building up walls on the foreshore, running the sewer pipes inside and infilling around them. The works claimed over of land from the Thames; the Victoria Embankment had the added benefit of relieving the congestion on the pre-existing roads between Westminster and the City of London. The cost of building the embankments was estimated at £1.71 million, of which £450,000 was used for purchasing the necessary river-front properties, which tended to be for light industrial use. The Embankment project was seen as being nationally important and, with the Queen unable to attend because of illness, the Victoria Embankment was opened by the Prince of Wales in July 1870. The Albert Embankment had been completed in November 1869, while the Chelsea Embankment was opened in July 1874.
Bazalgette considered the Embankment project "one of the most difficult and intricate things the ... [MBW] have had to do", and shortly after the Chelsea Embankment was opened, he was knighted. In 1875 the work on the western drainage was completed, and the system became operational. The building work had required 318 million bricks and of concrete and mortar; the final cost was approximately £6.5 million.
Legacy
In 1866 there was a further cholera outbreak in London that claimed 5,596 lives, although it was confined to an area of the East End between Aldgate and Bow. At the time that was a part of London which had not been connected to Bazalgette's system, and 93 per cent of the fatalities occurred within the area. The fault lay with the East London Water Company, who discharged their sewage half a mile () downriver from their reservoir: the sewage was being carried upriver into the reservoir on the incoming tide, contaminating the area's drinking water. The outbreak and the diagnosis of its causes led to the acceptance that cholera was water-borne, not transmitted by miasma. The Lancet, relating details of the investigation into the incident by Dr William Farr, stated that his report "will render irresistible the conclusions at which he has arrived in regard to the influence of the water-supply in causation of the epidemic." It was the last outbreak of the disease in the capital.
In 1878 a Thames pleasure-steamer, the , collided with the collier Bywell Castle and sank, causing over 650 deaths. The accident took place close to the outfalls and questions were raised in the British press over whether the sewage was responsible for some of the deaths. In the 1880s further fears over possible health concerns because of the outfalls led to the MBW purifying sewage at Crossness and Beckton, rather than dumping the untreated waste into the river, and a series of six sludge boats were ordered to ship effluent into the North Sea for dumping. The first boat commissioned in 1887 was named the SS Bazalgette; the procedure remained in service until December 1998, when the dumping stopped and an incinerator was used to dispose of the waste. The sewers were expanded in the late 19th century and again in the early 20th century. The drainage network is, as at 2015, managed by Thames Water, and is used by up to eight million people a day. The company said in 2014 that "the system is struggling to cope with the demands of 21st-century London".
Crossness Pumping Station remained in use until the mid-1950s when it was replaced. The engines were too large to remove and were left in situ, although they fell into a state of disrepair. The station itself became a grade I listed building with the Ministry of Public Building and Works in June 1970 (since replaced by English Heritage). The building and its engines are, , under restoration by the Crossness Engines Trust. The president of the trust is the British television producer Peter Bazalgette, the great-great-grandson of Joseph. As at 2015 part of the Abbey Mill facility continues to operate as a sewage pumping station. The building's large double chimneys were removed during the Second World War following fears that they could be used by the Luftwaffe as landmarks for navigation, and the building became a grade II* listed building with the Ministry of Works in November 1974.
The provision of an integrated and fully functioning sewer system for the capital, together with the associated drop in cholera cases, led the historian John Doxat to state that Bazalgette "probably did more good, and saved more lives, than any single Victorian official". Bazalgette continued to work at the MBW until 1889, during which time he replaced three of London's bridges: Putney in 1886, Hammersmith in 1887 and Battersea in 1890. He was appointed president of the Institution of Civil Engineers (ICE) in 1884, and in 1901 a monument commemorating his life was opened on the Victoria Embankment. When he died in March 1891, his obituarist in The Illustrated London News wrote that Bazalgette's "two great titles to fame are that he beautified London and drained it", while Sir John Coode, the president of ICE at the time, said that Bazalgette's work "will ever remain as monuments to his skill and professional ability". The obituarist for The Times opined that "when the New Zealander comes to London a thousand years hence ... the magnificent solidity and the faultless symmetry of the great granite blocks which form the wall of the Thames-embankment will still remain." He continued, "the great sewer that runs beneath Londoners ... has added some 20 years to their chance of life". The historian Peter Ackroyd, in his history of subterranean London, considers that "with [John] Nash and [Christopher] Wren, Bazalgette enters the pantheon of London heroes" because of his work, particularly the building of the Victoria and Albert Embankments.
See also
1854 Broad Street cholera outbreak
Notes and references
Notes
References
Sources
External links
The Great Stink
1850s in London
1850s in the environment
1850s disasters in the United Kingdom
1858 disasters in Europe
1858 in England
Disasters in London
Environmental disasters in the United Kingdom
Health in London
History of the River Thames
Sewerage
Thames Water
Water pollution in the United Kingdom
Water supply and sanitation in London
July 1858
August 1858 | Great Stink | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 5,431 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
1,667,381 | https://en.wikipedia.org/wiki/Steve%20Lawrence%20%28computer%20scientist%29 | Steve Lawrence is an Australian computer scientist. He was among the group at NEC Research responsible for the creation of the search engine and digital library CiteSeer. He later worked at Google, and is currently a co-founder and CTO at Xoo.
Lawrence received Bachelor of Science and Bachelor of Engineering degrees from Queensland University of Technology in Australia, and his PhD from the University of Queensland in Australia.
He became a senior research scientist at NEC Research Institute.
He was a senior research scientist at Google where he developed Google Desktop.
Lawrence's professional service includes being program committee co-chair for WWW 2003, program committee vice chair for WWW 2002, co-chair for workshops at AAAI and WWW, a program committee member for conferences including WWW, CIKM, and NIPS, and a reviewer for many journals including Science and Nature.
Lawrence's research interests include information retrieval, digital libraries, and machine learning. He has published over 50 papers in these areas, including articles in Science, Nature, CACM, and IEEE Computer. He has been interviewed by over 100 news organizations including the New York Times, the Wall Street Journal, Washington Post, Reuters, Associated Press, CNN, MSNBC, the BBC, and NPR. Hundreds of articles about his research have appeared worldwide in over 10 different languages.
Awards and honors
NEC Research Institute Excellence awards
NEC Research Impact awards
Queensland University of Technology university medal and award for excellence
ATERB scholarship
APRA priority scholarship
Technology NJ Internet Innovator award
QEC and Telecom Australia Engineering prizes
Three prizes in the Australian Mathematics Competition.
External links
The Internet Archive's copy of Dr. Steve Lawrence's old NEC Research homepage
"Online or Invisible?: Free online availability substantially increases a paper's impact" -(open access)
CiteSeer
Living people
Australian computer scientists
Google employees
Queensland University of Technology alumni
University of Queensland alumni
Year of birth missing (living people) | Steve Lawrence (computer scientist) | [
"Technology"
] | 386 | [
"Computing stubs",
"Computer specialist stubs"
] |
1,667,554 | https://en.wikipedia.org/wiki/Magnetic%20semiconductor | Magnetic semiconductors are semiconductor materials that exhibit both ferromagnetism (or a similar response) and useful semiconductor properties. If implemented in devices, these materials could provide a new type of control of conduction. Whereas traditional electronics are based on control of charge carriers (n- or p-type), practical magnetic semiconductors would also allow control of quantum spin state (up or down). This would theoretically provide near-total spin polarization (as opposed to iron and other metals, which provide only ~50% polarization), which is an important property for spintronics applications, e.g. spin transistors.
While many traditional magnetic materials, such as magnetite, are also semiconductors (magnetite is a semimetal semiconductor with bandgap 0.14 eV), materials scientists generally predict that magnetic semiconductors will only find widespread use if they are similar to well-developed semiconductor materials. To that end, dilute magnetic semiconductors (DMS) have recently been a major focus of magnetic semiconductor research. These are based on traditional semiconductors, but are doped with transition metals instead of, or in addition to, electronically active elements. They are of interest because of their unique spintronics properties with possible technological applications. Doped wide band-gap metal oxides such as zinc oxide (ZnO) and titanium oxide (TiO2) are among the best candidates for industrial DMS due to their multifunctionality in opticomagnetic applications. In particular, ZnO-based DMS with properties such as transparency in visual region and piezoelectricity have generated huge interest among the scientific community as a strong candidate for the fabrication of spin transistors and spin-polarized light-emitting diodes, while copper doped TiO2 in the anatase phase of this material has further been predicted to exhibit favorable dilute magnetism.
Hideo Ohno and his group at the Tohoku University were the first to measure ferromagnetism in transition metal doped compound semiconductors such as indium arsenide and gallium arsenide doped with manganese (the latter is commonly referred to as GaMnAs). These materials exhibited reasonably high Curie temperatures (yet below room temperature) that scales with the concentration of p-type charge carriers. Ever since, ferromagnetic signals have been measured from various semiconductor hosts doped with different transition atoms.
Theory
The pioneering work of Dietl et al. showed that a modified Zener model for magnetism describes well the carrier dependence, as well as the anisotropic properties, of GaMnAs. The same theory also predicted that room-temperature ferromagnetism should exist in heavily p-type doped ZnO and GaN doped with Co and Mn, respectively. These predictions were followed by a flurry of theoretical and experimental studies of various oxide and nitride semiconductors, which seemed to confirm room-temperature ferromagnetism in nearly any semiconductor or insulator material heavily doped with transition metal impurities. However, early density functional theory (DFT) studies were clouded by band gap errors and overly delocalized defect levels, and more advanced DFT studies refute most of the previous predictions of ferromagnetism. Likewise, it has been shown that most of the oxide-based materials studied as magnetic semiconductors do not exhibit the intrinsic carrier-mediated ferromagnetism postulated by Dietl et al. To date, GaMnAs remains the only semiconductor material with a robust coexistence of ferromagnetism and semiconducting behavior persisting up to rather high Curie temperatures of around 100–200 K.
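In schematic terms, the mean-field p–d Zener picture used by Dietl et al. predicts a Curie temperature that grows with the effective concentration of magnetic ions, with the square of the p–d exchange coupling, and with the spin susceptibility (density of states) of the hole carriers. The relation below is only this proportionality, not the full model, which also contains antiferromagnetic and Fermi-liquid corrections:

\begin{align}
k_B T_C \;\propto\; x_{\mathrm{eff}}\, N_0\, S(S+1)\, \beta^2\, \rho_s(E_F),
\end{align}

where x_eff is the effective fraction of magnetic ions, N_0 the cation density, S the impurity spin, β the p–d exchange integral, and ρ_s(E_F) the carrier spin density of states at the Fermi level. The appearance of ρ_s(E_F) is what makes the predicted Curie temperature carrier-mediated: it rises with the hole concentration.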
Materials
The manufacturability of the materials depends on the thermal equilibrium solubility of the dopant in the base material. For example, the solubility of many dopants in zinc oxide is high enough that the materials can be prepared in bulk, while some other materials have such low dopant solubility that, to prepare them with a high enough dopant concentration, thermal non-equilibrium preparation methods have to be employed, e.g. growth of thin films.
Permanent magnetization has been observed in a wide range of semiconductor-based materials. Some of them exhibit a clear correlation between carrier density and magnetization, including the work of T. Story and co-workers, who demonstrated that the ferromagnetic Curie temperature of Mn2+-doped Pb1−xSnxTe can be controlled by the carrier concentration.
The theory proposed by Dietl required charge carriers, in this case holes, to mediate the magnetic coupling of manganese dopants in the prototypical magnetic semiconductor, Mn2+-doped GaAs. If there is an insufficient hole concentration in the magnetic semiconductor, then the Curie temperature would be very low or the material would exhibit only paramagnetism. However, if the hole concentration is high (>~1020 cm−3), then the Curie temperature would be higher, between 100 and 200 K.
However, many of the semiconductor materials studied exhibit a permanent magnetization extrinsic to the semiconductor host material. Much of this elusive extrinsic ferromagnetism (or "phantom ferromagnetism") is observed in thin films or nanostructured materials.
Several examples of proposed ferromagnetic semiconductor materials are listed below. Notice that many of the observations and/or predictions below remain heavily debated.
Manganese-doped indium arsenide and gallium arsenide (GaMnAs), with Curie temperature around 50–100 K and 100–200 K, respectively
Manganese-doped indium antimonide, which becomes ferromagnetic even at room temperature and even with less than 1% Mn.
Oxide semiconductors
Manganese- and iron-doped indium oxide, ferromagnetic at room temperature. The ferromagnetism appears to be mediated by carrier-electrons, in a similar way as the GaMnAs ferromagnetism is mediated by carrier-holes.
Zinc oxide
Manganese-doped zinc oxide
n-type cobalt-doped zinc oxide
Magnesium oxide:
p-type transparent MgO films with cation vacancies, combining ferromagnetism and multilevel switching (memristor)
Titanium dioxide:
Cobalt-doped titanium dioxide (both rutile and anatase), ferromagnetic above 400 K
Chromium-doped rutile, ferromagnetic above 400 K
Iron-doped rutile and iron-doped anatase, ferromagnetic at room temperature
Copper-doped anatase
Nickel-doped anatase
Tin dioxide
Manganese-doped tin dioxide, with Curie temperature at 340 K
Iron-doped tin dioxide, with Curie temperature at 340 K
Strontium-doped tin dioxide – a dilute magnetic semiconductor that can be synthesized as an epitaxial thin film on a silicon chip.
Europium(II) oxide, with a Curie temperature of 69 K. The Curie temperature can be more than doubled by doping (e.g. oxygen deficiency, Gd).
Nitride semiconductors
Chromium doped aluminium nitride
(Ba,K)(Zn,Mn)2As2: Ferromagnetic semiconductor with tetragonal average structure and orthorhombic local structure.
References
External links
Semiconductor material types
Spintronics
Ferromagnetic materials
"Physics",
"Chemistry",
"Materials_science"
] | 1,552 | [
"Spintronics",
"Ferromagnetic materials",
"Semiconductor materials",
"Materials",
"Condensed matter physics",
"Semiconductor material types",
"Matter"
] |
1,670,001 | https://en.wikipedia.org/wiki/International%20Temperature%20Scale%20of%201990 | The International Temperature Scale of 1990 (ITS-90) is an equipment calibration standard specified by the International Committee of Weights and Measures (CIPM) for making measurements on the Kelvin and Celsius temperature scales. It is an approximation of thermodynamic temperature that facilitates the comparability and compatibility of temperature measurements internationally.
It defines fourteen calibration points ranging from 0.65 K to 1,357.77 K (−272.5 °C to 1,084.62 °C)
and is subdivided into multiple temperature ranges which overlap in some instances.
ITS-90 is the most recent of a series of International Temperature Scales adopted by the CIPM since 1927.
Adopted at the 1989 General Conference on Weights and Measures, it supersedes the International Practical Temperature Scale of 1968 (amended edition of 1975) and the 1976 "Provisional 0.5 K to 30 K Temperature Scale". The Consultative Committee for Thermometry (CCT) has also published several online guidebooks to aid realisations of the ITS-90.
The lowest temperature covered by the ITS-90 is 0.65 K. In 2000, the temperature scale was extended further, to 0.9 mK, by the adoption of a supplemental scale, known as the Provisional Low Temperature Scale of 2000 (PLTS-2000).
In 2019, the kelvin was redefined. However, the alteration was very slight compared to the ITS-90 uncertainties, and so the ITS-90 remains the recommended practical temperature scale without any significant changes. It is anticipated that the redefinition, combined with improvements in primary thermometry methods, will phase out reliance on the ITS-90 and the PLTS-2000 in the future.
Details
The ITS-90 is designed to represent the thermodynamic (absolute) temperature scale (referencing absolute zero) as closely as possible throughout its range. Many different thermometer designs are required to cover the entire range. These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers (known as SPRTs) and monochromatic radiation thermometers.
Although the Kelvin and Celsius temperature scales were (until 2019) defined using the triple point of water (273.16 K or 0.01 °C), it is impractical to use this definition at temperatures that are very different from the triple point of water. Accordingly, ITS-90 uses numerous defined points, all of which are based on various thermodynamic equilibrium states of fourteen pure chemical elements and one compound (water). Most of the defined points are based on a phase transition; specifically the melting/freezing point of a pure chemical element. However, the deepest cryogenic points are based exclusively on the vapor pressure/temperature relationship of helium and its isotopes whereas the remainder of its cold points (those less than room temperature) are based on triple points. Examples of other defining points are the triple point of equilibrium hydrogen (13.8033 K or −259.3467 °C) and the freezing point of aluminium (933.473 K or 660.323 °C).
The defining fixed points of the ITS-90 refer to pure chemical samples with specific isotopic compositions. As a consequence of this, the ITS-90 contains several equations to correct for temperature variations due to impurities and isotopic composition.
Thermometers calibrated via the ITS-90 use complex mathematical formulas to interpolate between its defined points. The ITS-90 specifies rigorous control over variables to ensure reproducibility from lab to lab. For instance, the small effect that atmospheric pressure has upon the various melting points is compensated for (an effect that typically amounts to no more than half a millikelvin across the different altitudes and barometric pressures likely to be encountered). The standard also compensates for the pressure effect due to how deeply the temperature probe is immersed into the sample. The ITS-90 also draws a distinction between "freezing" and "melting" points. The distinction depends on whether heat is going into (melting) or out of (freezing) the sample when the measurement is made. Only gallium is measured at its melting points; all other metals with defining fixed points on the ITS-90 are measured at their freezing points.
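For the standard platinum resistance thermometer sub-ranges, the quantity interpolated is not the raw resistance but the resistance ratio relative to the triple point of water; calibration at the fixed points then determines a small deviation from a defined reference function. The following is only a sketch of that scheme, omitting the published reference-function coefficients and the sub-range-specific forms of the deviation function:

\begin{align}
W(T_{90}) = \frac{R(T_{90})}{R(273.16\ \mathrm{K})}, \qquad \Delta W(T_{90}) = W(T_{90}) - W_r(T_{90}),
\end{align}

where W_r is the ITS-90 reference function and ΔW is represented by a low-order polynomial whose coefficients are fixed by measurements at the defining fixed points of the relevant sub-range.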
A practical effect of the ITS-90 is that the triple points and the freezing/melting points of its thirteen chemical elements are precisely known for all temperature measurements calibrated per the ITS-90 since these thirteen values are fixed by definition.
Limitations
There are often small differences between measurements calibrated per ITS-90 and thermodynamic temperature. For instance, precise measurements show that the boiling point of VSMOW water under one standard atmosphere of pressure is actually 373.1339 K (99.9839 °C) when adhering strictly to the two-point definition of thermodynamic temperature. When calibrated to ITS-90, where one must interpolate between the defining points of gallium and indium, the boiling point of VSMOW water is about 10 mK less, about 99.974 °C. The virtue of ITS-90 is that another lab in another part of the world will measure the very same temperature with ease due to the advantages of a comprehensive international calibration standard featuring many conveniently spaced, reproducible, defining points spanning a wide range of temperatures.
Although "International Temperature Scale of 1990" has the word "scale" in its title, this is a misnomer that can be misleading. The ITS-90 is not a scale; it is an equipment calibration standard. Temperatures measured with equipment calibrated per ITS-90 may be expressed using any temperature scale such as Celsius, Kelvin, Fahrenheit, or Rankine. For example, a temperature can be measured using equipment calibrated to the kelvin-based ITS-90 standard, and that value may then be converted to, and expressed as, a value on the Fahrenheit scale (e.g. 211.953 °F).
ITS-90 does not address the highly specialized equipment and procedures used for measuring temperatures extremely close to absolute zero. For instance, to measure temperatures in the nanokelvin range (billionths of a kelvin), scientists using optical lattice laser equipment to adiabatically cool atoms, turn off the entrapment lasers and simply measure how far the atoms drift over time to measure their temperature. A cesium atom with a velocity of 7 mm/s is equivalent to a temperature of about 700 nK (which was a record cold temperature achieved by the NIST in 1994).
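The drift-velocity figure can be related to a temperature by a simple equipartition estimate; this is an order-of-magnitude sketch only, since the exact value depends on how many velocity components are measured and on the assumed velocity distribution:

\begin{align}
\tfrac{1}{2} m v^2 \sim \tfrac{1}{2} k_B T \;\Rightarrow\; T \sim \frac{m v^2}{k_B} = \frac{(2.2\times 10^{-25}\ \mathrm{kg})\,(7\times 10^{-3}\ \mathrm{m\,s^{-1}})^2}{1.38\times 10^{-23}\ \mathrm{J\,K^{-1}}} \approx 8\times 10^{-7}\ \mathrm{K},
\end{align}

which is of the same order as the ~700 nK value quoted for cesium.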
Estimates of the differences between thermodynamic temperature and the ITS-90 (T − T90) were published in 2010. It had become apparent that ITS-90 deviated considerably from the PLTS-2000 in the overlapping range of 0.65 K to 2 K. To address this, a new 3He vapor pressure scale was adopted.
For higher temperatures, expected values for T − T90 are below 0.1 mK for temperatures of 4.2 K – 8 K, up to 8 mK at temperatures close to 130 K, down to 0.1 mK at the triple point of water (273.1600 K), but rising again to 10 mK at temperatures close to 430 K, and reaching 46 mK at temperatures close to 1150 K.
Standard interpolating thermometers and their ranges
Defining points
The table below lists the defining fixed points of the ITS-90.
See also
Thermodynamic (absolute) temperature — the "true temperature" which ITS-90 is attempting to approximate.
Provisional Low Temperature Scale of 2000 (PLTS-2000) — A newer temperature scale for the range of 0.0009 K to 1 K, based on the melting pressure of helium-3.
Kelvin
Triple point
Vienna Standard Mean Ocean Water (VSMOW)
Resistance thermometer
Platinum resistance thermometer
Planckian locus § International Temperature Scale – how successive revisions of the temperature scale have affected the relation between spectrum and temperature of a black body
References
Preston-Thomas H., Metrologia, 1990, 27(1), 3-10 (amended version).
External links
The Internet ITS-90 Resource (by ISOTech Ltd)
ITS-90 (by Swedish National Testing and Research Institute)
About Temperature Sensors (information repository)
NIST ITS-90 Thermocouple Database (by United States Department of Commerce, National Institute of Standards & Technology)
Conversion among different international temperature scales; equations and algorithms.
CIPM official publication of ITS-90 in 78th meeting in 1989
Temperature | International Temperature Scale of 1990 | [
"Physics",
"Chemistry"
] | 1,699 | [
"Scalar physical quantities",
"Thermodynamic properties",
"Temperature",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
9,094,695 | https://en.wikipedia.org/wiki/Honey%20Honey%20no%20Suteki%20na%20Bouken | is a shōjo manga by Hideko Mizuno first published in 1968 and made into a 29-episode anime television series in 1981 by Kokusai Eiga-sha and animated by Toei Animation. The anime was released in the English language in the United States in 1984 as Honey Honey. It was also broadcast in various European countries and in Latin America.
Story
The story begins in the city of Vienna in 1907, as the city holds a lavish birthday celebration for its beloved Princess Flora. The princess entertains a variety of suitors from around the world who have come to propose marriage. Also on hand for the celebration is Phoenix, a handsome, suave jewel thief who has his eye on stealing the princess's precious gemstone, the "Smile of the Amazon," which the princess wears as a ring. Phoenix remarks that Flora, though renowned worldwide for her great beauty, wouldn't be nearly as beautiful without her ring. Furious at the insult, Princess Flora then drops the ring holding her gemstone into a cooked fish and throws it out the window, to the shock of everyone present.
Meanwhile, a young teenaged orphan named Honey Honey is working as a waitress. Her pet cat and constant companion, Lily, spies the fish that the princess threw out the window and proceeds to eat the entire fish, thus swallowing the ring. Since Princess Flora has declared that whichever of her suitors successfully returns the ring to her shall become her husband, the princess's suitors and Phoenix immediately fan out all over the city pursuing Lily and her owner, Honey Honey.
Phoenix catches up with Honey and Lily and helps them hide from their pursuers. Honey proceeds to tell the handsome jewel thief the story of how she was orphaned and brought up in a convent, and of how she befriended Lily, who, like her, was abandoned. Phoenix then tries to persuade Honey to sell Lily to him, but Honey, who is still unaware that Lily has swallowed the princess's ring, furiously refuses (after driving Phoenix's initial offer of a million dollars up to ten million dollars, and then coming to her senses) and immediately flees. Honey and Lily hide in the basket of a hot-air balloon, which soon lifts off with Phoenix and Princess Flora's suitors still in hot pursuit. Thus begins an adventure in which Honey and Lily are pursued by Phoenix, Princess Flora, and her suitors to various cities around the world, including Paris, New York City, Oslo, London, Monte Carlo, Tokyo, and Gibraltar. Along the way, Honey falls in love with Phoenix and tries to keep Lily out of the clutches of the selfish, vain princess.
Eventually, Flora's ring is removed from Lily's body, but this is not the end of the story. The series concludes with the discovery that Honey Honey is in fact Flora's younger sister, a princess of a very tiny Prussian country, whose real name is Priscilla. When she is captured by a nomadic people, who force young women to walk barefoot over a pit of hot coals to see whether any of them bears a rose tattoo on her foot, Honey Honey turns out to have the very tattoo they had long been searching for. Honey Honey is then captured by the evil Slag, a man who brought about the destruction of her kingdom after it went through an agonizing defeat. Slag takes Honey Honey to his castle in the Siberian wastelands. Honey Honey manages to escape with help from Phoenix, after he aids a mystical alien with a flying saucer (an episode that loosely accounts for the Tunguska event of 1908).
Sometime later Honey Honey is reunited with her father, who is working as a gardener for some Russian aristocrats in Moscow. Honey Honey, her father and Phoenix escape, but they are sold into slavery at Constantinople. Honey Honey is bought by an Indian sultan who is obsessed with magic tricks. Honey Honey gets her hands on the sultan's magic carpet, which she uses to fly to Japan and Los Angeles and finally to the show's climax in New York. In New York, Princess Flora, her suitors, Phoenix and Honey Honey's father have all managed to arrive as well. The final episode also features King Kong, who captures Princess Flora after breaking free from the show staged by the princess's suitors. Honey Honey saves Flora from King Kong by using her kind heart and sympathy. Honey Honey finally learns why everyone wanted Lily, and makes peace with Flora after their sisterhood is revealed in New York in the final episode. Honey Honey also marries Phoenix (who is actually the son of the nomadic tribe's leader). At the end of the final episode, Princess Flora tosses the ring from the window a second time. Her servants start to chase a small dog, which has accidentally swallowed the ring this time. However, they do not know that this ring was actually a fake.
Anime distribution
The anime version of Honey Honey, produced by Kokusai Eiga-sha (Movie International Company) and animated by Toei Animation, lasted 29 half-hour episodes and was broadcast in Tokyo on Fuji TV and on affiliated stations KTV in Osaka and Hokkaido Cultural Broadcasting Saturdays at 18:00 local time from October 1981 to May 1982. As the show was scheduled in non-network air time, other stations in the Fuji (CX) network were not obligated to carry it, and in other markets the show aired in different time slots or on non-Fuji affiliated stations. Minori Matsushima (Sayaka Yumi in Mazinger Z and Candice in Candy Candy) provided the voice of Honey Honey, Fuyumi Shiraishi (Mirai Yashima in Mobile Suit Gundam) voiced Princess Flora, and Phoenix was voiced by Makio Inoue. Masaki Tsuji, previously a writer for the unrelated Cutie Honey and numerous other works (anime and otherwise), was the series' head writer, and Takeshi Shirado, a Toei veteran whose credits included Cutie Honey, Mazinger Z, Devilman and Space Battleship Yamato, served as series director. Youko Seri, a pop singer who was popular throughout Asia at the time (particularly in China), sang the series' opening and ending themes.
The anime aired in the same Saturday-evening time slot as several popular Super Sentai shows, and achieved low ratings as a result; thus, the series was canceled early, and several manga storylines, including one in which Honey Honey visits Hollywood, did not appear in the anime. However, the TV series would later achieve a fair amount of success in Europe and Latin America (although many of the character names were changed in the non-English dubs; i.e., Honey Honey is Pollen in French, Fiorellino in the first Italian dub, Favos de Mel in Portuguese, and Silvia in Spanish).
Dubbed into English by Sound International Corporation, with distribution by Modern Programs International, the anime series aired, uncut, in the United States on the CBN Cable Network (later known as The Family Channel) in 1984 and was also partially released on home video by Sony. Enoki Films U.S. currently holds the American license to the anime series.
Anime staff
Original Story
Hideko Mizuno
Executive Producer
Juzo Tsubota
Chief Directors
Takeshi Shirato, Yoshikata Nitta
Episode Directors
Takeshi Shirato, Minoru Hamada, Shoichi Yasumura, Hiromi Yamamoto, Yoshikata Nitta, Kozo Takagaki, Hiromichi Matano, Kazuya Miyazaki, Eikichi Kojika
Script
Masaki Tsuji, Shun'ichi Yukimuro, Toyohiro Ando, Naoko Miyake
Character Designs
Kozo Masanobe
Animation Directors
Takeshi Shirato, Akira Daikuhara, Akira Shinoda, Joji Kikuchi, Eiji Uemura, Hiroshi Iino, Nobumichi Kawamura
Art Director
Yoshiyuki Yamamoto
Music
Akihiro Komori
Theme Song
OP- Hāto wa Ōsuwagi, ED- Niji ni Shōjo, performed by Youko Seri
Production
Kokusai Eiga-sha (Movie International Co., Ltd.) / Toei Animation / Fuji TV
References
External links
Enoki Films' Honey Honey anime page (English)
1966 manga
1981 anime television series debuts
1982 Japanese television series endings
Adventure anime and manga
Animated television series about princesses
Anime and manga set in France
Anime and manga set in London
Anime and manga set in New York City
Anime and manga set in Tokyo
Anime series based on manga
Drama anime and manga
Fiction about orphans
Fiction about siblings
Fiction set in 1907
Fiction set in 1908
Fuji Television original programming
Historical anime and manga
Romance anime and manga
Shōjo manga
Television series set in the 1900s
Television shows set in Gibraltar
Television shows set in Paris
Television shows set in Monaco
Television shows set in Oslo
Television shows set in Vienna
Toei Animation television
Tunguska event | Honey Honey no Suteki na Bouken | [
"Physics"
] | 1,841 | [
"Unsolved problems in physics",
"Tunguska event"
] |
9,094,822 | https://en.wikipedia.org/wiki/Ocean%20bank | An ocean bank, sometimes referred to as a fishing bank or simply bank, is a part of the seabed that is shallow compared to its surrounding area, such as a shoal or the top of an underwater hill. Somewhat like continental slopes, ocean bank slopes can upwell as tidal and other flows intercept them, sometimes resulting in nutrient-rich currents. Because of this, some large banks, such as Dogger Bank and the Grand Banks of Newfoundland, are among the richest fishing grounds in the world.
There are some banks that were reported in the 19th century by navigators, such as Wachusett Reef, whose existence is doubtful.
Types
Ocean banks may be of volcanic nature. Banks may be carbonate or terrigenous. In tropical areas some banks are submerged atolls. As they are not associated with any landmass, banks have no outside source of sediments.
Carbonate banks are typically platforms, rising from the ocean depths, whereas terrigenous banks are elevated sedimentary deposits.
Seamounts, by contrast, are mountains rising from the deep sea and are steeper and higher in comparison to the surrounding seabed. Examples of these are the Pioneer and Guide Seamounts, west of the Farallon Islands; the Pioneer Seamount has a depth of 1,000 meters. In other cases, parts of a bank may reach above the water surface, thereby forming islands.
Prominent banks
The largest banks in the world are:
Grand Banks of Newfoundland (280,000 km2) - terrigenous bank
Agulhas Bank (116,000 km2)
Great Bahama Bank (95,798.12 km2, area excluding its islands)
Saya de Malha (35,000 km2, excluding the separate North bank, least depth 7 m)
Seychelles Bank (31,000 km2, including islands of 266 km2)
Georges Bank (28,800 km2) - terrigenous bank
Lansdowne Bank (4,300 km2, west of New Caledonia, least depth 3.7 m)
Dogger Bank (17,600 km2, least depth 13 m)
Little Bahama Bank (14,260.64 km2, area excluding its islands)
Great Chagos Bank (12,642 km2, including islands of 4.5 km2)
Reed Bank, Spratly Islands (8,866 km2, least depth 9 m)
Caicos Bank, Caicos Islands (7,680 km2, including islands of 589.5 km2)
Macclesfield Bank (6,448 km2, least depth 9.2 m)
North Bank or Ritchie Bank (5,800 km2, north of Saya de Malha, least depth <10 m)
Cay Sal Bank (5,226.73 km2, including islands of 14.87 km2)
Rosalind Bank (4,500 km2, least depth 7.3 m)
Bassas de Pedro (2,474.33 km2, least depth 16.4 m), part of the Amindivi Subgroup of Lakshadweep, India
See also
Oceanic plateau
Carbonate platform
Placer (geography)
Notes
External links
Definitions – Islands, Banks & Seamounts: Geologic Features Under the Sea
Physical oceanography | Ocean bank | [
"Physics"
] | 666 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
9,095,203 | https://en.wikipedia.org/wiki/Antagonism%20%28chemistry%29 | Chemical antagonists impede the normal function of a system. They act to counter the effects of other molecules: after an antagonist encounters an agonist, the effects of the agonist are neutralized. Dopamine antagonists, for example, slow down movement in laboratory rats. Although they hinder the binding of enzymes to substrates, antagonists can be beneficial. For example, angiotensin receptor blockers and angiotensin-converting enzyme (ACE) inhibitors not only work to lower blood pressure, but also counter the effects of renal disease in diabetic and non-diabetic patients. Chelating agents, such as calcium disodium edetate (EDTA), fall into the category of antagonists and operate to minimize the lethal effects of heavy metals such as mercury or lead.
In chemistry, antagonism is a phenomenon wherein two or more agents in combination have an overall effect that is less than the sum of their individual effects.
The word is most commonly used in this context in biochemistry and toxicology: interference in the physiological action of a chemical substance by another having a similar structure. For instance, a receptor antagonist is an agent that reduces the response that a ligand produces when the receptor antagonist binds to a receptor on a cell. An example of this is the interleukin-1 receptor antagonist. The opposite of antagonism is synergy. It is a negative type of synergism.
Experiments with different combinations show that binary mixtures of phenolics can lead to either a synergetic antioxidant effect or to an antagonistic effect.
References
Toxicology
Receptor antagonists | Antagonism (chemistry) | [
"Chemistry",
"Environmental_science"
] | 343 | [
"Neurochemistry",
"Receptor antagonists",
"Toxicology stubs",
"Toxicology"
] |
9,095,952 | https://en.wikipedia.org/wiki/Flutter%20valve | In respiratory medicine, a flutter valve (also known as a Pneumostat or Heimlich valve) is a one-way check valve used to prevent airflow back into a chest tube, and usually is applied to drain air from a pneumothorax. The design of the flutter valve features a rubber sleeve in a plastic case, arranged so that when air flows through the valve the sleeve opens and allows outward airflow from the body of the patient; however, when the airflow is reversed, the rubber sleeve closes and halts backwards airflow into the body of the patient.
The construction of the flutter valve enables it to function as a one-way valve allowing airflow, or the flow of a fluid, in only one direction along the drainage tube. The end of the drainage tube is placed inside the chest cavity of the patient — into the air mass or into the fluid mass to be drained from the thorax. The flutter valve is placed in the appropriate orientation (designed so that the valve can only be connected in the appropriate orientation) and the pneumothorax is thus evacuated from the chest of the patient.
Usage of the flutter valve presents potential problems such as clogging of the chest tube, which might provoke recurrence of the pneumothorax or subcutaneous emphysema and can lead to empyema. Another potential problem is leakage of fluid, which is resolved with a small chest drain, or with a sputum trap attached to the valve to function as a reservoir for the draining fluid. Flutter valves (Pneumostat valves) allow patients to ambulate more easily, and in certain instances patients may be able to leave the hospital. The traditional chest-tube collection box often requires a longer hospital stay. An alternative to the Heimlich valve is a chest-drainage management system, which typically enables the application of vacuum and the quantification of the effluent; however, a drainage-management system is a much larger apparatus with more tubing, which encumbers the patient.
See also
References
External links
bd_bardparker_heimlich_chest_drain_valve_brochure.pdf from BD Bard Parker(tm)
Heimlich Flutter Valve
One way valve for chest drains from www.freepatentsonline.com
Illustration of Heimlich flutter valve from Netter Medical Illustrations (the blue tubular valve)
Heimlich Valve as part of a pneumothorax kit from Emergency Medical Products
Use of Heimlich Valve from www.freepatentsonline.com
Pulmonology
Valves | Flutter valve | [
"Physics",
"Chemistry"
] | 549 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
9,097,185 | https://en.wikipedia.org/wiki/Technetium-99m | Technetium-99m (99mTc) is a metastable nuclear isomer of technetium-99 (itself an isotope of technetium), symbolized as 99mTc, that is used in tens of millions of medical diagnostic procedures annually, making it the most commonly used medical radioisotope in the world.
Technetium-99m is used as a radioactive tracer and can be detected in the body by medical equipment (gamma cameras). It is well suited to the role, because it emits readily detectable gamma rays with a photon energy of 140 keV (these 8.8 pm photons are about the same wavelength as emitted by conventional X-ray diagnostic equipment) and its half-life for gamma emission is 6.0058 hours (meaning 93.7% of it decays to 99Tc in 24 hours). The relatively "short" physical half-life of the isotope and its biological half-life of 1 day (in terms of human activity and metabolism) allow for scanning procedures which collect data rapidly but keep total patient radiation exposure low. The same characteristics make the isotope unsuitable for therapeutic use.
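The 24-hour figure quoted above follows directly from exponential decay with the 6.0058-hour half-life. A minimal sketch in Python (the half-life is taken from the text; the function name is illustrative only):

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of a radionuclide remaining after t_hours of decay."""
    return 0.5 ** (t_hours / half_life_hours)

HALF_LIFE_TC99M_H = 6.0058  # hours, from the text

remaining = fraction_remaining(24.0, HALF_LIFE_TC99M_H)
print(f"Remaining after 24 h: {remaining:.1%}")      # ~6.3%
print(f"Decayed to Tc-99:     {1 - remaining:.1%}")  # ~93.7%, matching the text
```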
Technetium-99m was discovered as a product of cyclotron bombardment of molybdenum. This procedure produced molybdenum-99, a radionuclide with a longer half-life (2.75 days), which decays to 99mTc. This longer decay time allows for 99Mo to be shipped to medical facilities, where 99mTc is extracted from the sample as it is produced. In turn, 99Mo is usually created commercially by fission of highly enriched uranium in a small number of research and material testing nuclear reactors in several countries.
History
Discovery
In 1938, Emilio Segrè and Glenn T. Seaborg isolated for the first time the metastable isotope technetium-99m, after bombarding natural molybdenum with 8 MeV deuterons in the cyclotron of Ernest Orlando Lawrence's Radiation laboratory. In 1970 Seaborg explained that:
Later in 1940, Emilio Segrè and Chien-Shiung Wu published experimental results of an analysis of fission products of uranium-235, including molybdenum-99, and detected the presence of an isomer of element 43 with a 6-hour half life, later labelled as technetium-99m.
Early medical applications in the United States
99mTc remained a scientific curiosity until the 1950s when Powell Richards realized the potential of technetium-99m as a medical radiotracer and promoted its use among the medical community. While Richards was in charge of the radioisotope production at the Hot Lab Division of the Brookhaven National Laboratory, Walter Tucker and Margaret Greene were working on how to improve the separation process purity of the short-lived eluted daughter product iodine-132 from its parent, tellurium-132 (with a half life of 3.2 days), produced in the Brookhaven Graphite Research Reactor. They detected a trace contaminant which proved to be 99mTc, which was coming from 99Mo and was following tellurium in the chemistry of the separation process for other fission products. Based on the similarities between the chemistry of the tellurium-iodine parent-daughter pair, Tucker and Greene developed the first technetium-99m generator in 1958. It was not until 1960 that Richards became the first to suggest the idea of using technetium as a medical tracer.
The first US publication to report on medical scanning of 99mTc appeared in August 1963. Sorensen and Archambault demonstrated that intravenously injected carrier-free 99Mo selectively and efficiently concentrated in the liver, becoming an internal generator of 99mTc. After build-up of 99mTc, they could visualize the liver using the 140 keV gamma ray emission.
Worldwide expansion
The production and medical use of 99mTc rapidly expanded across the world in the 1960s, benefiting from the development and continuous improvements of the gamma cameras.
Americas
Between 1963 and 1966, numerous scientific studies demonstrated the use of 99mTc as radiotracer or diagnostic tool. As a consequence the demand for 99mTc grew exponentially and by 1966, Brookhaven National Laboratory was unable to cope with the demand. Production and distribution of 99mTc generators were transferred to private companies. "TechneKow-CS generator", the first commercial 99mTc generator, was produced by Nuclear Consultants, Inc. (St. Louis, Missouri) and Union Carbide Nuclear Corporation (Tuxedo, New York). From 1967 to 1984, 99Mo was produced for Mallinckrodt Nuclear Company at the Missouri University Research Reactor (MURR).
From 1968 to 1972, Union Carbide actively developed a process to produce and separate useful isotopes such as 99Mo from the mixed fission products that resulted from the irradiation of highly enriched uranium (HEU) targets in nuclear reactors; the work was carried out at the Cintichem facility (formerly the Union Carbide Research Center, built in Sterling Forest in Tuxedo, New York). The Cintichem process originally used 93% highly enriched U-235 deposited as UO2 on the inside of a cylindrical target.
At the end of the 1970s, of total fission product radiation were extracted weekly from 20 to 30 reactor bombarded HEU capsules, using the so-called "Cintichem [chemical isolation] process." The research facility with its 1961 5-MW pool-type research reactor was later sold to Hoffman-LaRoche and became Cintichem Inc. In 1980, Cintichem, Inc. began the production/isolation of 99Mo in its reactor, and became the single U.S. producer of 99Mo during the 1980s. However, in 1989, Cintichem detected an underground leak of radioactive products that led to the reactor shutdown and decommissioning, putting an end to the commercial production of 99Mo in the USA.
The production of 99Mo started in Canada in the early 1970s and was shifted to the NRU reactor in the mid-1970s. By 1978 the reactor provided technetium-99m in large enough quantities that were processed by AECL's radiochemical division, which was privatized in 1988 as Nordion, now MDS Nordion. In the 1990s a substitution for the aging NRU reactor for production of radioisotopes was planned. The Multipurpose Applied Physics Lattice Experiment (MAPLE) was designed as a dedicated isotope-production facility. Initially, two identical MAPLE reactors were to be built at Chalk River Laboratories, each capable of supplying 100% of the world's medical isotope demand. However, problems with the MAPLE 1 reactor, most notably a positive power co-efficient of reactivity, led to the cancellation of the project in 2008.
The first commercial 99mTc generators were produced in Argentina in 1967, with 99Mo produced in the CNEA's RA-1 Enrico Fermi reactor. Besides its domestic market CNEA supplies 99Mo to some South American countries.
Oceania
In 1967, the first 99mTc procedures were carried out in Auckland, New Zealand. 99Mo was initially supplied by Amersham, UK, then by the Australian Nuclear Science and Technology Organisation (ANSTO) in Lucas Heights, Australia.
Europe
In May 1963, Scheer and Maier-Borst were the first to introduce the use of 99mTc for medical applications.
In 1968, Philips-Duphar (later Mallinckrodt, today Covidien) marketed the first technetium-99m generator produced in Europe and distributed from Petten, the Netherlands.
Shortage
Global shortages of technetium-99m emerged in the late 2000s because two aging nuclear reactors (NRU and HFR) that provided about two-thirds of the world's supply of molybdenum-99, which itself has a half-life of only 66 hours, were shut down repeatedly for extended maintenance periods. In May 2009, the Atomic Energy of Canada Limited announced the detection of a small leak of heavy water in the NRU reactor that remained out of service until completion of the repairs in August 2010.
After the observation of gas bubble jets released from one of the deformations of primary cooling water circuits in August 2008, the HFR reactor was stopped for a thorough safety investigation. NRG received in February 2009 a temporary license to operate HFR only when necessary for medical radioisotope production. HFR stopped for repairs at the beginning of 2010 and was restarted in September 2010.
Two replacement Canadian reactors (see MAPLE Reactor) constructed in the 1990s were closed before beginning operation, for safety reasons. A construction permit for a new production facility to be built in Columbia, MO was issued in May 2018.
Nuclear properties
Technetium-99m is a metastable nuclear isomer, as indicated by the "m" after its mass number 99. This means it is a nuclide in an excited (metastable) state that lasts much longer than is typical. The nucleus will eventually relax (i.e., de-excite) to its ground state through the emission of gamma rays or internal conversion electrons. Both of these decay modes rearrange the nucleons without transmuting the technetium into another element.
99mTc decays mainly by gamma emission, slightly less than 88% of the time. (99mTc → 99Tc + γ) About 98.6% of these gamma decays result in 140.5 keV gamma rays and the remaining 1.4% are to gammas of a slightly higher energy at 142.6 keV. These are the radiations that are picked up by a gamma camera when 99mTc is used as a radioactive tracer for medical imaging. The remaining approximately 12% of 99mTc decays are by means of internal conversion, resulting in ejection of high speed internal conversion electrons in several sharp peaks (as is typical of electrons from this type of decay) also at about 140 keV (99mTc → 99Tc+ + e−). These conversion electrons will ionize the surrounding matter like beta radiation electrons would do, contributing along with the 140.5 keV and 142.6 keV gammas to the total deposited dose.
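Putting the branching fractions above together gives the photon yield a gamma camera can actually count per disintegration. A rough sketch, using the approximate values quoted in the text:

```python
P_GAMMA = 0.88        # fraction of 99mTc decays by gamma emission (approximate, from the text)
P_140_5 = 0.986       # of those, fraction at 140.5 keV
P_142_6 = 0.014       # of those, fraction at 142.6 keV

print(f"140.5 keV photons per decay:    {P_GAMMA * P_140_5:.3f}")  # ~0.87
print(f"142.6 keV photons per decay:    {P_GAMMA * P_142_6:.3f}")  # ~0.012
print(f"Conversion electrons per decay: {1 - P_GAMMA:.2f}")        # ~0.12
```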
Pure gamma emission is the desirable decay mode for medical imaging because other particles deposit more energy in the patient body (radiation dose) than in the camera. Metastable isomeric transition is the only nuclear decay mode that approaches pure gamma emission.
99mTc's half-life of 6.0058 hours is considerably longer (by 14 orders of magnitude, at least) than most nuclear isomers, though not unique. This is still a short half-life relative to many other known modes of radioactive decay and it is in the middle of the range of half lives for radiopharmaceuticals used for medical imaging.
After gamma emission or internal conversion, the resulting ground-state technetium-99 then decays with a half-life of 211,000 years to stable ruthenium-99. This process emits soft beta radiation without a gamma. Such low radioactivity from the daughter product(s) is a desirable feature for radiopharmaceuticals.
^{99m}_{43}\mathrm{Tc} \xrightarrow[6\ \mathrm{h}]{\gamma,\ 141\ \mathrm{keV}} {}^{99}_{43}\mathrm{Tc} \xrightarrow[211{,}000\ \mathrm{y}]{\beta^{-},\ 249\ \mathrm{keV}} \underbrace{{}^{99}_{44}\mathrm{Ru}}_{\text{ruthenium-99 (stable)}}
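The claim that the ground-state daughter contributes little radiation can be checked by comparing decay constants: per atom, activity scales as ln(2)/T½, so the ratio of half-lives gives the ratio of per-atom activities. A small sketch using the half-lives quoted above:

```python
HOURS_PER_YEAR = 365.25 * 24

t_half_tc99m = 6.0058                    # hours (from the text)
t_half_tc99 = 211_000 * HOURS_PER_YEAR   # ~1.85e9 hours (from the text)

# Per-atom activity is proportional to the decay constant lambda = ln(2) / T_half.
ratio = t_half_tc99 / t_half_tc99m
print(f"Per atom, Tc-99m is ~{ratio:.1e} times more active than ground-state Tc-99")  # ~3e8
```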
Production
Production of 99Mo in nuclear reactors
Neutron irradiation of uranium-235 targets
The parent nuclide of 99mTc, 99Mo, is mainly extracted for medical purposes from the fission products created in neutron-irradiated uranium-235 targets, the majority of which is produced in five nuclear research reactors around the world using highly enriched uranium (HEU) targets. Smaller amounts of 99Mo are produced from low-enriched uranium in at least three reactors.
Neutron activation of 98Mo
Production of 99Mo by neutron activation of natural molybdenum, or molybdenum enriched in 98Mo, is another, currently smaller, route of production.
Production of 99mTc/99Mo in particle accelerators
Production of "Instant" 99mTc
The feasibility of 99mTc production with the 22-MeV-proton bombardment of a 100Mo target in medical cyclotrons was demonstrated in 1971. The recent shortages of 99mTc reignited the interest in the production of "instant" 99mTc by proton bombardment of isotopically enriched 100Mo targets (>99.5%) following the reaction 100Mo(p,2n)99mTc. Canada is commissioning such cyclotrons, designed by Advanced Cyclotron Systems, for 99mTc production at the University of Alberta and the Université de Sherbrooke, and is planning others at the University of British Columbia, TRIUMF, University of Saskatchewan and Lakehead University.
A particular drawback of cyclotron production via (p,2n) on 100Mo is the significant co-production of 99gTc. The preferential in-growth of this nuclide occurs due to the larger reaction cross-section pathway leading to the ground state, which is almost five times higher at the cross-section maximum in comparison with the metastable one at the same energy. Depending on the time required to process the target material and recovery of 99mTc, the amount of 99mTc relative to 99gTc will continue to decrease, in turn reducing the specific activity of 99mTc available. It has been reported that ingrowth of 99gTc as well as the presence of other Tc isotopes can negatively affect subsequent labelling and/or imaging; however, the use of high purity 100Mo targets, specified proton beam energies, and appropriate time of use have shown to be sufficient for yielding 99mTc from a cyclotron comparable to that from a commercial generator. Liquid metal molybdenum-containing targets have been proposed that would aid in streamlined processing, ensuring better production yields. A particular problem associated with the continued reuse of recycled, enriched 100Mo targets is unavoidable transmutation of the target as other Mo isotopes are generated during irradiation and cannot be easily removed post-processing.
Indirect routes of production of 99Mo
Other particle accelerator-based isotope production techniques have been investigated. The supply disruptions of 99Mo in the late 2000s and the ageing of the producing nuclear reactors forced the industry to look into alternative methods of production. The use of cyclotrons or electron accelerators to produce 99Mo from 100Mo via (p,pn) or (γ,n) reactions, respectively, has been further investigated. The (n,2n) reaction on 100Mo yields a higher reaction cross-section for high energy neutrons than of (n,γ) on 98Mo with thermal neutrons. In particular, this method requires accelerators that generate fast neutron spectrums, such as ones using D-T or other fusion-based reactions, or high energy spallation or knock out reactions. A disadvantage of these techniques is the necessity for enriched 100Mo targets, which are significantly more expensive than natural isotopic targets and typically require recycling of the material, which can be costly, time-consuming, and arduous.
Technetium-99m generators
Technetium-99m's short half-life of 6 hours makes storage impossible and would make transport very expensive. Instead, its parent nuclide 99Mo is supplied to hospitals after its extraction from the neutron-irradiated uranium targets and its purification in dedicated processing facilities. It is shipped by specialised radiopharmaceutical companies in the form of technetium-99m generators worldwide or directly distributed to the local market. The generators, colloquially known as moly cows, are devices designed to provide radiation shielding for transport and to minimize the extraction work done at the medical facility. A typical dose rate at 1 metre from the 99mTc generator is 20-50 μSv/h during transport. A generator's output declines with time, and it must be replaced weekly, since the half-life of 99Mo is still only 66 hours.
Molybdenum-99 spontaneously decays to excited states of 99Tc through beta decay. Over 87% of the decays lead to the excited state of 99mTc. An electron and an electron antineutrino are emitted in the process (99Mo → 99mTc + e− + ν̄e). The electrons are easily shielded for transport, and 99mTc generators are only minor radiation hazards, mostly due to secondary X-rays produced by the electrons (also known as Bremsstrahlung).
At the hospital, the 99mTc that forms through 99Mo decay is chemically extracted from the technetium-99m generator. Most commercial 99Mo/99mTc generators use column chromatography, in which 99Mo in the form of water-soluble molybdate, MoO42− is adsorbed onto acid alumina (Al2O3). When the 99Mo decays, it forms pertechnetate TcO4−, which, because of its single charge, is less tightly bound to the alumina. Pulling normal saline solution through the column of immobilized 99MoO42− elutes the soluble 99mTcO4−, resulting in a saline solution containing the 99mTc as the dissolved sodium salt of the pertechnetate. One technetium-99m generator, holding only a few micrograms of 99Mo, can potentially diagnose 10,000 patients because it will be producing 99mTc strongly for over a week.
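The week-long usefulness of a generator can be sketched with the standard parent-daughter (Bateman) in-growth relation, using the 66-hour 99Mo half-life and the roughly 87% branching fraction quoted earlier. The numbers below are relative activities, not calibrated doses, and the function name is illustrative only:

```python
import math

LAMBDA_MO99 = math.log(2) / 66.0     # per hour, 99Mo half-life from the text
LAMBDA_TC99M = math.log(2) / 6.0058  # per hour
BRANCH = 0.875                       # ~87% of 99Mo decays feed the metastable state

def tc99m_activity(t_hours, a_mo_initial=1.0):
    """99mTc activity (relative to the 99Mo activity at t = 0), t hours after an elution
    that removed all 99mTc; standard Bateman parent-daughter in-growth."""
    return (BRANCH * a_mo_initial * LAMBDA_TC99M / (LAMBDA_TC99M - LAMBDA_MO99)
            * (math.exp(-LAMBDA_MO99 * t_hours) - math.exp(-LAMBDA_TC99M * t_hours)))

# In-growth peaks roughly a day after elution, which is why generators are "milked" daily,
# and after a week the available activity has fallen to a small fraction of the original.
t_peak = math.log(LAMBDA_TC99M / LAMBDA_MO99) / (LAMBDA_TC99M - LAMBDA_MO99)
print(f"Peak in-growth ~{t_peak:.0f} h after elution")   # ~23 h
for t in (6, 24, 168):
    print(f"t = {t:3d} h: 99mTc activity = {tc99m_activity(t):.2f} x initial 99Mo activity")
```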
Preparation
Technetium exits the generator in the form of the pertechnetate ion, TcO4−. The oxidation state of Tc in this compound is +7. This is directly suitable for medical applications only in bone scans (it is taken up by osteoblasts) and some thyroid scans (it is taken up in place of iodine by normal thyroid tissues). In other types of scans relying on 99mTc, a reducing agent is added to the pertechnetate solution to bring the oxidation state of the technetium down to +3 or +4. Secondly, a ligand is added to form a coordination complex. The ligand is chosen to have an affinity for the specific organ to be targeted. For example, the exametazime complex of Tc in oxidation state +3 is able to cross the blood–brain barrier and flow through the vessels in the brain for cerebral blood flow imaging. Other ligands include sestamibi for myocardial perfusion imaging and mercaptoacetyltriglycine for the MAG3 scan to measure renal function.
Medical uses
In 1970, Eckelman and Richards presented the first "kit" containing all the ingredients required to release the 99mTc, "milked" from the generator, in the chemical form to be administered to the patient.
Technetium-99m is used in 20 million diagnostic nuclear medical procedures every year. Approximately 85% of diagnostic imaging procedures in nuclear medicine use this isotope as radioactive tracer. Klaus Schwochau's book Technetium lists 31 radiopharmaceuticals based on 99mTc for imaging and functional studies of the brain, myocardium, thyroid, lungs, liver, gallbladder, kidneys, skeleton, blood, and tumors. A more recent review is also available.
Depending on the procedure, the 99mTc is tagged (or bound to) a pharmaceutical that transports it to its required location. For example, when 99mTc is chemically bound to exametazime (HMPAO), the drug is able to cross the blood–brain barrier and flow through the vessels in the brain for cerebral blood-flow imaging. This combination is also used for labeling white blood cells (99mTc labeled WBC) to visualize sites of infection. 99mTc sestamibi is used for myocardial perfusion imaging, which shows how well the blood flows through the heart. Imaging to measure renal function is done by attaching 99mTc to mercaptoacetyl triglycine (MAG3); this procedure is known as a MAG3 scan.
Technetium-99m (Tc-99m) can be readily detected in the body by medical equipment because it emits 140.5 keV gamma rays (these are about the same wavelength as emitted by conventional X-ray diagnostic equipment), and its half-life for gamma emission is six hours (meaning 94% of it decays to 99Tc in 24 hours). Besides, it emits virtually no beta radiation, thus keeping radiation dosage low. Its decay product, 99Tc, has a relatively long half-life (211,000 years) and emits little radiation. The short physical half-life of 99mTc and its biological half-life of 1 day with its other favourable properties allows scanning procedures to collect data rapidly and keep total patient radiation exposure low. Chemically, technetium-99m is selectively concentrated in the stomach, thyroid, and salivary glands, and excluded from cerebrospinal fluid; combining it with perchlorate abolishes its selectiveness.
Radiation side-effects
Diagnostic treatment involving technetium-99m will result in radiation exposure to technicians, patients, and passers-by. Typical quantities of technetium administered for immunoscintigraphy tests, such as SPECT tests, range from (millicurie or mCi; and Mega-Becquerel or MBq) for adults. These doses result in radiation exposures to the patient around 10 mSv (1000 mrem), the equivalent of about 500 chest X-ray exposures. This level of radiation exposure is estimated by the linear no-threshold model to carry a 1 in 1000 lifetime risk of developing a solid cancer or leukemia in the patient. The risk is higher in younger patients, and lower in older ones. Unlike a chest x-ray, the radiation source is inside the patient and will be carried around for a few days, exposing others to second-hand radiation. A spouse who stays constantly by the side of the patient through this time might receive one thousandth of patient's radiation dose this way.
The short half-life of the isotope allows for scanning procedures that collect data rapidly. The isotope is also of a very low energy level for a gamma emitter. Its ~140 keV of energy makes it safer for use because of the substantially reduced ionization compared with other gamma emitters. The energy of gammas from 99mTc is about the same as the radiation from a commercial diagnostic X-ray machine, although the number of gammas emitted results in radiation doses more comparable to X-ray studies like computed tomography.
Technetium-99m has several features that make it safer than other possible isotopes. Its gamma decay mode can be easily detected by a camera, allowing the use of smaller quantities. And because technetium-99m has a short half-life, its quick decay into the far less radioactive technetium-99 results in relatively low total radiation dose to the patient per unit of initial activity after administration, as compared with other radioisotopes. In the form administered in these medical tests (usually pertechnetate), technetium-99m and technetium-99 are eliminated from the body within a few days.
3-D scanning technique: SPECT
Single-photon emission computed tomography (SPECT) is a nuclear medicine imaging technique using gamma rays. It may be used with any gamma-emitting isotope, including 99mTc. In the use of technetium-99m, the radioisotope is administered to the patient and the escaping gamma rays are incident upon a moving gamma camera which computes and processes the image. To acquire SPECT images, the gamma camera is rotated around the patient. Projections are acquired at defined points during the rotation, typically every three to six degrees. In most cases, a full 360° rotation is used to obtain an optimal reconstruction. The time taken to obtain each projection is also variable, but 15–20 seconds are typical. This gives a total scan time of 15–20 minutes.
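The quoted scan time is simply the number of angular stops multiplied by the dwell time per projection. A rough sketch (a single detector head is assumed; multi-head cameras split the arc and finish correspondingly faster):

```python
def spect_scan_minutes(step_deg, seconds_per_projection, arc_deg=360.0, detector_heads=1):
    """Approximate total acquisition time for a step-and-shoot SPECT scan."""
    stops = arc_deg / (step_deg * detector_heads)  # each head covers its own share of the arc
    return stops * seconds_per_projection / 60.0

print(spect_scan_minutes(6, 15))                     # 15.0 min
print(spect_scan_minutes(6, 20))                     # 20.0 min
print(spect_scan_minutes(3, 20, detector_heads=2))   # 20.0 min with a dual-head camera
```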
The technetium-99m radioisotope is used predominantly in bone and brain scans. For bone scans, the pertechnetate ion is used directly, as it is taken up by osteoblasts attempting to heal a skeletal injury, or (in some cases) as a reaction of these cells to a tumor (either primary or metastatic) in the bone. In brain scanning, 99mTc is attached to the chelating agent HMPAO to create technetium (99mTc) exametazime, an agent which localizes in the brain according to region blood flow, making it useful for the detection of stroke and dementing illnesses that decrease regional brain flow and metabolism.
Most recently, technetium-99m scintigraphy has been combined with CT coregistration technology to produce SPECT/CT scans. These employ the same radioligands and have the same uses as SPECT scanning, but are able to provide even finer 3-D localization of high-uptake tissues, in cases where finer resolution is needed. An example is the sestamibi parathyroid scan which is performed using the 99mTc radioligand sestamibi, and can be done in either SPECT or SPECT/CT machines.
Bone scan
The nuclear medicine technique commonly called the bone scan usually uses 99mTc. It is not to be confused with the "bone density scan", DEXA, which is a low-exposure X-ray test measuring bone density to look for osteoporosis and other diseases where bones lose mass without rebuilding activity. The nuclear medicine technique is sensitive to areas of unusual bone rebuilding activity, since the radiopharmaceutical is taken up by osteoblast cells which build bone. The technique therefore is sensitive to fractures and bone reaction to bone tumors, including metastases. For a bone scan, the patient is injected with a small amount of radioactive material, such as 99mTc-medronic acid, and then scanned with a gamma camera. Medronic acid is a phosphate derivative which can exchange places with bone phosphate in regions of active bone growth, so anchoring the radioisotope to that specific region. To view small lesions (less than ) especially in the spine, the SPECT imaging technique may be required, but currently in the United States, most insurance companies require separate authorization for SPECT imaging.
Myocardial perfusion imaging
Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis of ischemic heart disease. The underlying principle is that, under conditions of stress, diseased myocardium receives less blood flow than normal myocardium. MPI is one of several types of cardiac stress test. As a nuclear stress test, the average radiation exposure is 9.4 mSv, which when compared with a typical 2-view chest X-ray (0.1 mSv) is equivalent to 94 chest X-rays.
Several radiopharmaceuticals and radionuclides may be used for this, each giving different information. In the myocardial perfusion scans using 99mTc, the radiopharmaceuticals 99mTc-tetrofosmin (Myoview, GE Healthcare) or 99mTc-sestamibi (Cardiolite, Bristol-Myers Squibb) are used. Following this, myocardial stress is induced, either by exercise or pharmacologically with adenosine, dobutamine or dipyridamole (Persantine), which increase the heart rate, or by regadenoson (Lexiscan), a vasodilator. (Aminophylline can be used to reverse the effects of dipyridamole and regadenoson). Scanning may then be performed with a conventional gamma camera, or with SPECT/CT.
Cardiac ventriculography
In cardiac ventriculography, a radionuclide, usually 99mTc, is injected, and the heart is imaged to evaluate the flow through it, to evaluate coronary artery disease, valvular heart disease, congenital heart diseases, cardiomyopathy, and other cardiac disorders. As a nuclear stress test, the average radiation exposure is 9.4 mSv, which when compared with a typical 2-view chest X-ray (0.1 mSv) is equivalent to 94 chest X-rays. It exposes patients to less radiation than comparable chest X-ray studies.
Functional brain imaging
Usually the gamma-emitting tracer used in functional brain imaging is 99mTc-HMPAO (hexamethylpropylene amine oxime, exametazime). The similar 99mTc-EC tracer may also be used. These molecules are preferentially distributed to regions of high brain blood flow, and act to assess brain metabolism regionally, in an attempt to diagnose and differentiate the different causal pathologies of dementia. When used with the 3-D SPECT technique, they compete with brain FDG-PET scans and fMRI brain scans as techniques to map the regional metabolic rate of brain tissue.
Sentinel-node identification
The radioactive properties of 99mTc can be used to identify the predominant lymph nodes draining a cancer, such as breast cancer or malignant melanoma. This is usually performed at the time of biopsy or resection. 99mTc-labelled filtered sulfur colloid or Technetium (99mTc) tilmanocept is injected intradermally around the intended biopsy site. The general location of the sentinel node is determined with the use of a handheld scanner with a gamma-sensor probe that detects the technetium-99m–labeled tracer that was previously injected around the biopsy site. An injection of methylene blue or isosulfan blue is done at the same time to dye any draining nodes visibly blue. An incision is then made over the area of highest radionuclide accumulation, and the sentinel node is identified within the incision by inspection; the isosulfan blue dye will usually stain any lymph nodes blue that are draining from the area around the tumor.
Immunoscintigraphy
Immunoscintigraphy incorporates 99mTc into a monoclonal antibody, an immune system protein, capable of binding to cancer cells. A few hours after injection, medical equipment is used to detect the gamma rays emitted by the 99mTc; higher concentrations indicate where the tumor is. This technique is particularly useful for detecting hard-to-find cancers, such as those affecting the intestines. These modified antibodies are sold by the German company Hoechst (now part of Sanofi-Aventis) under the name Scintimun.
Blood pool labeling
When 99mTc is combined with a tin compound, it binds to red blood cells and can therefore be used to map circulatory system disorders. It is commonly used to detect gastrointestinal bleeding sites as well as ejection fraction, heart wall motion abnormalities, abnormal shunting, and to perform ventriculography.
Pyrophosphate for heart damage
A pyrophosphate ion with 99mTc adheres to calcium deposits in damaged heart muscle, making it useful to gauge damage after a heart attack.
Sulfur colloid for spleen scan
The sulfur colloid of 99mTc is scavenged by the spleen, making it possible to image the structure of the spleen.
Meckel's diverticulum
Pertechnetate is actively accumulated and secreted by the mucoid cells of the gastric mucosa, and therefore, technetate(VII) radiolabeled with Tc99m is injected into the body when looking for ectopic gastric tissue as is found in a Meckel's diverticulum with Meckel's Scans.
Pulmonary
Carbon inhalation aerosol labeled with technetium-99m (Technegas) is indicated for the visualization of pulmonary ventilation and the evaluation of pulmonary embolism.
See also
Cholescintigraphy
Isotopes of technetium
Transient equilibrium
Notes
References
Citations
Bibliography
Further reading
External links
99mTc production simulator – IAEA
Metastable isotopes
Medical physics
Radiochemistry
Medicinal radiochemistry
Medical isotopes | Technetium-99m | [
"Physics",
"Chemistry"
] | 6,788 | [
"Applied and interdisciplinary physics",
"Medicinal radiochemistry",
"Metastable isotopes",
"Isotopes",
"Medical physics",
"Radiochemistry",
"Medicinal chemistry",
"Chemicals in medicine",
"Radioactivity",
"Medical isotopes"
] |
9,100,987 | https://en.wikipedia.org/wiki/Lists%20of%20physics%20equations | In physics, there are equations in every field that relate physical quantities to each other and are used to perform calculations. Entire handbooks of equations can only summarize most of the full subject, or else are highly specialized within a certain field. Much of physics is expressed through such formulae.
General scope
Variables commonly used in physics
Continuity equation
Constitutive equation
Specific scope
Defining equation (physical chemistry)
List of equations in classical mechanics
Table of thermodynamic equations
List of equations in wave theory
List of relativistic equations
List of equations in fluid mechanics
List of electromagnetism equations
List of equations in gravitation
List of photonics equations
List of equations in quantum mechanics
List of equations in nuclear and particle physics
See also
List of equations
Operator (physics)
Laws of science
Units and nomenclature
Physical constant
Physical quantity
SI units
SI derived unit
SI electromagnetism units
List of common physics notations | Lists of physics equations | [
"Physics"
] | 179 | [
"Equations of physics",
"Lists of physics equations"
] |
9,103,824 | https://en.wikipedia.org/wiki/Porch%20collapse | Porch collapse or balcony collapse is a phenomenon typically associated with older or poorly constructed multi-storey apartment buildings that have wooden porch extensions on the front or rear of the building. The collapses have a number of causes, including overloading due to excessive weight from overoccupancy (too many people). Overoccupancy can result from guests filling a porch at a party, from people seeking cooler breezes during a heat wave, or from people filling a porch while seeking shelter from the rain. It may be from the weight of furniture/appliances, wading pools, or air conditioner compressors. After years of rain and snow, it may be from rotted wood, soil subsidence under the porch foundation, rust of nails and fasteners, and not being built to specifications required by modern-day building codes. Many older porches were built before codes required them to be able to support a legally mandated load of so many pounds per square foot or metre, and porches are often not as sturdily built as interior structures.
The phenomenon is associated with older or poorly constructed multistorey apartment buildings with wooden porches. Architect Stanley Tigerman said that in New York City one finds steel fire escapes, but in Chicago, the distance to alleys behind multistorey brick buildings encouraged the construction of wooden multistorey porches.
While not an everyday occurrence, collapses happen often enough in Chicago that city building inspectors make a point of checking porches when making inspections. People have been killed and injured by collapses of wooden porches in other cities as well.
A Chicago porch collapse during a get-together in the Lincoln Park neighborhood in 2003 killed 13 people. The weight of approximately 70 people caused the recently renovated porch of the 1890s vintage building to fail. The disaster inspired a 2005 episode of the ER television show. In June 2008, a third-story balcony collapsed in Ottawa, injuring six persons.
Deck collapse
A related hazard is the collapse of decks built as outdoor extensions of single-family homes. When decks became a popular feature, construction practice did not keep up with the new fashion, resulting in many decks that were simply nailed to the side of their houses. The inherently weak connections were prone to failure, resulting in collapse with injuries and occasional death. In recent years building codes have been re-written to require direct structural connection to the adjoining structure, as well as cross-bracing to withstand sway.
High-rise apartments
High-rise apartment buildings are typically constructed from reinforced concrete. Many buildings have been designed with the concrete slab extending through the exterior walls to form balconies. Since they are relatively thin and unprotected from the weather, they are vulnerable to corrosion of reinforcing and potential collapse. Many buildings constructed in the 1950s and 1960s are undergoing repair or removal of such balcony structures at considerable cost.
See also
2003 Chicago balcony collapse
Berkeley balcony collapse
References
Building defects | Porch collapse | [
"Materials_science"
] | 584 | [
"Mechanical failure",
"Building defects"
] |
9,105,131 | https://en.wikipedia.org/wiki/InterPlanetary%20Network | The InterPlanetary Network (IPN) is a group of spacecraft equipped with gamma ray burst (GRB) detectors. By timing the arrival of a burst at several spacecraft, its precise location can be found. The precision for determining the direction of a GRB in the sky is improved by increasing the spacing of the detectors, and also by more accurate timing of the reception. Typical spacecraft baselines of about one AU (astronomical unit) and time resolutions of tens of milliseconds can determine a burst location within several arcminutes, allowing follow-up observations with other telescopes.
Rationale
Gamma rays are too energetic to be focused with mirrors. The rays penetrate mirror materials instead of reflecting. Because gamma rays cannot be focused into an image in the traditional sense, a unique location for a gamma ray source cannot be determined as it is done with less energetic light.
In addition, gamma ray bursts are brief flashes (often as little as 0.2 seconds) that occur randomly across the sky. Some forms of gamma ray telescope can generate an image, but they require longer integration times, and cover only a fraction of the sky.
Once three spacecraft detect a GRB, their timings are sent to the ground for correlation. A sky position is derived, and distributed to the astronomical community for follow-up observations with optical, radio, or spaceborne telescopes.
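For one pair of spacecraft, the measured delay confines the burst to an annulus on the sky; crossing the annuli from two independent baselines (three spacecraft) leaves two small candidate regions, and a third baseline or planet blockage picks one. A sketch of the idealized geometry follows — real IPN error boxes come out larger (typically arcminutes) once cross-correlation uncertainty, baseline geometry, and ephemeris and clock errors are included:

```python
import math

C = 299_792_458.0      # speed of light, m/s
AU = 1.495978707e11    # astronomical unit, m

def annulus(delay_s, baseline_au, timing_sigma_s):
    """Half-angle of the IPN annulus between the source direction and the
    spacecraft-spacecraft baseline, plus its angular half-width (idealized)."""
    d = baseline_au * AU
    theta = math.acos(C * delay_s / d)                     # annulus half-angle, radians
    half_width = C * timing_sigma_s / (d * math.sin(theta))
    return math.degrees(theta), math.degrees(half_width) * 3600.0  # degrees, arcseconds

theta_deg, width_arcsec = annulus(delay_s=200.0, baseline_au=1.0, timing_sigma_s=0.03)
print(f"annulus at ~{theta_deg:.0f} deg from the baseline, half-width ~{width_arcsec:.0f} arcsec")
```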
Iterations of the IPN
Note that, since any IPN must consist of several spacecraft, the boundaries between networks are defined differently by different commentators.
Spacecraft naturally join or leave service as their missions unfold, and some modern spacecraft are far more capable than prior IPN members.
A "planetary network"
The Vela group of satellites was originally designed to detect covert nuclear tests, possibly at the Moon's altitude. Thus, the Velas were placed in high orbits, so that a time delay would occur between spacecraft triggers. In addition, each satellite had multiple gamma-ray detectors across their structures; the detectors facing a blast would register a higher gamma count than the detectors facing away.
A gamma-ray burst was detected by the Vela group on June 3, 1969, and thus referred to as GRB 690603. The location was determined to be clearly outside of the satellites' orbit, and probably outside of the Solar system. After reviewing archived Vela data, a previous burst was determined to have occurred on July 2, 1967. The initial GRB detections were not publicly reported until the early 1970s.
Further missions
Additional spacecraft were given gamma-ray detectors. The Apollo 15 and 16 missions carried detectors to study the Moon; middle-to-late Venera spacecraft carried detectors to Venus. The relatively long baselines of these missions again showed that bursts originated at great distances. Other spacecraft (such as the OGO, OSO, and IMP series) had detectors for Earth, Solar, or all-sky gamma radiation, and also confirmed the GRB phenomenon.
The first true IPN
Scientists began to tailor instruments specifically for GRBs. The Helios-2 spacecraft carried a detector with precision time resolution to a Solar orbit that took it over one AU from Earth. Helios-2 was launched in 1976.
In 1978, multiple spacecraft were launched, forming the necessary baselines for a position determination. The Pioneer Venus Orbiter and its Soviet counterparts, Venera 11 and 12, took gamma detectors to the orbit of Venus. In addition, the spacecraft Prognoz-7 and ISEE-3 remained in Earth orbit. These formed an Earth-Venus-Sun triangle, and the probes at Venus formed a smaller triangle. 84 bursts were detected, until the network degraded in 1980. The Pioneer Venus Orbiter continued until it entered the Venus atmosphere in 1992, but not enough other spacecraft were functioning to form the required baselines.
On March 5 and 6, 1979, two bursts of hard X-rays were detected from the same source in the constellation Dorado by the γ-ray burst detector Konus, on the Venera 11 and Venera 12 spacecraft. These X-ray bursts were detected by several other spacecraft. As part of the InterPlanetary Network (IPN), Venera 11 and Venera 12 were hit by the March 5, 1979, hard X-ray burst at ~10:51 EST, followed 11 s later by Helios 2 in orbit around the Sun, then the Pioneer Venus Orbiter at Venus. Seconds later the Vela satellites, Prognoz 7, and the Einstein Observatory in orbit around Earth were inundated. The last satellite hit was ISEE-3, before the burst exited the Solar System.
The second IPN
Pioneer Venus Orbiter was rejoined by Ulysses in 1990. The launch of the Compton Gamma-Ray Observatory in 1991 again formed triangular baselines with PVO and Ulysses. Ulysses continued until June 2009, and the PVO mission ended in August 1992.
Compton once again brought directional discrimination with the BATSE instrument. Like the Velas, BATSE placed detectors at the spacecraft corners. Thus, Compton alone could determine a coarse burst location, to within 1.6 to 4 degrees. Baselines with other spacecraft were then used to sharpen Compton's position solutions. In addition, almost half the sky from Compton was blocked by the Earth, just as Venus blocked part of the sky for PVO. Detection or non-detection by Compton or PVO added another element to the location algorithms.
Compton also had high-precision, low-field-of-view gamma instruments. Occasionally, GRBs would occur where Compton happened to be pointing. The use of multiple, sensitive instruments would provide much more accuracy than BATSE alone.
The "third" IPN
Compton and Ulysses were joined briefly by Mars Observer in late 1992, before that spacecraft failed. Some feel that Compton provided sufficient continuity, and that the distinction between 2nd, 3rd, and subsequent IPNs is semantic.
"Additional" IPNs
Compton and Ulysses were joined by Wind in 1994. Although Wind was in Earth orbit, like Compton, its altitude was very high, thus forming a short but usable baseline. The high altitude also meant that Earth blockage was negligible. In addition, Wind carried a top and bottom detector. Interpolation between the two units usually gave a general sky direction for bursts, which in many cases could augment the IPN algorithm. The addition of RXTE in 1995 also helped. Although RXTE was an X-ray mission in Earth orbit, it could detect those gamma-ray bursts which also shone in X-rays, and give a direction (rather than merely a time trigger) for them.
Two important developments occurred in 1996. NEAR was launched; its trajectory to an asteroid again formed a triangular IPN measured in AUs. The IPN was also joined by BeppoSAX. BeppoSAX had wide-field gamma detectors, and narrow-field X-ray telescopes. Once a GRB was detected, operators could spin the spacecraft within hours to point the X-ray telescopes at the coarse location. The X-ray afterglow would then give a fine location. In 1997, the first fine location allowed detailed study of a GRB and its environ.
Compton was deorbited in 2000; the NEAR mission was shut down in early 2001. In late 2001, the Mars Odyssey spacecraft again formed an interplanetary triangle.
Other members of the network include or have included the Indian SROSS-C2 spacecraft, the US Air Force's Defense Meteorological Satellites, the Japanese Yohkoh spacecraft, and the Chinese SZ-2 mission. These have all been Earth orbiters, and the Chinese and Indian detectors were operational for only a few months.
Of all the above, Ulysses is the only spacecraft whose orbit takes it large distances away from the ecliptic plane. These deviations from the ecliptic plane allow more precise 3-D measurements of the apparent positions of the GRBs.
The 21st century: staring spacecraft
New techniques and designs in high-energy astronomy spacecraft are challenging the traditional operation of the IPN. Because distant probes require sensitive ground antennas for communication, they introduce a time lag into GRB studies. Large ground antennas must split time between spacecraft, rather than listen continuously for GRB notifications. Typically, GRB coordinates determined by deep space probes are distributed many hours to a day or two after the GRB. This is very frustrating for studies of events which are measured in seconds.
A new generation of spacecraft are designed to produce GRB locations on board, then relay them to the ground within minutes or even seconds. These positions are based not on time correlation, but on X-ray telescopes, as on BeppoSAX but much faster. HETE-2, launched in 2000, stares at a large region of sky. Should a GRB trigger the gamma detectors, X-ray masks report sky coordinates to ground stations. Because HETE is in a low, consistent orbit, it can use many inexpensive ground stations. There is almost always a ground station in view of the spacecraft, which reduces latency to seconds.
The Swift spacecraft, launched in 2004, is similar in operation but much more powerful. When a GRB triggers the gamma detectors, generating a crude position, the spacecraft spins relatively rapidly to use its focusing X-ray and optical telescopes. These refine the GRB location to within arcminutes, and often within arcseconds. The fine position is reported to the ground in approximately an hour.
INTEGRAL is a successor to Compton. INTEGRAL can similarly determine a coarse position by comparing gamma counts from one side to another. It also possesses a gamma-ray telescope with an ability to determine positions to under a degree. INTEGRAL cannot pivot rapidly like the small HETE and Swift spacecraft. But should a burst happen to occur in its telescope field of view, its position and characteristics can be recorded with high precision.
RHESSI was launched in 2002 to perform solar studies. However, its gamma instrument could detect bright gamma sources from other regions of the sky, and produce coarse positions through differential detectors. Occasionally, a GRB would appear next to the Sun, and the RHESSI instrument would determine its properties without IPN assistance.
Note however, that all these spacecraft suffer from Earth blockage to varying degrees. Also, the more sophisticated the "staring" instrument, the lower the sky coverage. Randomly occurring GRBs are more likely to be missed, or detected at low resolution only. The use of non-directional deep space probes, such as MESSENGER and BepiColombo, will continue.
Current IPN developments
In 2007 AGILE was launched, followed in 2008 by the Fermi Gamma-ray Space Telescope; although these are Earth orbiters, their instruments provide directional discrimination. The Fermi Space Telescope uses both wide-area burst detectors and a narrow-angle telescope, and has a limited ability to spin itself to place a GRB within the telescope field. MESSENGER's Gamma Ray Neutron Spectrometer was able to add data to the IPN, before the end of MESSENGER's mission in 2015. Due to falling power from its RTG, Ulysses was decommissioned on June 30, 2009.
See also
Gamma-ray Burst Coordinates Network
References
External links
Third Interplanetary Network Current IPN website, including data for download, etc.
IPN Progress Report A Quarterly Refereed Journal
IPN Status Report IPN status as of September 24, 2007.
Proposed spacecraft
Gamma-ray astronomy
Gamma-ray bursts | InterPlanetary Network | [
"Physics",
"Astronomy"
] | 2,332 | [
"Gamma-ray astronomy",
"Physical phenomena",
"Astronomical events",
"Gamma-ray bursts",
"Stellar phenomena",
"Astronomical sub-disciplines"
] |
9,105,584 | https://en.wikipedia.org/wiki/Leverett%20J-function | In petroleum engineering, the Leverett J-function is a dimensionless function of water saturation describing the capillary pressure,

J(S_w) = \frac{p_c(S_w)}{\gamma \cos\theta} \sqrt{\frac{k}{\phi}}

where S_w is the water saturation measured as a fraction, p_c is the capillary pressure (in pascal), k is the permeability (measured in m²), \phi is the porosity (0-1), \gamma is the surface tension (in N/m) and \theta is the contact angle. The function is important in that it is constant for a given saturation within a reservoir, thus relating reservoir properties for neighboring beds.
The Leverett J-function is an attempt at extrapolating capillary pressure data for a given rock to rocks that are similar but with differing permeability, porosity and wetting properties. It assumes that the porous rock can be modelled as a bundle of non-connecting capillary tubes, where the factor \sqrt{k/\phi} is a characteristic length of the capillaries' radii.
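Because J is taken to be the same function of saturation for similar rocks, a capillary pressure curve measured on one core can be rescaled to a rock with different permeability and porosity. A minimal sketch, assuming identical fluids and wettability so that the γ cos θ term cancels; the sample values (5 kPa, 100 mD, 30 mN/m, 20°) are invented for illustration:

```python
import math

MILLIDARCY = 9.869e-16  # m^2 per millidarcy

def leverett_j(p_c, k, phi, gamma, theta_deg):
    """Dimensionless Leverett J-function: J = p_c * sqrt(k/phi) / (gamma * cos(theta))."""
    return p_c * math.sqrt(k / phi) / (gamma * math.cos(math.radians(theta_deg)))

def rescale_pc(p_c_ref, k_ref, phi_ref, k_new, phi_new):
    """Capillary pressure in a second rock at the same saturation, assuming the same
    fluids and wettability so the gamma*cos(theta) factor cancels out of the J equality."""
    return p_c_ref * math.sqrt((k_ref / phi_ref) * (phi_new / k_new))

# 5 kPa measured on a 100 mD, 25% porosity core, rescaled to a 10 mD, 20% porosity rock.
p_c_new = rescale_pc(5e3, 100 * MILLIDARCY, 0.25, 10 * MILLIDARCY, 0.20)
print(f"{p_c_new:.0f} Pa")  # ~14 kPa

# Consistency check: both rocks give the same J at this saturation by construction.
j_ref = leverett_j(5e3, 100 * MILLIDARCY, 0.25, gamma=0.03, theta_deg=20)
j_new = leverett_j(p_c_new, 10 * MILLIDARCY, 0.20, gamma=0.03, theta_deg=20)
print(j_ref, j_new)
```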
This function is also widely used in modeling two-phase flow of proton-exchange membrane fuel cells. A large degree of hydration is needed for good proton conductivity while large liquid water saturation in pores of catalyst layer or diffusion media will impede gas transport in the cathode.
J-function in analyzing capillary pressure data is analogous with TEM-function in analyzing relative permeability data.
See also
Amott test
TEM-function
References
External links
http://www.ux.uis.no/~s-skj/ResTek1-v03/Notater/Tamu.Lecture.Notes/Capillary.Pressure/Lecture_16.ppt
http://perminc.com/resources/fundamentals-of-fluid-flow-in-porous-media/chapter-2-the-porous-medium/multi-phase-saturated-rock-properties/averaging-capillary-pressure-data-leverett-j-function/
Leverett J-Function in Multiphase Saturated Rocks
Petroleum engineering | Leverett J-function | [
"Chemistry",
"Engineering"
] | 418 | [
"Petroleum engineering",
"Energy engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
9,105,774 | https://en.wikipedia.org/wiki/Cable%20lacing | Cable lacing is a method for tying wiring harnesses and cable looms, traditionally used in telecommunication, naval, and aerospace applications. This old cable management technique, taught to generations of lineworkers, is still used in some modern applications since it does not create obstructions along the length of the cable, avoiding the handling problems of cables groomed by plastic or hook-and-loop cable ties.
Cable lacing uses a thin cord, which is traditionally made of waxed linen, to bind together a group of cables using a series of running lockstitches. Flat lacing tapes made of modern materials such as nylon, polyester, Teflon, fiberglass, and Nomex are also available with a variety of coatings to improve knot holding.
Styles
The lacing begins and ends with a whipping or other knot to secure the free ends. Wraps are spaced relative to the overall harness diameter to maintain the wiring in a tight, neat bundle, and the ends are then neatly trimmed. In addition to continuous or running lacing, there are a variety of lacing patterns used in different circumstances. In some cases stand-alone knots called spot ties are also used. For lashing large cables and cable bundles to support structures in telecommunications applications, there are two named cable lacing styles: the "Chicago stitch" and "Kansas City stitch".
Some organizations have in-house standards to which cable lacing must conform, for example NASA specifies its cable lacing techniques in chapter 9 of NASA-STD-8739.4.
Examples
Notes and references
External links
NASA Technical Standard NASA-STD-8739.4 on Crimping, Interconnecting Cables, Harnesses, and Wiring
Online excerpt from Electronic Installation Practices Manual (1951), "Chapter 9, Cabling"
Online excerpt from Workmanship and Design Practices for Electronic Equipment (1962)
Cable lacing tutorial using modern lacing tape
History, tools, and techniques
FAA Advisory Circular 43.13-1B paragraph 11-158
Signal cables
Aerospace engineering
Ropework | Cable lacing | [
"Engineering"
] | 414 | [
"Aerospace engineering"
] |
17,807,140 | https://en.wikipedia.org/wiki/Priming%20%28immunology%29 | Priming is the first contact that antigen-specific T helper cell precursors have with an antigen. It is essential to the T helper cells' subsequent interaction with B cells to produce antibodies. Priming of antigen-specific naive lymphocytes occurs when antigen is presented to them in immunogenic form (capable of inducing an immune response). Subsequently, the primed cells will differentiate either into effector cells or into memory cells that can mount stronger and faster response to second and upcoming immune challenges. T and B cell priming occurs in the secondary lymphoid organs (lymph nodes and spleen).
Priming of naïve T cells requires dendritic cell antigen presentation. Priming of naive CD8 T cells generates cytotoxic T cells capable of directly killing pathogen-infected cells. CD4 cells develop into a diverse array of effector cell types depending on the nature of the signals they receive during priming. CD4 effector activity can include cytotoxicity, but more frequently it involves the secretion of a set of cytokines that directs the target cell to make a particular response. This activation of naive T cell is controlled by a variety of signals: recognition of antigen in the form of a peptide: MHC complex on the surface of a specialized antigen-presenting cell delivers signal 1; interaction of co-stimulatory molecules on antigen-presenting cells with receptors on T cells delivers signal 2 (one notable example includes a B7 ligand complex on antigen-presenting cells binding to the CD28 receptor on T cells); and cytokines that control differentiation into different types of effector cells deliver signal 3.
Cross-priming
Cross-priming refers to the stimulation of antigen-specific CD8+ cytotoxic T lymphocytes (CTLs) by dendritic cell presenting an antigen acquired from the outside of the cell. Cross-priming is also called immunogenic cross-presentation. This mechanism is vital for priming of CTLs against viruses and tumours.
Immune priming (invertebrate immunity)
Immune priming is a memory-like phenomenon observed in invertebrate animal taxa, first described by Hans G. Boman and colleagues using Drosophila fruit flies. In vertebrates, immune memory is based on adaptive immune cells called B and T lymphocytes, which provide an enhanced and faster immune response when challenged with the same pathogen for a second time. It is evolutionarily advantageous for an organism to produce a rapid immune response to common pathogens it is likely to be exposed to again. In the 1940s-1960s, the budding field of immunology assumed that invertebrates did not have memory-like immune functions, as they do not produce the antibodies needed for adaptive immunity. In 1972, Boman and colleagues' experiments overturned this assumption, showing that fruit flies could be "vaccinated" against a repeat infection by the same bacteria if they were first exposed to a freeze-thawed pathogen. Flies previously exposed to freeze-thawed bacteria cleared subsequent infection better than naive flies. Since then, evidence supporting innate memory-like functions has been found across model invertebrates, including insects and crustaceans.
Mechanism of immune priming
Immune priming studies commonly find that the mechanism conferring defense against a given pathogen depends on the insect species and the microbe used in a given experiment. That could be due to host–pathogen coevolution: each species benefits from developing a specialised defense against the pathogens (e.g. bacterial strains) it encounters most often. In the arthropod model, the red flour beetle Tribolium castaneum, it has been shown that the route of infection (cuticular, septic or oral) matters for how the defence mechanism is generated. Innate immunity in insects is based on non-cellular mechanisms, including production of antimicrobial peptides (AMPs), reactive oxygen species (ROS) or activation of the prophenol oxidase cascade. Cellular components of insect innate immunity are hemocytes, which can eliminate pathogens by nodulation, encapsulation or phagocytosis. The innate response during immune priming differs with the experimental setup, but it generally involves enhancement of humoral innate immune mechanisms and increased levels of hemocytes. There are two hypothetical scenarios of immune induction on which the immune priming mechanism could be based. In the first, the priming antigens induce long-lasting defences, such as circulating immune molecules, which remain in the host body until the secondary encounter. In the second, the initial priming response declines, but the defence mounted upon a secondary challenge is stronger. The most probable scenario is a combination of these two mechanisms.
Trans-generational immune priming
Trans-generational immune priming (TGIP) describes the transfer of parental immunological experience to progeny, which may improve offspring survival when challenged with the same pathogen. A similar mechanism of offspring protection against pathogens has long been studied in vertebrates, where the transfer of maternal antibodies helps the newborn fight infection before its own immune system can function properly on its own. Over the last two decades TGIP in invertebrates has been studied intensively. Evidence supporting TGIP has been found in coleopteran, crustacean, hymenopteran, orthopteran and mollusk species, but in some other species the results remain contradictory. The experimental outcome can be influenced by the procedure used in a particular investigation, including the infection procedure, the sex of the offspring and of the parent, and the developmental stage.
References
Priming | Priming (immunology) | [
"Biology"
] | 1,167 | [
"Immunology"
] |
17,809,734 | https://en.wikipedia.org/wiki/Carumonam | Carumonam (INN) is a monobactam antibiotic. It is very resistant to beta-lactamases, which means that it is more difficult for bacteria to break down using β-lactamase enzymes.
References
Monobactam antibiotics
Thiazoles
Carbamates
Sulfamates | Carumonam | [
"Chemistry"
] | 65 | [
"Sulfamates",
"Functional groups"
] |
17,810,899 | https://en.wikipedia.org/wiki/Multifuel | Multifuel, sometimes spelled multi-fuel, is any type of engine, boiler, or heater or other fuel-burning device which is designed to burn multiple types of fuels in its operation. One common application of multifuel technology is in military settings, where the normally-used diesel or gas turbine fuel might not be available during combat operations for vehicles or heating units. Multifuel engines and boilers have a long history, but the growing need to establish fuel sources other than petroleum for transportation, heating, and other uses has led to increased development of multifuel technology for non-military use as well, leading to many flexible-fuel vehicle designs in recent decades.
A multifuel engine is constructed so that its compression ratio permits firing the lowest octane fuel of the various accepted alternative fuels. A strengthening of the engine is necessary in order to meet these higher demands. Multifuel engines sometimes have switch settings that are set manually to take different octanes, or types, of fuel.
Types
Multifuel systems can be classified by the fuel-burning appliance it is based on. For internal combustion engines there are:
Multifuel diesel engines.
Multifuel gas turbines.
Flexible-fuel petrol engines. Limited to fuels that can be spark-ignited.
For heaters, see multi-fuel stove.
Military multifuel engines
One common use of this technology is in military vehicles, so that they may run a wide range of alternative fuels such as gasoline or jet fuel. This is seen as desirable in a military setting as enemy action or unit isolation may limit the available fuel supply, and conversely enemy fuel sources, or civilian sources, may become available for usage.
One large use of a military multifuel engine was the LD series used in the US M35 2½-ton and M54 5-ton trucks built between 1963 and 1970. A military standard design using M.A.N. technology, it was able to use different fuels without preparation. Its primary fuel was Diesel #1, #2, or AP, but 70% to 90% of other fuels could be mixed with diesel, depending on how smoothly the engine would run. Low-octane commercial and aviation gasoline could be used if engine oil was added; jet fuels Jet A, Jet B, JP-4, JP-5, JP-7, and JP-8 could be used, as well as fuel oil #1 and #2. In practice, they only used diesel fuel, their tactical advantage was never needed, and in time they were replaced with commercial diesel engines. Another use of multifuel engines is the American M1 Abrams Main battle tank, which uses a multifuel gas turbine engine.
Currently, a wide range of Russian military vehicles employ multifuel engines, such as the T-72 tank (multifuel diesel) and the T-80 (multifuel gas turbine).
Non-military usage
Many other types of engines and other heat-generating machinery are designed to burn more than one type of fuel. For instance, some heaters and boilers designed for home use can burn wood, pellets, and other fuel sources. These offer fuel flexibility and security, but are more expensive than are standard single fuel engines. Portable stoves are sometimes designed with multifuel functionality, in order to burn whatever fuel is found during an outing. Innovative industrial heaters or burners were the subject of multi-fuel research at a Shell plant in 2014.
The movement to establish alternatives to automobiles running solely on gasoline has greatly increased the number of automobiles available which use multifuel engines, such vehicles generally being termed a bi-fuel vehicle or flexible-fuel vehicle.
Underperformance issues
Multifuel engines are not necessarily underpowered, but in practice some engines have had issues with power due to design compromises necessary to burn multiple types of fuel in the same engine. Perhaps the most notorious example from a military perspective is the L60 engine used by the British Chieftain Main Battle Tank, which resulted in a very sluggish performance – in fact, the Mark I Chieftain (used only for training and similar activities) was so underpowered that some were incapable of mounting a tank transporter. An equally serious issue was that changing from one fuel to another often required hours of preparation.
The US LD series had a power output comparable to commercial diesels of the time. It was underpowered for the 5-ton trucks, but that was due to the engine size itself; the replacement diesel was much larger and more powerful. The LD engines did burn diesel fuel poorly and were very smoky. The final LDT-465 model received a turbocharger largely to clean up the exhaust; there was little power increase.
See also
Flexible-fuel vehicle
Multi-fuel stove
Footnotes
References
Dunstan, Simon. Chieftain Main Battle Tank 1965–2003. Osprey Publishing, 2003.
Jacobson, Cliff. Expedition Canoeing: A Guide To Canoeing Wild Rivers In North America. Globe Pequot, 2005.
Pahl, Greg. Natural Home Heating: The Complete Guide to Renewable Energy. Chelsea Green Publishing, 2003.
Taylor, Charles Fayette. The Internal-combustion Engine in Theory and Practice. MIT Press, 1985.
Engines
Fuel technology
Energy development | Multifuel | [
"Physics",
"Technology"
] | 1,056 | [
"Physical systems",
"Machines",
"Engines"
] |
13,838,905 | https://en.wikipedia.org/wiki/Anatomic%20space | In anatomy, a spatium or anatomic space is a space (cavity or gap). Anatomic spaces are often landmarks to find other important structures. When they fill with gases (such as air) or liquids (such as blood) in pathological ways, they can suffer conditions such as pneumothorax, edema, or pericardial effusion. Many anatomic spaces are potential spaces, which means that they are potential rather than realized (with their realization being dynamic according to physiologic or pathophysiologic events). In other words, they are like an empty plastic bag that has not been opened (two walls collapsed against each other; no interior volume until opened) or a balloon that has not been inflated.
Examples of anatomic spaces (or potential spaces) include:
Axillary space
Buccal space
Canine space
Cystohepatic triangle
Deep perineal space
Deep temporal space
Epidural space
Extraperitoneal space
Fascial spaces of the head and neck
Infratemporal space
Intercostal space
Intermembrane space
Interstitial spaces
Mental space
Pericardial space
Intraperitoneal space
Pleural space
Potential space
Pterygomandibular space
Quadrangular space
Retroperitoneal space
Retropharyngeal space
Retropubic space
Subarachnoid space
Subdural space
Sublingual space
Submandibular space
Submasseteric space
Traube's space
See also
Body cavity
Anatomy | Anatomic space | [
"Biology"
] | 315 | [
"Anatomy"
] |
13,844,097 | https://en.wikipedia.org/wiki/Packing%20dimension | In mathematics, the packing dimension is one of a number of concepts that can be used to define the dimension of a subset of a metric space. Packing dimension is in some sense dual to Hausdorff dimension, since packing dimension is constructed by "packing" small open balls inside the given subset, whereas Hausdorff dimension is constructed by covering the given subset by such small open balls. The packing dimension was introduced by C. Tricot Jr. in 1982.
Definitions
Let (X, d) be a metric space with a subset S ⊆ X and let s ≥ 0 be a real number. The s-dimensional packing pre-measure of S is defined to be
P_0^s(S) = \lim_{\delta \downarrow 0} \sup \Big\{ \sum_{i \in I} (2 r_i)^s \;\Big|\; \{ \bar{B}(x_i, r_i) \}_{i \in I} \text{ is a countable collection of pairwise disjoint closed balls with centres } x_i \in S \text{ and radii } r_i \leq \delta \Big\}.
Unfortunately, this is just a pre-measure and not a true measure on subsets of X, as can be seen by considering dense, countable subsets. However, the pre-measure leads to a bona fide measure: the s-dimensional packing measure of S is defined to be
P^s(S) = \inf \Big\{ \sum_{j} P_0^s(S_j) \;\Big|\; S \subseteq \bigcup_{j} S_j \text{, the cover being countable} \Big\},
i.e., the packing measure of S is the infimum of the packing pre-measures of countable covers of S.
Having done this, the packing dimension dimP(S) of S is defined analogously to the Hausdorff dimension:
\dim_P(S) = \sup \{ s \geq 0 \mid P^s(S) = +\infty \} = \inf \{ s \geq 0 \mid P^s(S) = 0 \}.
An example
The following example is the simplest situation where Hausdorff and packing dimensions may differ.
Fix a sequence such that and . Define inductively a nested sequence of compact subsets of the real line as follows: Let . For each connected component of (which will necessarily be an interval of length ), delete the middle interval of length , obtaining two intervals of length , which will be taken as connected components of . Next, define . Then is topologically a Cantor set (i.e., a compact totally disconnected perfect space). For example, will be the usual middle-thirds Cantor set if .
It is possible to show that the Hausdorff and the packing dimensions of the set are given respectively by:
It follows easily that given numbers , one can choose a sequence as above such that the associated (topological) Cantor set has Hausdorff dimension and packing dimension .
Generalizations
One can consider dimension functions more general than "diameter to the s": for any function h : [0, +∞) → [0, +∞], let the packing pre-measure of S with dimension function h be given by
P_0^h(S) = \lim_{\delta \downarrow 0} \sup \Big\{ \sum_{i \in I} h(2 r_i) \;\Big|\; \{ \bar{B}(x_i, r_i) \}_{i \in I} \text{ is a countable collection of pairwise disjoint closed balls with centres in } S \text{ and radii } r_i \leq \delta \Big\}
and define the packing measure of S with dimension function h by
P^h(S) = \inf \Big\{ \sum_{j} P_0^h(S_j) \;\Big|\; S \subseteq \bigcup_{j} S_j \text{, the cover being countable} \Big\}.
The function h is said to be an exact (packing) dimension function for S if Ph(S) is both finite and strictly positive.
Properties
If S is a subset of n-dimensional Euclidean space Rn with its usual metric, then the packing dimension of S is equal to the upper modified box dimension of S: \dim_P S = \overline{\dim}_{\mathrm{MB}} S. This result is interesting because it shows how a dimension derived from a measure (packing dimension) agrees with one derived without using a measure (the modified box dimension).
Note, however, that the packing dimension is not equal to the box dimension. For example, the set of rationals Q has box dimension one and packing dimension zero.
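For well-behaved self-similar sets such as the middle-thirds Cantor set, the packing, Hausdorff and box dimensions all coincide (here log 2/log 3 ≈ 0.6309), so a crude box-counting computation illustrates the value. The sketch below (not from the source; names are illustrative) is not a general algorithm for packing dimension, which, as the rationals show, can differ from the box dimension:

import math

def cantor_points(depth):
    """Left endpoints of the 2**depth intervals of the level-`depth`
    middle-thirds Cantor construction (a finite approximation of the set)."""
    pts = [0.0]
    length = 1.0
    for _ in range(depth):
        length /= 3.0
        pts = [p for x in pts for p in (x, x + 2.0 * length)]
    return pts

def box_count(points, eps):
    """Number of boxes of side eps needed to cover the points
    (small tolerance guards against floating-point rounding at box edges)."""
    return len({math.floor(p / eps + 1e-9) for p in points})

pts = cantor_points(12)
# Estimate the slope of log N(eps) against log(1/eps) over a range of scales.
for n in (4, 6, 8, 10):
    eps = 3.0 ** (-n)
    N = box_count(pts, eps)
    print(f"eps = 3^-{n}:  N = {N},  log N / log(1/eps) = {math.log(N) / math.log(1 / eps):.4f}")
# The printed ratios approach log 2 / log 3 ~= 0.6309.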
See also
Hausdorff dimension
Minkowski–Bouligand dimension
References
Dimension theory
Fractals
Metric geometry | Packing dimension | [
"Mathematics"
] | 641 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
2,330,489 | https://en.wikipedia.org/wiki/Phosphate-buffered%20saline | Phosphate-buffered saline (PBS) is a buffer solution (pH ~ 7.4) commonly used in biological research. It is a water-based salt solution containing disodium hydrogen phosphate, sodium chloride and, in some formulations, potassium chloride and potassium dihydrogen phosphate. The buffer helps to maintain a constant pH. The osmolarity and ion concentrations of the solutions are isotonic, meaning they match those of the human body.
Applications
PBS has many uses because it is isotonic and non-toxic to most cells. These uses include substance dilution and cell container rinsing. PBS with EDTA is also used to disengage attached and clumped cells. Divalent metals such as zinc, however, cannot be added as this will result in precipitation. For these types of applications, Good's buffers are recommended. PBS has been shown to be an acceptable alternative to viral transport medium regarding transport and storage of RNA viruses, such as SARS-CoV-2.
Preparation
There are many different ways to prepare PBS solutions; common ones are Dulbecco's phosphate-buffered saline (DPBS) and the Cold Spring Harbor protocol. Some formulations of DPBS do not contain potassium and magnesium, while others contain calcium and/or magnesium (depending on whether the buffer is used on live or fixed tissue: the latter does not require CaCl2 or MgCl2).
Start with 800 mL of distilled water to dissolve all salts. Add distilled water to a total volume of 1 liter. The resultant 1× PBS will have a final concentration of 157 mM Na+, 140 mM Cl−, 4.45 mM K+, 10.1 mM HPO42−, 1.76 mM H2PO4− and a pH of 7.96. Add 2.84 mmol of HCl per liter to shift the buffer to 7.3 mM HPO42− and 4.6 mM H2PO4−, for a final pH of 7.4 and a Cl− concentration of 142 mM.
The pH of PBS is ~7.4. When making buffer solutions, it is good practice to always measure the pH directly using a pH meter. If necessary, pH can be adjusted using hydrochloric acid or sodium hydroxide.
PBS can also be prepared by using commercially made PBS buffer tablets or pouches.
If used in cell culturing, the solution can be dispensed into aliquots and sterilized by autoclaving or filtration. Sterilization may not be necessary depending on its use. PBS can be stored at room temperature or in the refrigerator. However, concentrated stock solutions may precipitate when cooled and should be kept at room temperature until precipitate has completely dissolved before use.
Dependence of pH on ionic strength and temperature
The Henderson–Hasselbalch equation gives the pH of a solution relative to the pKa of the acid–base pair. However the pKa is dependent on ionic strength and temperature, and as it shifts so will the pH of a solution based on that acid–base pair. Because the doubly charged [HPO4]2− is stabilized more by high ionic strength than is the singly-charged [H2PO4]−, their pKa is somewhat dependent on ionic strength. The often-cited pKa of ~7.2 is the value extrapolated to zero ionic strength, and is not applicable at physiological ionic strength.
Phillips et al. measured the pKa at 10, 25, and 37 °C at various ionic strengths. For the latter two temperatures they report pKa in Debye-Hückel equations (plotted in the accompanying figure for μ up to 0.5 M):
at 25 °C: pKa2 = 7.18 − 1.52 sqrt(μ) + 1.96 μ
at 37 °C: pKa2 = 7.15 − 1.56 sqrt(μ) + 1.22 μ
The pKa0 is weakly dependent on temperature. Phillips et al. reported ∆H0 at 25 °C of 760 cal/mol (3180 J/mol) and a linear dependence of pKa0 on 1/T (Van 't Hoff equation). The positive ∆H0 results in an increase in Ka, and thus a decrease in pKa0 with rising temperature, the change in pKa0 being 166 × the change in (1/T), which around 25 °C results in a change in pKa0 of −0.00187 per degree. This applies strictly to the extrapolated thermodynamic pKa0 at infinite dilution, and as the figure shows, the temperature effect can be much larger at higher ionic strength.
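The two fitted expressions above are easy to tabulate. The following minimal sketch (not from the source; function names are illustrative) evaluates the quoted Debye–Hückel-type fits for pKa2 over a range of ionic strengths:

import math

def pka2_25C(mu):
    """Phillips et al. fit for the second pKa of phosphate at 25 C."""
    return 7.18 - 1.52 * math.sqrt(mu) + 1.96 * mu

def pka2_37C(mu):
    """Phillips et al. fit for the second pKa of phosphate at 37 C."""
    return 7.15 - 1.56 * math.sqrt(mu) + 1.22 * mu

for mu in (0.0, 0.05, 0.1, 0.16, 0.3, 0.5):   # ionic strength in mol/L
    print(f"mu = {mu:4.2f} M   pKa2(25 C) = {pka2_25C(mu):.2f}   "
          f"pKa2(37 C) = {pka2_37C(mu):.2f}")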
See also
Borate-buffered saline
Tris-buffered saline
Brine
References
External links
http://www.bioind.com/products/cell-culture/cell-culture-reagents/balanced-salt-solutions/dpbs-dulbecco-s-phosphate-buffered-saline/
Buffer solutions
Biochemistry
Biochemistry methods
Cell culture reagents | Phosphate-buffered saline | [
"Chemistry",
"Biology"
] | 1,061 | [
"Biochemistry methods",
"Buffer solutions",
"Cell culture reagents",
"nan",
"Biochemistry",
"Reagents for biochemistry"
] |
2,331,095 | https://en.wikipedia.org/wiki/Molecular%20memory | Molecular memory is a term for data storage technologies that use molecular species as the data storage element, rather than e.g. circuits, magnetics, inorganic materials or physical shapes. The molecular component can be described as a molecular switch, and may perform this function by any of several mechanisms, including charge storage, photochromism, or changes in capacitance. In a perfect molecular memory device, each individual molecule contains a bit of data, leading to massive data capacity. However, practical devices are more likely to use large numbers of molecules for each bit, in the manner of 3D optical data storage (many examples of which can be considered molecular memory devices). The term "molecular memory" is most often used to mean very fast, electronically addressed solid-state data storage, as is the term computer memory. At present, molecular memories are still found only in laboratories.
Examples
One approach to molecular memories is based on special compounds such as porphyrin-based polymers which are capable of storing electric charge. Once a certain voltage threshold is achieved the material oxidizes, releasing an electric charge. The process is reversible, in effect creating an electric capacitor. The properties of the material allow for a much greater capacitance per unit area than with conventional DRAM memory, thus potentially leading to smaller and cheaper integrated circuits.
Several universities and a number of companies (Hewlett-Packard, ZettaCore) have announced work on molecular memories, which some hope will supplant DRAM memory as the lowest cost technology for high-speed computer memory. NASA is also supporting research on non-volatile molecular memories.
In 2018, researchers from the University of Jyväskylä in Finland developed a molecular memory which can retain the direction of a magnetic field for long periods after the field is switched off at extremely low temperatures, which could help increase the storage capacity of hard disk drives without enlarging their physical size.
References
External links
Nonvolatile Molecular Memory - NASA
Molecular memory a game-changer - article from Phys.Org
DNA-interfaced Molecular Memory - article from Acs.org
Graphene flash memory- nanowerk
Computer memory
Molecular electronics
Nanoelectronics | Molecular memory | [
"Chemistry",
"Materials_science"
] | 452 | [
"Nanotechnology",
"Molecular physics",
"Molecular electronics",
"Nanoelectronics"
] |
2,331,297 | https://en.wikipedia.org/wiki/Critical%20radius | Critical radius is the minimum particle size from which an aggregate is thermodynamically stable. In other words, it is the lowest radius formed by atoms or molecules clustering together (in a gas, liquid or solid matrix) before a new phase inclusion (a bubble, a droplet or a solid particle) is viable and begins to grow. Formation of such stable nuclei is called nucleation.
At the beginning of the nucleation process, the system finds itself in an initial phase. Afterwards, the formation of aggregates or clusters from the new phase occurs gradually and randomly at the nanoscale. Subsequently, if the process is feasible, the nucleus is formed. Notice that the formation of aggregates is only possible under specific conditions; when these conditions are not satisfied, aggregates are rapidly created and annihilated, and nucleation and the subsequent crystal growth do not take place.
In precipitation models, nucleation is generally a prelude to models of the crystal growth process. Sometimes precipitation is rate-limited by the nucleation process. An example is a cup of superheated water taken from a microwave: when it is jiggled with a spoon or against the wall of the cup, heterogeneous nucleation occurs and part of the water rapidly converts into steam.
If the change in phase forms a crystalline solid in a liquid matrix, the atoms might then form a dendrite. The crystal growth continues in three dimensions, the atoms attaching themselves in certain preferred directions, usually along the axes of a crystal, forming a characteristic tree-like structure of a dendrite.
Mathematical derivation
The critical radius of a system can be determined from its Gibbs free energy.
It has two components, the volume energy \Delta G_V and the surface energy \Delta G_S. The first one describes how probable it is to have a phase change and the second one is the amount of energy needed to create an interface.
The mathematical expression of \Delta G_V, considering spherical particles of radius r, is given by:
\Delta G_V = \frac{4}{3} \pi r^3 \, \Delta g_v
where \Delta g_v is the Gibbs free energy per volume and obeys \Delta g_v < 0 below the fusion temperature. It is defined as the energy difference between one system at a certain temperature and the same system at the fusion temperature, and it depends on pressure, the number of particles and temperature: \Delta g_v(P, N, T). For a low temperature, far from the fusion point, this energy is big (it is more difficult to change the phase) and for a temperature close to the fusion point it is small (the system will tend to change its phase).
Regarding \Delta G_S, and considering spherical particles, its mathematical expression is given by:
\Delta G_S = 4 \pi r^2 \gamma
where \gamma is the surface tension we need to overcome to create a nucleus. The value of \Delta G_S is never negative, as it always takes energy to create an interface.
The total Gibbs free energy is therefore:
\Delta G = \Delta G_V + \Delta G_S = \frac{4}{3} \pi r^3 \, \Delta g_v + 4 \pi r^2 \gamma
The critical radius is found by optimization, setting the derivative of \Delta G with respect to r equal to zero,
\frac{d(\Delta G)}{dr} = 4 \pi r^2 \, \Delta g_v + 8 \pi r \gamma = 0,
yielding
r_c = \frac{2 \gamma}{|\Delta g_v|},
where \gamma is the surface tension and |\Delta g_v| is the absolute value of the Gibbs free energy per volume.
The Gibbs free energy of nuclear formation is found by substituting the critical radius expression into the general formula:
\Delta G_c = \frac{16 \pi \gamma^3}{3 |\Delta g_v|^2}.
Interpretation
When the Gibbs free energy change is positive, the nucleation process will not be favourable: the nanoparticle radius is small and the surface term \Delta G_S prevails over the volume term \Delta G_V. Conversely, if the change is negative, growth is thermodynamically favourable: the size of the cluster surpasses the critical radius and the volume term overcomes the surface term.
From the expression of the critical radius, as the Gibbs volume energy increases, the critical radius will decrease and hence it will be easier to form nuclei and begin the crystallization process.
Methods for reducing the critical radius
Supercooling
In order to decrease the value of the critical radius and promote nucleation, a supercooling or superheating process may be used.
Supercooling is a phenomenon in which the system's temperature is lowered below the phase transition temperature without the creation of the new phase. Let \Delta T = T_f - T be the temperature difference, where T_f is the phase transition (fusion) temperature. Let \Delta g_v, \Delta h_v and \Delta s_v be the volume Gibbs free energy, enthalpy and entropy respectively.
When T = T_f, the system has null volume Gibbs free energy, so:
\Delta g_v = \Delta h_v - T_f \, \Delta s_v = 0 \quad \Rightarrow \quad \Delta s_v = \frac{\Delta h_v}{T_f}
In general, the following approximations can be done:
\Delta h_v(T) \approx \Delta h_v(T_f) and \Delta s_v(T) \approx \Delta s_v(T_f)
Consequently:
\Delta g_v \approx \Delta h_v - T \, \frac{\Delta h_v}{T_f} = \frac{\Delta h_v}{T_f} (T_f - T)
So:
\Delta g_v \approx \frac{\Delta h_v \, \Delta T}{T_f}
Substituting this result in the expressions for r_c and \Delta G_c, the following equations are obtained:
r_c = \frac{2 \gamma T_f}{|\Delta h_v| \, \Delta T} \qquad \Delta G_c = \frac{16 \pi \gamma^3 T_f^2}{3 |\Delta h_v|^2 \, \Delta T^2}
Notice that r_c and \Delta G_c diminish with increasing supercooling. Analogously, a mathematical derivation for superheating can be done.
Supersaturation
Supersaturation is a phenomenon where the concentration of a solute exceeds the value of the equilibrium concentration.
From the definition of chemical potential, \mu = \mu_0 + k_B T \ln \frac{C}{C_0}, where k_B is the Boltzmann constant, C is the solute concentration and C_0 is the equilibrium concentration. For a stoichiometric compound, and considering \Delta \mu = \mu - \mu_0 and \Delta g_v = -\Delta\mu / \nu, where \nu is the atomic volume:
\Delta g_v = -\frac{k_B T}{\nu} \ln \frac{C}{C_0}
Defining the supersaturation as S = C / C_0, this can be rewritten as
\Delta g_v = -\frac{k_B T}{\nu} \ln S
Finally, the critical radius and the Gibbs free energy of nuclear formation can be obtained as
r_c = \frac{2 \gamma V_m}{R T \ln S} \qquad \Delta G_c = \frac{16 \pi \gamma^3 V_m^2}{3 (R T \ln S)^2},
where V_m is the molar volume and R is the molar gas constant.
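The relations above are simple to evaluate numerically. The following is a minimal sketch (not from the source) that computes the critical radius and nucleation barrier from the supercooling and supersaturation forms; all material parameters are assumed, order-of-magnitude placeholders:

import math

R = 8.314462618       # molar gas constant, J/(mol K)

def critical_radius_supercooling(gamma, dH_v, T_f, dT):
    """r_c = 2*gamma*T_f / (dH_v*dT), using Delta g_v ~ dH_v*dT/T_f.
    gamma in J/m^2, dH_v (enthalpy of fusion per unit volume) in J/m^3."""
    dg_v = dH_v * dT / T_f
    r_c = 2.0 * gamma / dg_v
    dG_c = 16.0 * math.pi * gamma**3 / (3.0 * dg_v**2)
    return r_c, dG_c

def critical_radius_supersaturation(gamma, V_m, T, S):
    """r_c = 2*gamma*V_m / (R*T*ln S), for a solution with supersaturation S > 1.
    V_m is the molar volume in m^3/mol."""
    dg_v = R * T * math.log(S) / V_m
    r_c = 2.0 * gamma / dg_v
    dG_c = 16.0 * math.pi * gamma**3 / (3.0 * dg_v**2)
    return r_c, dG_c

# Illustrative (assumed) numbers, roughly of the order of metallic solidification ...
r1, g1 = critical_radius_supercooling(gamma=0.18, dH_v=1.9e9, T_f=1356.0, dT=50.0)
# ... and of precipitation from a supersaturated solution:
r2, g2 = critical_radius_supersaturation(gamma=0.10, V_m=2.0e-5, T=298.0, S=5.0)
print(f"supercooling:    r_c = {r1*1e9:.2f} nm,  dG_c = {g1:.3e} J")
print(f"supersaturation: r_c = {r2*1e9:.2f} nm,  dG_c = {g2:.3e} J")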
See also
Nucleation
Ostwald ripening
Supercooling
Superheating
References
N.H.Fletcher, Size Effect in Heterogeneous Nucleation, J.Chem.Phys.29, 1958, 572.
Nguyen T. K. Thanh,* N. Maclean, and S. Mahiddine, Mechanisms of Nucleation and Growth of Nanoparticles in Solution, Chem. Rev. 2014, 114, 15, 7610-7630.
Critical phenomena
Phase transitions
Radii | Critical radius | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,122 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Phases of matter",
"Condensed matter physics",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
2,331,527 | https://en.wikipedia.org/wiki/Time-dependent%20density%20functional%20theory | Time-dependent density-functional theory (TDDFT) is a quantum mechanical theory used in physics and chemistry to investigate the properties and dynamics of many-body systems in the presence of time-dependent potentials, such as electric or magnetic fields. The effect of such fields on molecules and solids can be studied with TDDFT to extract features like excitation energies, frequency-dependent response properties, and photoabsorption spectra.
TDDFT is an extension of density-functional theory (DFT), and the conceptual and computational foundations are analogous – to show that the (time-dependent) wave function is equivalent to the (time-dependent) electronic density, and then to derive the effective potential of a fictitious non-interacting system which returns the same density as any given interacting system. The issue of constructing such a system is more complex for TDDFT, most notably because the time-dependent effective potential at any given instant depends on the value of the density at all previous times. Consequently, the development of time-dependent approximations for the implementation of TDDFT is behind that of DFT, with applications routinely ignoring this memory requirement.
Overview
The formal foundation of TDDFT is the Runge–Gross (RG) theorem (1984) – the time-dependent analogue of the Hohenberg–Kohn (HK) theorem (1964). The RG theorem shows that, for a given initial wavefunction, there is a unique mapping between the time-dependent external potential of a system and its time-dependent density. This implies that the many-body wavefunction, depending upon 3N variables, is equivalent to the density, which depends upon only 3, and that all properties of a system can thus be determined from knowledge of the density alone. Unlike in DFT, there is no general minimization principle in time-dependent quantum mechanics. Consequently, the proof of the RG theorem is more involved than the HK theorem.
Given the RG theorem, the next step in developing a computationally useful method is to determine the fictitious non-interacting system which has the same density as the physical (interacting) system of interest. As in DFT, this is called the (time-dependent) Kohn–Sham system. This system is formally found as the stationary point of an action functional defined in the Keldysh formalism.
The most popular application of TDDFT is in the calculation of the energies of excited states of isolated systems and, less commonly, solids. Such calculations are based on the fact that the linear response function – that is, how the electron density changes when the external potential changes – has poles at the exact excitation energies of a system. Such calculations require, in addition to the exchange-correlation potential, the exchange-correlation kernel – the functional derivative of the exchange-correlation potential with respect to the density.
Formalism
Runge–Gross theorem
The approach of Runge and Gross considers a single-component system in the presence of a time-dependent scalar field for which the Hamiltonian takes the form
\hat{H}(t) = \hat{T} + \hat{W} + \hat{V}_{\mathrm{ext}}(t), where T is the kinetic energy operator, W the electron-electron interaction, and Vext(t) the external potential which, along with the number of electrons, defines the system. Nominally, the external potential contains the electrons' interaction with the nuclei of the system. For non-trivial time-dependence, an additional explicitly time-dependent potential is present which can arise, for example, from a time-dependent electric or magnetic field. The many-body wavefunction evolves according to the time-dependent Schrödinger equation under a single initial condition,
i \frac{\partial \Psi(t)}{\partial t} = \hat{H}(t) \, \Psi(t), \qquad \Psi(0) = \Psi_0.
Employing the Schrödinger equation as its starting point, the Runge–Gross theorem shows that at any time, the density uniquely determines the external potential. This is done in two steps:
Assuming that the external potential can be expanded in a Taylor series about a given time, it is shown that two external potentials differing by more than an additive constant generate different current densities.
Employing the continuity equation, it is then shown that for finite systems, different current densities correspond to different electron densities.
Time-dependent Kohn–Sham system
For a given interaction potential, the RG theorem shows that the external potential uniquely determines the density. The Kohn–Sham approach chooses a non-interacting system (that for which the interaction potential is zero) in which to form the density that is equal to the interacting system. The advantage of doing so lies in the ease with which non-interacting systems can be solved – the wave function of a non-interacting system can be represented as a Slater determinant of single-particle orbitals, each of which is determined by a single partial differential equation in three variables – and in the fact that the kinetic energy of a non-interacting system can be expressed exactly in terms of those orbitals. The problem is thus to determine a potential, denoted as vs(r,t) or vKS(r,t), that determines a non-interacting Hamiltonian, Hs,
\hat{H}_s(t) = \hat{T} + \hat{V}_s(t),
which in turn determines a determinantal wave function
\Phi(t) = \frac{1}{\sqrt{N!}} \det\left[ \varphi_1(t) \, \varphi_2(t) \cdots \varphi_N(t) \right],
which is constructed in terms of a set of N orbitals which obey the equation,
i \frac{\partial}{\partial t} \varphi_i(\mathbf{r},t) = \left( -\frac{\nabla^2}{2} + v_s(\mathbf{r},t) \right) \varphi_i(\mathbf{r},t),
and generate a time-dependent density
\rho_s(\mathbf{r},t) = \sum_i f_i \, |\varphi_i(\mathbf{r},t)|^2,
such that ρs is equal to the density of the interacting system at all times:
\rho_s(\mathbf{r},t) = \rho(\mathbf{r},t).
Note that in the expression of the density above, the summation is over all Kohn–Sham orbitals, and f_i(t) is the time-dependent occupation number for orbital \varphi_i. If the potential vs(r,t) can be determined, or at least well approximated, then the original Schrödinger equation, a single partial differential equation in 3N variables, has been replaced by N differential equations in 3 dimensions, each differing only in the initial condition.
The problem of determining approximations to the Kohn–Sham potential is challenging. Analogously to DFT, the time-dependent KS potential is decomposed to extract the external potential of the system and the time-dependent Coulomb interaction, vJ. The remaining component is the exchange-correlation potential:
v_{\mathrm{xc}}(\mathbf{r},t) = v_{\mathrm{KS}}(\mathbf{r},t) - v_{\mathrm{ext}}(\mathbf{r},t) - v_J(\mathbf{r},t).
In their seminal paper, Runge and Gross approached the definition of the KS potential through an action-based argument starting from the Dirac action
A[\Psi] = \int_{t_0}^{t_1} dt \, \left\langle \Psi(t) \left| \, i \frac{\partial}{\partial t} - \hat{H}(t) \, \right| \Psi(t) \right\rangle.
Treated as a functional of the wave function, A[Ψ], variations of the wave function yield the many-body Schrödinger equation as the stationary point. Given the unique mapping between densities and wave function, Runge and Gross then treated the Dirac action as a density functional,
and derived a formal expression for the exchange-correlation component of the action, which determines the exchange-correlation potential by functional differentiation. Later it was observed that an approach based on the Dirac action yields paradoxical conclusions when considering the causality of the response functions it generates. The density response function, the functional derivative of the density with respect to the external potential, should be causal: a change in the potential at a given time can not affect the density at earlier times. The response functions from the Dirac action however are symmetric in time so lack the required causal structure. An approach which does not suffer from this issue was later introduced through an action based on the Keldysh formalism of complex-time path integration. An alternative resolution of the causality paradox through a refinement of the action principle in real time has been recently proposed by Vignale.
Linear response TDDFT
Linear-response TDDFT can be used if the external perturbation is small in the sense that it does not completely destroy the ground-state structure of the system. In this case one can analyze the linear response of the system. This is a great advantage as, to first order, the variation of the system will depend only on the ground-state wave-function so that we can simply use all the properties of DFT.
Consider a small time-dependent external perturbation \delta v_{\mathrm{ext}}(\mathbf{r},t).
This gives
and looking at the linear response of the density
where
Here and in the following it is assumed that primed variables are integrated.
Within the linear-response domain, the variation of the Hartree (H) and the exchange-correlation (xc) potential to linear order may be expanded with respect to the density variation
\delta v_H(\mathbf{r},t) = \int d\mathbf{r}' \, \frac{\delta \rho(\mathbf{r}',t)}{|\mathbf{r} - \mathbf{r}'|}
and
\delta v_{\mathrm{xc}}(\mathbf{r},t) = \int dt' \int d\mathbf{r}' \, f_{\mathrm{xc}}(\mathbf{r} t, \mathbf{r}' t') \, \delta \rho(\mathbf{r}',t').
Finally, inserting this relation in the response equation for the KS system and comparing the resultant equation with the response equation for the physical system yields the Dyson equation of TDDFT:
\chi(\mathbf{r}, \mathbf{r}', \omega) = \chi_{\mathrm{KS}}(\mathbf{r}, \mathbf{r}', \omega) + \int d\mathbf{r}_1 \int d\mathbf{r}_2 \, \chi_{\mathrm{KS}}(\mathbf{r}, \mathbf{r}_1, \omega) \left[ \frac{1}{|\mathbf{r}_1 - \mathbf{r}_2|} + f_{\mathrm{xc}}(\mathbf{r}_1, \mathbf{r}_2, \omega) \right] \chi(\mathbf{r}_2, \mathbf{r}', \omega)
From this last equation it is possible to derive the excitation energies of the system, as these are simply the poles of the response function.
Other linear-response approaches include the Casida formalism (an expansion in electron-hole pairs) and the Sternheimer equation (density-functional perturbation theory).
Key papers
Books on TDDFT
TDDFT codes
ELK
Firefly
GAMESS-US
Gaussian
Amsterdam Density Functional
deMon2k
CP2K
Dalton
NWChem
Octopus
pw-teleman library
PARSEC
Qbox/Qb@ll
Q-Chem
Spartan
TeraChem
TURBOMOLE
YAMBO code
ORCA
Jaguar
GPAW
ONETEP
VASP
Quantum ESPRESSO
References
External links
tddft.org
Brief introduction of TD-DFT
Density functional theory
Computational chemistry
Computational physics
Quantum chemistry
Theoretical chemistry | Time-dependent density functional theory | [
"Physics",
"Chemistry"
] | 1,864 | [
"Density functional theory",
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Theoretical chemistry",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
2,332,188 | https://en.wikipedia.org/wiki/Iodine%20pentafluoride | Iodine pentafluoride is an interhalogen compound with chemical formula IF5. It is one of the fluorides of iodine. It is a colorless liquid, although impure samples appear yellow. It is used as a fluorination reagent and even a solvent in specialized syntheses.
Preparation
It was first synthesized by Henri Moissan in 1891 by burning solid iodine in fluorine gas. This exothermic reaction is still used to produce iodine pentafluoride, although the reaction conditions have been improved.
I2 + 5 F2 → 2 IF5
Reactions
IF5 reacts vigorously with water forming hydrofluoric acid and iodic acid:
IF5 + 3 H2O → HIO3 + 5 HF
Upon treatment with fluorine, it converts to iodine heptafluoride:
IF5 + F2 → IF7
It has been used as a solvent for handling metal fluorides. For example, the reduction of osmium hexafluoride to osmium pentafluoride with iodine is conducted in a solution in iodine pentafluoride:
10 OsF6 + I2 → 10 OsF5 + 2 IF5
Primary amines react with iodine pentafluoride forming nitriles after hydrolysis.
References
Further reading
External links
WebBook page for IF5
National Pollutant Inventory - fluoride and compounds fact sheet
web elements listing
Fluorides
Iodine compounds
Interhalogen compounds
Fluorinating agents
Oxidizing agents
Inorganic solvents | Iodine pentafluoride | [
"Chemistry"
] | 322 | [
"Redox",
"Interhalogen compounds",
"Oxidizing agents",
"Salts",
"Fluorinating agents",
"Reagents for organic chemistry",
"Fluorides"
] |
2,332,266 | https://en.wikipedia.org/wiki/Iodine%20heptafluoride | Iodine heptafluoride is an interhalogen compound with the chemical formula IF7. It has an unusual pentagonal bipyramidal structure, with D5h symmetry, as predicted by VSEPR theory. The molecule can undergo a pseudorotational rearrangement called the Bartell mechanism, which is like the Berry mechanism but for a heptacoordinated system.
Below 4.5 °C, IF7 forms a snow-white powder of colorless crystals, melting at 5-6 °C. However, this melting is difficult to observe, as the liquid form is thermodynamically unstable at 760 mmHg: instead, the compound begins to sublime at 4.77 °C. The dense vapor has a mouldy, acrid odour.
Preparation
IF7 is prepared by passing F2 through liquid IF5 at 90 °C, then heating the vapours to 270 °C. Alternatively, this compound can be prepared from fluorine and dried palladium or potassium iodide to minimize the formation of IOF5, an impurity arising by hydrolysis. Iodine heptafluoride is also produced as a by-product when dioxygenyl hexafluoroplatinate is used to prepare other platinum(V) compounds such as potassium hexafluoroplatinate(V), using potassium fluoride in iodine pentafluoride solution:
2 O2PtF6 + 2 KF + IF5 → 2 KPtF6 + 2 O2 + IF7
Reactions
Iodine heptafluoride decomposes at 200 °C to fluorine gas and iodine pentafluoride.
Safety considerations
IF7 is highly irritating to both the skin and the mucous membranes. It also is a strong oxidizer and can cause fire on contact with organic material.
References
Common sources
External links
WebBook page for IF7
National Pollutant Inventory - Fluoride and compounds fact sheet
web elements listing
Fluorides
Iodine compounds
Interhalogen compounds
Oxidizing agents
Hypervalent molecules | Iodine heptafluoride | [
"Physics",
"Chemistry"
] | 427 | [
"Redox",
"Molecules",
"Interhalogen compounds",
"Oxidizing agents",
"Salts",
"Hypervalent molecules",
"Fluorides",
"Matter"
] |
2,332,569 | https://en.wikipedia.org/wiki/Indicator%20diagram | An indicator diagram is a chart used to measure the thermal, or cylinder, performance of reciprocating steam and internal combustion engines and compressors. An indicator chart records the pressure in the cylinder versus the volume swept by the piston, throughout the two or four strokes of the piston which constitute the engine, or compressor, cycle. The indicator diagram is used to calculate the work done and the power produced in an engine cylinder or used in a compressor cylinder.
The indicator diagram was developed by James Watt and his employee John Southern to help understand how to improve the efficiency of steam engines. In 1796, Southern developed the simple, but critical, technique to generate the diagram by fixing a board so as to move with the piston, thereby tracing the "volume" axis, while a pencil, attached to a pressure gauge, moved at right angles to the piston, tracing "pressure".
The indicator diagram constitutes one of the earliest examples of statistical graphics. It may be significant that Watt and Southern developed the indicator diagram at roughly the same time that William Playfair (a former Boulton & Watt employee who continued an amicable correspondence with Watt) published The Commercial and Political Atlas, a book often cited as the first to employ statistical graphics.
The gauge enabled Watt to calculate the work done by the steam while ensuring that its pressure had dropped to zero by the end of the stroke, thereby ensuring that all useful energy had been extracted. The total work could be calculated from the area between the "volume" axis and the traced line. The latter fact had been realised by Davies Gilbert as early as 1792 and used by Jonathan Hornblower in litigation against Watt over patents on various designs. Daniel Bernoulli had also had the insight about how to calculate work.
Watt used the diagram to make radical improvements to steam engine performance and long kept it a trade secret. Though it was made public in a letter to the Quarterly Journal of Science in 1822, it remained somewhat obscure; John Farey, Jr. only learned of it on seeing it used, probably by Watt's men, when he visited Russia in 1826.
In 1834, Émile Clapeyron used a diagram of pressure against volume to illustrate and elucidate the Carnot cycle, elevating it to a central position in the study of thermodynamics.
Later instruments for steam engine (illus.) used paper wrapped around a cylindrical barrel with a pressure piston inside it, the rotation of the barrel coupled to the piston crosshead by a weight- or spring-tensioned wire.
In 1869 the British marine engineer Nicholas Procter Burgh wrote a full book on the indicator diagram explaining the device step by step. He had noticed that "a very large proportion of the young members of the engineering profession look at an indicator diagram as a mysterious production."
Indicators developed for steam engines were improved for internal combustion engines with their rapid changes in pressure, resulting from combustion, and higher speeds. In addition to using indicator diagrams for calculating power they are used to understand the ignition, injection timing and combustion events which occur near dead-center, when the engine piston and indicator drum are hardly moving. Much better information during this part of the cycle is obtained by offsetting the indicator motion by 90 degrees to the engine crank, giving an offset indicator diagram. The events are recorded when the velocity of the drum is near its maximum and are shown against crank-angle instead of stroke.
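The work represented by one traversal of a closed indicator trace is the area it encloses, i.e. the loop integral of p dV. The following is a minimal sketch (not from the source; the sampled cycle is synthetic) that approximates this area with the trapezoidal rule:

def indicated_work(volumes, pressures):
    """Net work of a closed cycle from sampled (V, p) points, in joules.

    The points are assumed to trace the cycle in order; the trapezoidal
    rule approximates the loop integral of p dV (positive if traversed
    clockwise on a p-V diagram, i.e. the cycle produces work).
    """
    n = len(volumes)
    work = 0.0
    for i in range(n):
        j = (i + 1) % n            # wrap around to close the loop
        work += 0.5 * (pressures[i] + pressures[j]) * (volumes[j] - volumes[i])
    return work

# Synthetic four-point cycle (all values illustrative, SI units):
V = [1.0e-3, 2.0e-3, 2.0e-3, 1.0e-3]     # m^3
p = [5.0e5,  5.0e5,  1.0e5,  1.0e5]      # Pa
print(f"indicated work per cycle: {indicated_work(V, p):.1f} J")   # 400.0 J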
See also
Pressure–volume diagram
Temperature–entropy diagram
Thermodynamic cycle
References
Bibliography
Pacey, A.J. & Fisher, S.J. (1967) "Daniel Bernoulli and the vis viva of compressed air", The British Journal for the History of Science 3 (4), pp. 388–392,
British Transport Commission (1957) Handbook for Railway Steam Locomotive Enginemen, London : B.T.C., p. 81, (facsimile copy publ. Ian Allan (1977), )
External links
Energy conversion
Piston engines
Steam power
Thermodynamics
Diagrams | Indicator diagram | [
"Physics",
"Chemistry",
"Mathematics",
"Technology"
] | 815 | [
"Physical quantities",
"Engines",
"Piston engines",
"Steam power",
"Power (physics)",
"Thermodynamics",
"Dynamical systems"
] |
7,570,573 | https://en.wikipedia.org/wiki/Interval%20scheduling | Interval scheduling is a class of problems in computer science, particularly in the area of algorithm design. The problems consider a set of tasks. Each task is represented by an interval describing the time in which it needs to be processed by some machine (or, equivalently, scheduled on some resource). For instance, task A might run from 2:00 to 5:00, task B might run from 4:00 to 10:00 and task C might run from 9:00 to 11:00. A subset of intervals is compatible if no two intervals overlap on the machine/resource. For example, the subset {A,C} is compatible, as is the subset {B}; but neither {A,B} nor {B,C} are compatible subsets, because the corresponding intervals within each subset overlap.
The interval scheduling maximization problem (ISMP) is to find a largest compatible set, i.e., a set of non-overlapping intervals of maximum size. The goal here is to execute as many tasks as possible, that is, to maximize the throughput. It is equivalent to finding a maximum independent set in an interval graph.
A generalization of the problem considers k machines/resources. Here the goal is to find k compatible subsets whose union is the largest.
In an upgraded version of the problem, the intervals are partitioned into groups. A subset of intervals is compatible if no two intervals overlap, and moreover, no two intervals belong to the same group (i.e., the subset contains at most a single representative of each group). Each group of intervals corresponds to a single task, and represents several alternative intervals in which it can be executed.
The group interval scheduling decision problem (GISDP) is to decide whether there exists a compatible set in which all groups are represented. The goal here is to execute a single representative task from each group. GISDPk is a restricted version of GISDP in which the number of intervals in each group is at most k.
The group interval scheduling maximization problem (GISMP) is to find a largest compatible set - a set of non-overlapping representatives of maximum size. The goal here is to execute a representative task from as many groups as possible. GISMPk is a restricted version of GISMP in which the number of intervals in each group is at most k. This problem is often called JISPk, where J stands for Job.
GISMP is the most general problem; the other two problems can be seen as special cases of it:
ISMP is the special case in which each task belongs to its own group (i.e. it is equal to GISMP1).
GISDP is the problem of deciding whether the maximum exactly equals the number of groups.
All these problems can be generalized by adding a weight for each interval, representing the profit from executing the task in that interval. Then, the goal is to maximize the total weight.
All these problems are special cases of single-machine scheduling, since they assume that all tasks must run on a single processor. Single-machine scheduling is a special case of optimal job scheduling.
Single-Interval Scheduling Maximization
Single-interval scheduling refers to creating an interval schedule in which no intervals overlap.
Unweighted
Several algorithms, that may look promising at first sight, actually do not find the optimal solution:
Selecting the intervals that start earliest is not an optimal solution, because if the earliest interval happens to be very long, accepting it would make us reject many other shorter requests.
Selecting the shortest intervals or selecting intervals with the fewest conflicts is also not optimal.
The following greedy algorithm, called Earliest deadline first scheduling, does find the optimal solution for unweighted single-interval scheduling:
Select the interval, x, with the earliest finishing time.
Remove x, and all intervals intersecting x, from the set of candidate intervals.
Repeat until the set of candidate intervals is empty.
Whenever we select an interval at step 1, we may have to remove many intervals in step 2. However, all these intervals necessarily cross the finishing time of x, and thus they all cross each other. Hence, at most 1 of these intervals can be in the optimal solution. Hence, for every interval in the optimal solution, there is an interval in the greedy solution. This proves that the greedy algorithm indeed finds an optimal solution.
A more formal explanation is given by a Charging argument.
The greedy algorithm can be executed in time O(n log n), where n is the number of tasks, using a preprocessing step in which the tasks are sorted by their finishing times.
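A direct implementation of this earliest-finish-time rule, as a sketch in Python (function and variable names are illustrative, not from the source):

def max_compatible_intervals(intervals):
    """Earliest-deadline-first greedy for unweighted interval scheduling.

    `intervals` is a list of (start, finish) pairs; returns a largest
    subset of pairwise non-overlapping intervals. Runs in O(n log n).
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Tasks A (2-5), B (4-10), C (9-11) from the introduction:
print(max_compatible_intervals([(2, 5), (4, 10), (9, 11)]))   # [(2, 5), (9, 11)]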
Weighted
Problems involving weighted interval scheduling are equivalent to finding a maximum-weight independent set in an interval graph. Such problems can be solved in polynomial time.
Assuming the vectors are sorted from earliest to latest finish time, the following pseudocode determines the maximum weight of a single-interval schedule in Θ(n) time:
// The vectors are already sorted from earliest to latest finish time.
int v[numOfVectors + 1]; // list of interval vectors
int w[numOfVectors + 1]; // w[j] is the weight for v[j].
int p[numOfVectors + 1]; // p[j] is the # of vectors that end before v[j] begins.
int M[numOfVectors + 1];
int finalSchedule[];
// v[0] does not exist, and the first interval vector is assigned to v[1].
w[0] = 0; p[0] = 0; M[0] = 0;
// The following code determines the value of M for each vector.
// The maximum weight of the schedule is equal to M[numOfVectors].
for (int i = 1; i < numOfVectors + 1; i++) {
M[i] = max(w[i] + M[p[i]], M[i - 1]);
}
// Function to construct the optimal schedule
schedule (j) {
if (j == 0) { return; }
else if (w[j] + M[p[j]] >= M[j - 1]){
prepend(v[j], finalSchedule); // prepends v[j] to schedule.
schedule(p[j]);
} else { schedule(j - 1); }
}
Example
If we have the following 9 vectors sorted by finish time, with the weights above each corresponding interval, we can determine which of these vectors are included in our maximum weight schedule which only contains a subset of the following vectors.
Here, we input our final vector (where j=9 in this example) into our schedule function from the code block above. We perform the actions in the table below until j is set to 0, at which point, we only include into our final schedule the encountered intervals which met the requirement. This final schedule is the schedule with the maximum weight.
Group Interval Scheduling Decision
NP-complete when some groups contain 3 or more intervals
GISDPk is NP-complete when k ≥ 3, even when all intervals have the same length. This can be shown by a reduction from the following version of the Boolean satisfiability problem, which, like the unrestricted version, is NP-complete.
Let be a set of Boolean variables. Let be a set of clauses over X such that (1) each clause in C has at most three literals and (2) each variable is restricted to appear once or twice positively and once negatively overall in C. Decide whether there is an assignment to variables of X such that each clause in C has at least one true literal.
Given an instance of this satisfiability problem, construct the following instance of GISDP. All intervals have a length of 3, so it is sufficient to represent each interval by its starting time:
For every variable (for ), create a group with two intervals: one starting at (representing the assignment ) and another starting at (representing the assignment ).
For every clause (for ), create a group with the following intervals:
For every variable that appears positively for the first time in C an interval starting at .
For every variable that appears positively for the second time in C an interval starting at . Note that both these intervals intersect the interval , associated with .
For every variable that appears negatively - an interval starting at . This interval intersects the interval associated with .
Note that there is no overlap between intervals in groups associated with different clauses. This is ensured since a variable appears at most twice positively and once negatively.
The constructed GISDP has a feasible solution (i.e. a scheduling in which each group is represented) if and only if the given set of Boolean clauses has a satisfying assignment. Hence GISDP3 is NP-complete, and so is GISDPk for every k ≥ 3.
Polynomial when all groups contain at most 2 intervals
GISDP2 can be solved at polynomial time by the following reduction to the 2-satisfiability problem:
For every group i create two variables, representing its two intervals: and .
For every group i, create the clauses: and , which represent the assertion that exactly one of these two intervals should be selected.
For every two intersecting intervals (i.e. and ) create the clause: , which represent the assertion that at most one of these two intervals should be selected.
This construction contains at most O(n2) clauses (one for each intersection between intervals, plus two for each group). Each clause contains 2 literals. The satisfiability of such formulas can be decided in time linear in the number of clauses (see 2-SAT). Therefore, the GISDP2 can be solved in polynomial time.
Group Interval Scheduling Maximization
MaxSNP-complete when some groups contain 2 or more intervals
GISMPk is NP-complete even when k = 2.
Moreover, GISMPk is MaxSNP-complete, i.e., it does not have a PTAS unless P=NP. This can be proved by showing an approximation-preserving reduction from MAX 3-SAT-3 to GISMP2.
Polynomial 2-approximation
The following greedy algorithm finds a solution that contains at least 1/2 of the optimal number of intervals:
Select the interval, x, with the earliest finishing time.
Remove x, and all intervals intersecting x, and all intervals in the same group of x, from the set of candidate intervals.
Continue until the set of candidate intervals is empty.
A formal explanation is given by a Charging argument.
The approximation factor of 2 is tight. For example, in the following instance of GISMP2:
Group #1: {[0..2], [4..6]}
Group #2: {[1..3]}
The greedy algorithm selects only 1 interval [0..2] from group #1, while an optimal scheduling is to select [1..3] from group #2 and then [4..6] from group #1.
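A sketch of the greedy rule in Python (function and data-layout names are illustrative); running it on the instance above reproduces the tight factor-2 behaviour just described:

```python
def greedy_gismp(groups):
    """Greedy 1/2-approximation for group interval scheduling (GISMP).

    groups: list of groups, each a list of (start, end) intervals.
    Returns the selected intervals as (group_index, start, end) triples.
    """
    candidates = [(g, s, e) for g, ivs in enumerate(groups) for (s, e) in ivs]
    schedule = []
    while candidates:
        # Pick the candidate with the earliest finishing time.
        g, s, e = min(candidates, key=lambda iv: iv[2])
        schedule.append((g, s, e))
        # Discard intervals that intersect the pick or share its group.
        candidates = [(g2, s2, e2) for (g2, s2, e2) in candidates
                      if g2 != g and not (s2 < e and s < e2)]
    return schedule

# The tight example from the text: greedy picks only [0..2] from group #1,
# while the optimum selects [1..3] from group #2 and [4..6] from group #1.
print(greedy_gismp([[(0, 2), (4, 6)], [(1, 3)]]))   # -> [(0, 0, 2)]
```

The optimum for this instance contains two intervals, so the greedy value of 1 meets the 1/2 bound with equality.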
A more general approximation algorithm attains a 2-factor approximation for the weighted case.
LP-based approximation algorithms
Using the technique of Linear programming relaxation, it is possible to approximate the optimal scheduling with slightly better approximation factors. The approximation ratio of the first such algorithm is asymptotically 2 when k is large, but when k=2 the algorithm achieves an approximation ratio of 5/3. The approximation factor for arbitrary k was later improved to 1.582.
Related problems
An interval scheduling problem can be described by an intersection graph, where each vertex is an interval, and there is an edge between two vertices if and only if their intervals overlap. In this representation, the interval scheduling problem is equivalent to finding the maximum independent set in this intersection graph. Finding a maximum independent set is NP-hard in general graphs, but it can be done in polynomial time in the special case of interval graphs, i.e. the intersection graphs of intervals (ISMP).
A group-interval scheduling problem (GISMPk) can be described by a similar interval-intersection graph, with additional edges between each two intervals of the same group, i.e., this is the edge union of an interval graph and a graph consisting of n disjoint cliques of size k.
Variations
An important class of scheduling algorithms is the class of dynamic priority algorithms. When none of the intervals overlap, the optimum solution is trivial. The optimum for the non-weighted version can be found with earliest deadline first scheduling. Weighted interval scheduling is a generalization where a value is assigned to each executed task and the goal is to maximize the total value. The solution need not be unique.
The interval scheduling problem is 1-dimensional – only the time dimension is relevant. The Maximum disjoint set problem is a generalization to 2 or more dimensions. This generalization, too, is NP-complete.
Another variation is resource allocation, in which a set of intervals is scheduled using k resources such that k is minimized. That is, all the intervals must be scheduled, but the objective is to minimize the number of resources used.
Another variation is when there are m processors instead of a single processor. I.e., m different tasks can run in parallel. See identical-machines scheduling.
Single-machine scheduling is also a very similar problem.
Sources
Optimal scheduling
NP-complete problems | Interval scheduling | [
"Mathematics",
"Engineering"
] | 2,760 | [
"Optimal scheduling",
"Industrial engineering",
"Computational problems",
"Mathematical problems",
"NP-complete problems"
] |
9,808,551 | https://en.wikipedia.org/wiki/Mass%20diffusivity | Diffusivity, mass diffusivity or diffusion coefficient is usually written as the proportionality constant between the molar flux due to molecular diffusion and the negative value of the gradient in the concentration of the species. More accurately, the diffusion coefficient times the local concentration is the proportionality constant between the negative value of the mole fraction gradient and the molar flux. This distinction is especially significant in gaseous systems with strong temperature gradients. Diffusivity derives its definition from Fick's law and plays a role in numerous other equations of physical chemistry.
The diffusivity is generally prescribed for a given pair of species and pairwise for a multi-species system. The higher the diffusivity (of one substance with respect to another), the faster they diffuse into each other. Typically, a compound's diffusion coefficient is ~10,000× as great in air as in water. Carbon dioxide in air has a diffusion coefficient of 16 mm2/s, and in water its diffusion coefficient is 0.0016 mm2/s.
Diffusivity has dimensions of length2 / time, or m2/s in SI units and cm2/s in CGS units.
Temperature dependence of the diffusion coefficient
Solids
The diffusion coefficient in solids at different temperatures is generally found to be well predicted by the Arrhenius equation D = D0 exp(-EA / (R T)),
where
D is the diffusion coefficient (in m2/s),
D0 is the maximal diffusion coefficient (at infinite temperature; in m2/s),
EA is the activation energy for diffusion (in J/mol),
T is the absolute temperature (in K),
R ≈ 8.31446 J/(mol⋅K) is the universal gas constant.
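A small numerical illustration of this temperature dependence; the D0 and EA values below are placeholders chosen only for the example, not data for any particular material:

```python
import math

R = 8.31446  # universal gas constant, J/(mol*K)

def arrhenius_diffusivity(d0, ea, temperature):
    """Solid-state diffusion coefficient D = D0 * exp(-EA / (R * T))."""
    return d0 * math.exp(-ea / (R * temperature))

# Placeholder parameters: D0 = 1e-4 m^2/s, EA = 150 kJ/mol.
for t in (800.0, 1000.0, 1200.0):
    print(t, arrhenius_diffusivity(1e-4, 150.0e3, t))
```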
Diffusion in crystalline solids, termed lattice diffusion, is commonly regarded to occur by two distinct mechanisms: interstitial and substitutional (vacancy) diffusion. The former mechanism describes diffusion as the motion of the diffusing atoms between interstitial sites in the lattice of the solid they are diffusing into; the latter describes diffusion through a mechanism more analogous to that in liquids or gases: any crystal at nonzero temperature will have a certain number of vacancy defects (i.e. empty sites on the lattice) due to the random vibrations of atoms on the lattice, and an atom neighbouring a vacancy can spontaneously "jump" into the vacancy, such that the vacancy appears to move. By this process the atoms in the solid can move and diffuse into each other. Of the two mechanisms, interstitial diffusion is typically more rapid.
Liquids
An approximate dependence of the diffusion coefficient on temperature in liquids can often be found using the Stokes–Einstein equation, which predicts that D(T1) / D(T2) = (T1 / T2) · (μ(T2) / μ(T1)),
where
D is the diffusion coefficient,
T1 and T2 are the corresponding absolute temperatures,
μ is the dynamic viscosity of the solvent.
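A sketch of rescaling a measured liquid-phase diffusivity from T1 to T2 with this relation; the viscosity and diffusivity inputs are illustrative, and in practice μ(T) would come from tabulated data:

```python
def rescale_liquid_diffusivity(d_t1, t1, t2, mu_t1, mu_t2):
    """Stokes-Einstein scaling: D(T2) = D(T1) * (T2 / T1) * (mu(T1) / mu(T2))."""
    return d_t1 * (t2 / t1) * (mu_t1 / mu_t2)

# Illustrative inputs only: a 1e-9 m^2/s solute at 293 K, water-like
# viscosities of 1.0e-3 Pa*s (293 K) and 0.47e-3 Pa*s (333 K).
print(rescale_liquid_diffusivity(1.0e-9, 293.0, 333.0, 1.0e-3, 0.47e-3))
```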
Gases
The dependence of the diffusion coefficient on temperature for gases can be expressed using Chapman–Enskog theory (predictions accurate on average to about 8%): D = A·T^(3/2)·(1/M1 + 1/M2)^(1/2) / (p·σ12^2·Ω),
where
D is the diffusion coefficient (cm2/s),
A is approximately (with Boltzmann constant , and Avogadro constant )
1 and 2 index the two kinds of molecules present in the gaseous mixture,
T is the absolute temperature (K),
M is the molar mass (g/mol),
p is the pressure (atm),
is the average collision diameter (the values are tabulated on page 545) (Å),
Ω is a temperature-dependent collision integral (the values tabulated for some intermolecular potentials, can be computed from correlations for others, or must be evaluated numerically.) (dimensionless).
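A sketch of evaluating the Chapman–Enskog expression numerically. The prefactor 1.858e-3 is the commonly quoted value of A for these units and is an assumption here, as are the σ and Ω inputs, which would normally be taken from the tables mentioned above:

```python
import math

def chapman_enskog_diffusivity(temperature, m1, m2, pressure, sigma12, omega,
                               a=1.858e-3):
    """Binary gas-phase diffusion coefficient in cm^2/s.

    temperature in K, molar masses m1/m2 in g/mol, pressure in atm,
    sigma12 (average collision diameter) in angstrom, omega dimensionless.
    """
    return (a * temperature ** 1.5 * math.sqrt(1.0 / m1 + 1.0 / m2)
            / (pressure * sigma12 ** 2 * omega))

# Illustrative inputs, roughly CO2 in air at 298 K and 1 atm; the result is
# on the order of 0.15 cm^2/s, consistent with the 16 mm^2/s quoted earlier.
print(chapman_enskog_diffusivity(298.0, 44.0, 29.0, 1.0, 3.9, 1.0))
```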
The relation
is obtained when inserting the ideal gas law into the expression obtained directly from Chapman-Enskog theory, which may be written as
where is the molar density (mol/m3) of the gas, and
,
with the universal gas constant. At moderate densities (i.e. densities at which the gas has a non-negligible co-volume, but is still sufficiently dilute to be considered as gas-like rather than liquid-like) this simple relation no longer holds, and one must resort to Revised Enskog Theory. Revised Enskog Theory predicts a diffusion coefficient that decreases somewhat more rapidly with density, and which to a first approximation may be written as
where is the radial distribution function evaluated at the contact diameter of the particles. For molecules behaving like hard, elastic spheres, this value can be computed from the Carnahan-Starling Equation, while for more realistic intermolecular potentials such as the Mie potential or Lennard-Jones potential, its computation is more complex, and may involve invoking a thermodynamic perturbation theory, such as SAFT.
Pressure dependence of the diffusion coefficient
For self-diffusion in gases at two different pressures (but the same temperature), the following empirical equation has been suggested:
where
D is the diffusion coefficient,
ρ is the gas mass density,
P1 and P2 are the corresponding pressures.
Population dynamics: dependence of the diffusion coefficient on fitness
In population dynamics, kinesis is the change of the diffusion coefficient in response to the change of conditions. In models of purposeful kinesis, diffusion coefficient depends on fitness (or reproduction coefficient) r:
where is constant and r depends on population densities and abiotic characteristics of the living conditions. This dependence is a formalisation of the simple rule: animals stay longer in good conditions and leave bad conditions more quickly (the "Let well enough alone" model).
Effective diffusivity in porous media
The effective diffusion coefficient describes diffusion through the pore space of porous media. It is macroscopic in nature, because it is not individual pores but the entire pore space that needs to be considered. The effective diffusion coefficient for transport through the pores, De, is estimated as follows:
where
D is the diffusion coefficient in gas or liquid filling the pores,
εt is the porosity available for the transport (dimensionless),
δ is the constrictivity (dimensionless),
τ is the tortuosity (dimensionless).
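A one-line helper for the porous-medium estimate, assuming the convention De = D·εt·δ/τ for the formula above (some sources divide by the square of the tortuosity instead); all inputs are illustrative:

```python
def effective_diffusivity(d, porosity_t, constrictivity, tortuosity):
    """Effective diffusion coefficient De = D * eps_t * delta / tau."""
    return d * porosity_t * constrictivity / tortuosity

# Illustrative inputs: D = 0.15 cm^2/s, eps_t = 0.3, delta = 0.8, tau = 3.
print(effective_diffusivity(0.15, 0.3, 0.8, 3.0))
```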
The transport-available porosity equals the total porosity less the pores which, due to their size, are not accessible to the diffusing particles, and less dead-end and blind pores (i.e., pores that are not connected to the rest of the pore system). The constrictivity describes the slowing down of diffusion caused by the increased viscosity in narrow pores as a result of greater proximity to the average pore wall. It is a function of pore diameter and the size of the diffusing particles.
Example values
Gases at 1 atm., solutes in liquid at infinite dilution. Legend: (s) – solid; (l) – liquid; (g) – gas; (dis) – dissolved.
See also
Atomic diffusion
Effective diffusion coefficient
Lattice diffusion coefficient
Knudsen diffusion
References
Transport phenomena
Diffusion | Mass diffusivity | [
"Physics",
"Chemistry",
"Engineering"
] | 1,440 | [
"Transport phenomena",
"Chemical engineering",
"Physical phenomena",
"Diffusion"
] |
9,812,205 | https://en.wikipedia.org/wiki/Transition%20dipole%20moment | The transition dipole moment or transition moment, usually denoted for a transition between an initial state, , and a final state, , is the electric dipole moment associated with the transition between the two states. In general the transition dipole moment is a complex vector quantity that includes the phase factors associated with the two states. Its direction gives the polarization of the transition, which determines how the system will interact with an electromagnetic wave of a given polarization, while the square of the magnitude gives the strength of the interaction due to the distribution of charge within the system. The SI unit of the transition dipole moment is the Coulomb-meter (Cm); a more conveniently sized unit is the Debye (D).
Definition
A single charged particle
For a transition where a single charged particle changes state from to , the transition dipole moment is
where q is the particle's charge, r is its position, and the integral is over all space ( is shorthand for ). The transition dipole moment is a vector; for example its x-component is
In other words, the transition dipole moment can be viewed as an off-diagonal matrix element of the position operator, multiplied by the particle's charge.
Multiple charged particles
When the transition involves more than one charged particle, the transition dipole moment is defined in an analogous way to an electric dipole moment: The sum of the positions, weighted by charge. If the ith particle has charge qi and position operator ri, then the transition dipole moment is:
In terms of momentum
For a single, nonrelativistic particle of mass m, in zero magnetic field, the transition dipole moment between two energy eigenstates ψa and ψb can alternatively be written in terms of the momentum operator, using the relationship
This relationship can be proven starting from the commutation relation between position x and the Hamiltonian :
Then
However, assuming that ψa and ψb are energy eigenstates with energy Ea and Eb, we can also write
Similar relations hold for y and z, which together give the relationship above.
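For readers who want the intermediate steps spelled out, a compact LaTeX sketch of the commutator argument follows; the operator hats, state labels and sign conventions are the standard ones and are introduced here for clarity rather than quoted from the text:

```latex
% Sketch of the commutator argument for the x-component.
[\hat{x},\hat{H}] = \Big[\hat{x},\frac{\hat{p}_x^{2}}{2m}\Big] = \frac{i\hbar}{m}\,\hat{p}_x
\;\Longrightarrow\;
\langle\psi_b|[\hat{x},\hat{H}]|\psi_a\rangle = \frac{i\hbar}{m}\,\langle\psi_b|\hat{p}_x|\psi_a\rangle ,
\qquad
\langle\psi_b|[\hat{x},\hat{H}]|\psi_a\rangle = (E_a-E_b)\,\langle\psi_b|\hat{x}|\psi_a\rangle
\;\Longrightarrow\;
\langle\psi_b|\hat{x}|\psi_a\rangle = \frac{i\hbar}{m\,(E_a-E_b)}\,\langle\psi_b|\hat{p}_x|\psi_a\rangle .
```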
Analogy with a classical dipole
A basic, phenomenological understanding of the transition dipole moment can be obtained by analogy with a classical dipole. While the comparison can be very useful, care must be taken to ensure that one does not fall into the trap of assuming they are the same.
In the case of two classical point charges, and , with a displacement vector, , pointing from the negative charge to the positive charge, the electric dipole moment is given by
In the presence of an electric field, such as that due to an electromagnetic wave, the two charges will experience a force in opposite directions, leading to a net torque on the dipole. The magnitude of the torque is proportional to both the magnitude of the charges and the separation between them, and varies with the relative angles of the field and the dipole:
Similarly, the coupling between an electromagnetic wave and an atomic transition with transition dipole moment depends on the charge distribution within the atom, the strength of the electric field, and the relative polarizations of the field and the transition. In addition, the transition dipole moment depends on the geometries and relative phases of the initial and final states.
Origin
When an atom or molecule interacts with an electromagnetic wave of frequency , it can undergo a transition from an initial to a final state of energy difference through the coupling of the electromagnetic field to the transition dipole moment. When this transition is from a lower energy state to a higher energy state, this results in the absorption of a photon. A transition from a higher energy state to a lower energy state results in the emission of a photon. If the charge, , is omitted from the electric dipole operator during this calculation, one obtains as used in oscillator strength.
Applications
The transition dipole moment is useful for determining if transitions are allowed under the electric dipole interaction. For example, the transition from a bonding orbital to an antibonding orbital is allowed because the integral defining the transition dipole moment is nonzero. Such a transition occurs between an even and an odd orbital; the dipole operator, , is an odd function of , hence the integrand is an even function. The integral of an odd function over symmetric limits returns a value of zero, while for an even function this is not necessarily the case. This result is reflected in the parity selection rule for electric dipole transitions. The transition moment integral
of an electronic transition within similar atomic orbitals, such as s-s or p-p, is forbidden due to the triple integral returning an ungerade (odd) product. Such transitions only redistribute electrons within the same orbital and will return a zero product. If the triple integral returns a gerade (even) product, the transition is allowed.
See also
Wigner–Eckart theorem
References
Atomic physics
Photochemistry | Transition dipole moment | [
"Physics",
"Chemistry"
] | 1,004 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
9,817,744 | https://en.wikipedia.org/wiki/Quantum%20concentration | The quantum concentration is the particle concentration (i.e. the number of particles per unit volume) of a system where the interparticle distance is equal to the thermal de Broglie wavelength.
Quantum effects become appreciable when the particle concentration is greater than or equal to the quantum concentration, which is defined as nQ = (m kB T / (2π ħ^2))^(3/2),
where:
is the mass of the particles in the system
is the Boltzmann constant
is the temperature as measured in kelvins
is the reduced Planck constant
The quantum concentration for room temperature protons is about 1/cubic-Angstrom.
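A quick numerical check of this statement (a sketch; CODATA constants and 300 K are the only inputs, and the factor 1e-30 converts m^-3 into per cubic ångström):

```python
import math

HBAR = 1.054571817e-34     # reduced Planck constant, J*s
K_B = 1.380649e-23         # Boltzmann constant, J/K
M_PROTON = 1.67262192e-27  # proton mass, kg

def quantum_concentration(mass, temperature):
    """n_Q = (m * k_B * T / (2 * pi * hbar**2)) ** 1.5, in particles per m^3."""
    return (mass * K_B * temperature / (2.0 * math.pi * HBAR ** 2)) ** 1.5

n_q = quantum_concentration(M_PROTON, 300.0)
print(n_q, "m^-3, i.e. about", n_q * 1e-30, "per cubic angstrom")
```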
As the quantum concentration depends on temperature, high temperatures will put most systems in the classical limit unless they have a very high density, e.g. a white dwarf.
For an ideal gas the Sackur–Tetrode equation can be written in terms of the quantum concentration as
References
Statistical mechanics | Quantum concentration | [
"Physics"
] | 174 | [
"Statistical mechanics stubs",
"Statistical mechanics",
"Quantum mechanics",
"Quantum physics stubs"
] |
9,819,961 | https://en.wikipedia.org/wiki/Eicosanoid%20receptor | Most of the eicosanoid receptors are integral membrane protein G protein-coupled receptors (GPCRs) that bind and respond to eicosanoid signaling molecules. Eicosanoids are rapidly metabolized to inactive products and therefore are short-lived. Accordingly, the eicosanoid-receptor interaction is typically limited to a local interaction: cells, upon stimulation, metabolize arachidonic acid to an eicosanoid which then binds cognate receptors on either its parent cell (acting as an autocrine signalling molecule) or on nearby cells (acting as a paracrine signalling molecule) to trigger functional responses within a restricted tissue area, e.g. an inflammatory response to an invading pathogen. In some cases, however, the synthesized eicosanoid travels through the blood (acting as a hormone-like messenger) to trigger systemic or coordinated tissue responses, e.g. prostaglandin (PG) E2 released locally travels to the hypothalamus to trigger a febrile reaction (see ). An example of a non-GPCR receptor that binds many eicosanoids is the PPAR-γ nuclear receptor.
The following is a list of human eicosanoid GPCRs grouped according to the type of eicosanoid ligand that each binds:
Leukotriene
Leukotrienes:
BLT1 (Leukotriene B4 receptor) – ; BLT1 is the primary receptor for leukotriene B4. Relative potencies in binding to and stimulating BLT1 are: leukotriene B4>20-hydroxy-leukotriene B4>>12-Hydroxyeicosatetraenoic acid (R isomer) (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=267; also see ALOX12B and 12-Hydroxyeicosatetraenoic acid). BLT1 activation is associated with pro-inflammatory responses in cells, tissues, and animal models.
BLT2 (Leukotriene B4 receptor 2) – ; the receptor for 12-Hydroxyheptadecatrienoic acid, leukotriene B4, and certain other eicosanoids and polyunsaturated fatty acid metabolites (see BLT2). Relative potencies in binding to and stimulating BLT2 are: 12-hydroxyheptadecatrienoic acid (S isomer)>leukotriene B4>12-Hydroxyeicosatetraenoic acid (S isomer)= 12-hydroperoxyeicosatetraenoic acid (S isomer)>15-Hydroxyeicosatetraenoic acid (S isomer])>12-hydroxyeicosatetraenoic acid (R isomer)>20-hydroxy-leukotriene LTB4 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=268). Activation of BLT2 is associated with pro-inflammatory responses by cells and tissues.
CysLT1 (Cysteinyl leukotriene receptor 1) – ; CYSLTR1 is the receptor for Leukotriene C4 and Leukotriene D4; it binds and responds to leukotriene C4 more strongly than to leukotriene D4. Relative potencies for binding to and activating CYSLTR1 are: leukotriene C4≥ leukotriene D4>>leukotriene E4 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=270). Activation of this receptor is associated with pro-allergic responses in cells, tissues, and animal models.
CysLT2 (Cysteinyl leukotriene receptor 2) – ; Similar to CYSLTR1, CYSLTR2 is the receptor for Leukotriene C4 and Leukotriene D4; it binds and responds to the latter two ligands equally well. Relative potencies in binding to and stimulating CYSLTR2 are: leukotriene C4≥leukotriene D4>>leukotriene E4 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=270). Activation of this receptor is associated with pro-allergic responses in cells, tissues, and animal models.
GPR99/OXGR1 – ; GPR99, also known as the 2-oxoglutarate receptor 1 (OXGR1) or cysteinyl leukotriene receptor E (CysLTE), is a third CysLTR receptor; unlike CYSLTR1 and CYSLTR2, GPR99 binds and responds to Leukotriene E4 much more strongly than to leukotriene C4 or leukotriene D4. GPR99 is also the receptor for alpha-ketoglutarate, binding and responding to this ligand much more weakly than to any of the three cited leukotrienes. Activation of this receptor by LTC4 is associated with pro-allergic responses in cells and an animal model. The function of GPR99 as a receptor for leukotriene E4 has been confirmed in a mouse model of allergic rhinitis.
GPR17 – ; while one study reported that leukotriene C4, leukotriene D4, and leukotriene E4 bind to and activate GPR17 with equal potencies, many subsequent studies did not confirm this. GPR17, which is mainly expressed in the central nervous system, has also been reported to be the receptor for the purines, Adenosine triphosphate and Uridine diphosphate, and certain glycosylated uridine diphosphate purines (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=88) as well as to be involved in animal models of central nervous system Demyelinating reactions. However, recent reports failed to confirm the latter findings; a consensus of current opinion holds that the true ligand(s) for GPR17 remain to be defined (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=88).
Lipoxin
Lipoxins:
ALX/FPR2 (also termed FPR2, ALX, ALX/FPR, formyl peptide receptor-like 1) – ; receptor for Lipoxin A4 and 15-epi-Lipoxin A4 (or AT-LxA4) eicosanoids but also many other agents including the docosanoids resolvin D1, resolvin D2, and 17R-resolvin D1 (see specialized pro-resolving mediators; oligopeptides such as N-Formylmethionine-leucyl-phenylalanine; and various proteins such as the amino acid 1 to 42 fragment of Amyloid beta, Humanin, and the N-terminally truncated form of the chemotactic chemokine, CCL23 (see FPR2#Ligands and ligand-based disease-related activities). Relative potencies in binding to and activating ALX/FPR are: lipoxin A4=aspirin-triggered lipoxin A4>leukotriene C4=leukotriene D4>>15-deoxy-LXA4>>N-Formylmethionine-leucyl-phenylalanine (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=223}. Activation of ALX/FPR2 by the lipoxins is associated with anti-inflammatory responses by target cells and tissues. Receptors that bind and respond to a wide range of ligands with such seemingly different structural similarities as those of ALX/FPR are often termed promiscuous.
Resolvin E
Resolvin Es:
CMKLR1 – ; CMKLR1, also termed Chemokine like receptor 1 or ChemR23, is the receptor for the eicosanoids resolvin E1 and 18S-resolvin E2 (see specialized pro-resolving mediators) as well as for chemerin, an adipokine protein; relative potencies in binding to and activating CMKLR1 are: resolvin E1>chemerin C-terminal peptide>18R-hydroxy-eicosapentaenoic acid (18R-EPE)>eicosapentaenoic acid (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=79). Apparently, the resolvins activate this receptor in a different manner than chemerin: resolvins act through it to suppress while chemerin acts through it to stimulate pro-inflammatory responses in target cells
Oxoeicosanoid
Oxoeicosanoid:
Oxoeicosanoid (OXE) receptor 1 – ; OXER1 is the receptor for 5-oxo-eicosatetraenoic acid (5-oxo-ETE) as well as certain other eicosanoids and long-chain polyunsaturated fatty acids that possess a 5-hydroxy or 5-oxo residue (see 5-Hydroxyeicosatetraenoic acid); relative potencies of the latter metabolites in binding to and activating OXER1 are: 5-oxo-eicosatetraenoic acid>5-oxo-15-hydroxy-eicosatetraenoic acid> 5S-hydroperoxy-eicosatetraenoic acid>5-Hydroxyeicosatetraenoic acid; the 5-oxo-eicosatrienoic and 5-oxo-octadecadienoic acid analogs of 5-oxo-ETE are as potent as 5-oxo-ETE in stimulating this receptor (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=271). Activation of OXER1 is associated with pro-inflammatory and pro-allergic responses by cells and tissues as well as with the proliferation of various human cancer cell lines in culture.
Prostanoid
Prostanoids and Prostaglandin receptors
Prostanoids are prostaglandins (PG), thromboxanes (TX), and prostacyclins (PGI). Seven, structurally-related, prostanoid receptors fall into three categories based on the cell activation pathways and activities which they regulate. Relaxant prostanoid receptors (IP, DP1, EP2, and EP4) raise cellular cAMP levels; contractile prostanoid receptors (TP, FP, and EP1) mobilize intracellular calcium; and the inhibitory prostanoid receptor (EP3) lowers cAMP levels. A final prostanoid receptor, DP2, is structurally related to the chemotaxis class of receptors and unlike the other prostanoid receptors mediates eosinophil, basophil, and T helper cell (Th2 type) chemotactic responses. Prostanoids, particularly PGE2 and PGI2, are prominent regulators of inflammation and allergic responses as defined by studies primarily in animal models but also as suggested by studies with human tissues and, in certain cases, human subjects.
PGD2: DP-(PGD2) (PGD2 receptor)
DP1 (PTGDR1) – ; DP1 is a receptor for Prostaglandin D2; relative potencies in binding to and activating DP1 for the following prostanoids are: PGD2>>PGE2>PGF2α>PGI2=TXA2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=338). Activation of DP1 is associated with the promotion of inflammatory responses and the early stage of allergic responses; in a limited set of circumstances, however, DP1 activation may ameliorate inflammatory responses.
DP2 (PTGDR2) – ; DP2, also termed CRTH2, is a receptor for prostaglandin D2; relative potencies in binding to and stimulating DP2 are PGD2 >>PGF2α, PGE2>PGI2=TXA2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=339&familyId=58&familyType=GPCR). While DP2 activation causes the chemotaxis of pro-inflammatory cells such as basophils, eosinophils, and T cell lymphocytes, its deletion in mice is associated with a reduction in acute allergic responses in a rodent model. This and other observations suggest that DP2 and DP1 function to counteract each other.
PGE2: EP-(PGE2) (PGE2 receptor)
EP1-(PGE2) (PTGER1) – ; EP1 is a receptor for prostaglandin E2; relative potencies in binding to and stimulating EP1 are PGE2>PGF2α=PGI2>PGD2=TXA2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=346&familyId=58&familyType=GPCR). EP1 activation is associated with the promotion of inflammation, particularly in the area of inflammation-based pain perception, and asthma, particularly in the area of airways constriction.
EP2-(PGE2) (PTGER2) – ; EP2 is a receptor for prostaglandin E2; relative potencies in binding to and stimulating EP2 are PGE2>PGF2α=PGI2>PGD2=TXA2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=341). EP2 activation is associated with the suppression of inflammation and inflammation-induced pulmonary fibrosis reactions as well as allergic reactions.
EP3-(PGE2) (PTGER3) – ; EP3 is a receptor for prostaglandin E2; relative potencies in binding to and stimulating EP3 are PGE2>PGF2α=PGI2>PGD2+TXA2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=342). Activation of EP3 is associated with the suppression of the early and late phases of allergic responses; EP3 activation is also responsible for febrile responses to inflammation.
EP4-(PGE2) (PTGER4) – ; EP4 is a receptor for prostaglandin E2; relative potencies in binding to and stimulating EP4 are PGE2>PGF2α=PGI2>PGD2=TXA2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=343). EP4, particularly in association with EP2, activation is critical for the development of arthritis in different animal models.
PGF2α: FP-(PGF2α) (PTGFR) – ; FP is the receptor for prostaglandin F2 alpha; relative potencies in binding to and stimulating FP are PGF2α>PGD2>PGE2>PGI2=thromboxane A2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=344). This receptor is the least selective of the prostanoid receptors in that both PGD2 and PGE2 bind to and stimulate it with potencies close to that of PGF2α. FP has two splice variants, FPa and FPb, which differ in the length of their C-terminus tails. PGF2α-induced activation of FP has pro-inflammatory effects as well as roles in ovulation, luteolysis, contraction of uterine smooth muscle, and initiation of parturition. Analogs of PGF2α have been developed for estrus synchronization, abortion in domestic animals, influencing human reproductive function, and reducing intraocular pressure in glaucoma.
PGI2 (prostacyclin): IP-(PGI2) (PTGIR) – ; IP is the receptor for prostacyclin I2; relative potencies in binding to and stimulating IP are: PGI2>>PGD2= PGE2=PGF2α>TXA2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=345). Activation of IP is associated with the promotion of capillary permeability in inflammation and allergic responses as well as partial suppression of experimental arthritis in animal models. IP is expressed in at least three alternatively spliced isoforms which differ in the length of their C-terminus and which also activate different cellular signaling pathways and responses.
TXA2 (thromboxane): TP-(TXA2) (TBXA2R) – ; TP is the receptor for thromboxane A2; relative potencies in binding to and stimulating TP are TXA2=PGH2>>PGD2=PGE2=PGF2α=PGI2 (http://www.guidetopharmacology.org/GRAC/ObjectDisplayForward?objectId=346&familyId=58&familyType=GPCR). In addition to PGH2, several isoprostanes have been found to be potent stimulators of and to act in part through TP. The TP receptor is expressed in most human cells types as two alternatively spliced isoforms, TP receptor-α and TP receptor β, which differ in the length of their C-terminus tail; these isoforms communicate with different G proteins, undergo heterodimerization, and thereby result in different changes in intracellular signaling (only the TP receptor α is expressed in mice). Activation of TP by TXA2 or isoprostanes is associated with pro-inflammatory responses in cells, tissues, and animal models. TP activation is also associated with the promotion of platelet aggregation and thereby blood clotting and thrombosis.
References
External links
G protein-coupled receptors | Eicosanoid receptor | [
"Chemistry"
] | 4,062 | [
"G protein-coupled receptors",
"Signal transduction"
] |
12,224,054 | https://en.wikipedia.org/wiki/Osmolyte | Osmolytes are low-molecular-weight organic compounds that influence the properties of biological fluids. Osmolytes are a class of organic molecules that play a significant role in regulating osmotic pressure and maintaining cellular homeostasis in various organisms, particularly in response to environmental stressors. Their primary role is to maintain the integrity of cells by affecting the viscosity, melting point, and ionic strength of the aqueous solution. When a cell swells due to external osmotic pressure, membrane channels open and allow efflux of osmolytes carrying water, restoring normal cell volume.
These molecules are involved in counteracting the effects of osmotic stress, which occurs when there are fluctuations in the concentration of solutes (such as ions and sugars) inside and outside cells. Osmolytes help cells adapt to changing osmotic conditions, thereby ensuring their survival and functionality. Osmolytes also interact with the constituents of the cell, e.g., they influence protein folding. Common osmolytes include amino acids, sugars and polyols, methylamines, methylsulfonium compounds, and urea.
Case studies
Natural osmolytes that can act as osmoprotectants include trimethylamine N-oxide (TMAO), dimethylsulfoniopropionate, sarcosine, betaine, glycerophosphorylcholine, myo-inositol, taurine, glycine, and others. Bacteria accumulate osmolytes for protection against a high osmotic environment. The osmolytes are neutral non-electrolytes, except in bacteria that can tolerate salts. In humans, osmolytes are of particular importance in the renal medulla.
Osmolytes are present in the cells of fish, and function to protect the cells from water pressure. As the osmolyte concentration in fish cells scales linearly with pressure and therefore depth, osmolytes have been used to calculate the maximum depth where a fish can survive. Fish cells reach a maximum concentration of osmolytes at depths of approximately , with no fish ever being observed beyond .
References
Further reading
Diffusion
Solutions | Osmolyte | [
"Physics",
"Chemistry"
] | 459 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Homogeneous chemical mixtures",
"Solutions"
] |
12,226,979 | https://en.wikipedia.org/wiki/Peano%E2%80%93Russell%20notation | In mathematical logic, Peano–Russell notation was Bertrand Russell's application of Giuseppe Peano's logical notation to the logical notions of Frege and was used in the writing of Principia Mathematica in collaboration with Alfred North Whitehead:
"The notation adopted in the present work is based upon that of Peano, and the following explanations are to some extent modelled on those which he prefixes to his Formulario Mathematico." (Chapter I: Preliminary Explanations of Ideas and Notations, page 4)
Variables
In the notation, variables are ambiguous in denotation, preserve a recognizable identity when appearing in various places in logical statements within a given context, and any two variables have ranges of possible determination that are either the same or different. When the possible determination is the same for both variables, then one implies the other; otherwise, the possible determination of one given to the other produces a meaningless phrase. The alphabetic symbol set for variables includes the lower and upper case Roman letters as well as many from the Greek alphabet.
Fundamental functions of propositions
The four fundamental functions are the contradictory function, the logical sum, the logical product, and the implicative function.
Contradictory function
The contradictory function applied to a proposition returns its negation.
Logical sum
The logical sum applied to two propositions returns their disjunction.
Logical product
The logical product applied to two propositions returns the truth-value of both propositions being simultaneously true.
Implicative function
The implicative function applied to two ordered propositions returns the truth value of the first implying the second proposition.
More complex functions of propositions
Equivalence is written as , standing for .
Assertion is the same as the making of a statement between two full stops.
An asserted proposition is either true or an error on the part of the writer.
Inference is equivalent to the rule modus ponens, where
In addition to the logical product, dots are also used to show groupings of functions of propositions. In the above example, the dot before the final implication function symbol groups all of the previous functions on that line together as the antecedent to the final consequent.
The notation includes definitions as complex functions of propositions, using the equals sign "=" to separate the defined term from its symbolic definition, ending with the letters "Df".
Notes
References
Russell, Bertrand and Alfred North Whitehead (1910). Principia Mathematica Cambridge, England: The University Press.
External links
Proof theory | Peano–Russell notation | [
"Mathematics"
] | 505 | [
"Mathematical logic",
"Proof theory"
] |
12,230,039 | https://en.wikipedia.org/wiki/Isomorphism%20extension%20theorem | In field theory, a branch of mathematics, the isomorphism extension theorem is an important theorem regarding the extension of a field isomorphism to a larger field.
Isomorphism extension theorem
The theorem states that given any field , an algebraic extension field of and an isomorphism mapping onto a field then can be extended to an isomorphism mapping onto an algebraic extension of (a subfield of the algebraic closure of ).
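A symbolic restatement of the theorem may help; the letters F, F', E, σ and τ below are introduced here purely for readability and do not appear in the source text:

```latex
% Isomorphism extension theorem (symbolic restatement).
\text{Let } E/F \text{ be an algebraic field extension and } \sigma : F \to F' \text{ a field isomorphism.}
\text{Then } \sigma \text{ extends to an isomorphism } \tau : E \to \tau(E) \text{ with } \tau|_{F} = \sigma
\text{ and } \tau(E) \subseteq \overline{F'}, \text{ an algebraic extension of } F'.
```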
The proof of the isomorphism extension theorem depends on Zorn's lemma.
References
D.J. Lewis, Introduction to algebra, Harper & Row, 1965, Chap.IV.12, p.193.
Field (mathematics)
Theorems in abstract algebra | Isomorphism extension theorem | [
"Mathematics"
] | 137 | [
"Theorems in algebra",
"Theorems in abstract algebra"
] |
12,230,337 | https://en.wikipedia.org/wiki/Rowland%20ring | Rowland's ring (aka Rowland ring) is an experimental arrangement for the measurement of the hysteresis curve of a sample of magnetic material. It was developed by Henry Augustus Rowland.
The geometry of a Rowland's ring is usually a toroid of magnetic material around which is closely wound a magnetization coil consisting of a large number of windings to magnetize the material, and a sampling coil consisting of a smaller number of windings to sample the induced magnetic flux. The electric current flowing in the magnetization coil dictates the magnetic field intensity in the material. The sampling coil produces a voltage proportional to the rate of change of the magnetic field in the material. By measuring the time integral of the voltage in the sampling coil versus the current in the magnetization coil, one obtains the hysteresis curve.
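A sketch of how the raw measurement is reduced to a hysteresis curve; the coil counts, geometry and waveforms below are illustrative placeholders, and the relations used are the usual toroid formulas H = N·I/(2π·r_mean) and B = (1/(N_s·A))·∫V dt rather than anything specified in the text:

```python
import numpy as np

def hysteresis_curve(t, i_mag, v_sample, n_mag, n_sample, area, r_mean):
    """Recover (H, B) samples from the magnetization current and the
    sampling-coil voltage of a Rowland ring.

    t: time samples (s); i_mag: magnetization current (A);
    v_sample: induced sampling-coil voltage (V); n_mag, n_sample: turn counts;
    area: core cross-section (m^2); r_mean: mean toroid radius (m).
    """
    h = n_mag * i_mag / (2.0 * np.pi * r_mean)  # field intensity, A/m
    # Trapezoidal time integral of the sampling-coil voltage gives the flux.
    flux = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * 0.5 * (v_sample[1:] + v_sample[:-1]))))
    b = flux / (n_sample * area)                # flux density, T
    return h, b

# Illustrative drive waveform and placeholder "measured" voltage.
t = np.linspace(0.0, 1.0, 1000)
i_mag = 2.0 * np.sin(2.0 * np.pi * t)
v_sample = 0.05 * np.cos(2.0 * np.pi * t)
h, b = hysteresis_curve(t, i_mag, v_sample,
                        n_mag=400, n_sample=50, area=1.0e-4, r_mean=0.05)
print(h[:3], b[:3])
```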
See also
Electromagnetic induction
External links
Photo of a Rowland's ring
References
Paul Lorrain and Dale Corson, "Electromagnetic Fields and Waves, 2nd ed", W.H. Freeman and Company (1970).
Electromagnetism | Rowland ring | [
"Physics",
"Materials_science"
] | 217 | [
"Electromagnetism",
"Materials science stubs",
"Physical phenomena",
"Fundamental interactions",
"Electromagnetism stubs"
] |
12,230,646 | https://en.wikipedia.org/wiki/Comparative%20Biochemistry%20and%20Physiology%20B | Comparative Biochemistry and Physiology Part B: Biochemistry & Molecular Biology is a peer-reviewed scientific journal that covers research in biochemistry, physiology, and molecular biology.
External links
Biochemistry journals
Physiology journals
Elsevier academic journals
Academic journals established in 1971 | Comparative Biochemistry and Physiology B | [
"Chemistry"
] | 47 | [
"Biochemistry stubs",
"Biochemistry journals",
"Biochemistry literature",
"Biochemistry journal stubs"
] |
12,231,320 | https://en.wikipedia.org/wiki/Sarcospan | Sarcospan is a protein that in humans is encoded by the SSPN gene.
Originally identified as Kirsten ras associated gene (KRAG), sarcospan is a 25-kDa transmembrane protein located in the dystrophin-associated protein complex of skeletal muscle cells, where it is most abundant. It contains four transmembrane spanning helices with both N- and C-terminal domains located intracellularly. Loss of SSPN expression occurs in patients with Duchenne muscular dystrophy. Dystrophin is required for proper localization of SSPN. SSPN is also an essential regulator of Akt signaling pathways. Without SSPN, Akt signaling pathways will be hindered and muscle regeneration will not occur.
Function
Sarcospan is a protein that plays a crucial role in muscle health and function. It is part of the dystrophin-associated glycoprotein complex (DGC), which is a protein complex found in muscle cells that helps to maintain the structural integrity of muscle fibers. Sarcospan interacts with other proteins in the DGC, and mutations in the gene that encodes sarcospan can lead to muscular dystrophy, a group of genetic disorders characterized by progressive muscle weakness and degeneration.
Sarcospan has multiple functions within the DGC that contribute to its role in muscle health. The DGC is a complex of proteins that spans the cell membrane of muscle cells and links the extracellular matrix to the intracellular cytoskeleton, providing stability and integrity to the muscle fiber. Sarcospan is one of the components of the DGC and interacts with other proteins in the complex, including dystrophin, syntrophins, and dystroglycans.
One of the key functions of sarcospan is to help stabilize the DGC and promote its proper localization at the muscle cell membrane. Sarcospan interacts with dystroglycans, which are transmembrane proteins that connect the DGC to the extracellular matrix. This interaction helps to anchor the DGC to the muscle cell membrane and contributes to the overall stability of the muscle fiber. Additionally, sarcospan interacts with syntrophins, which are adapter proteins that link the DGC to the actin cytoskeleton inside the muscle cell. This interaction helps to maintain the structural integrity of the muscle fiber and is important for muscle contraction and force generation.
Cell signaling
Sarcospan also plays a role in signaling pathways that are involved in muscle growth and regeneration. Studies have shown that sarcospan can regulate the activity of certain signaling molecules, such as focal adhesion kinase (FAK), which is involved in cell adhesion and migration. Sarcospan has been implicated in the regulation of muscle stem cells, known as satellite cells, which are responsible for muscle regeneration after injury or damage. Sarcospan has been shown to modulate satellite cell activation and migration, suggesting that it may have a role in muscle repair and regeneration processes.
Sarcospan is primarily localized to the muscle cell membrane, specifically at the neuromuscular junction (NMJ) and the sarcolemma, which is the plasma membrane of muscle cells. The NMJ is the specialized synapse between the motor neuron and the muscle fiber, where nerve impulses are transmitted to the muscle to initiate contraction. The DGC, including sarcospan, is enriched at the NMJ, where it plays a critical role in maintaining the integrity of the muscle membrane and ensuring proper neuromuscular signaling.
In addition to the NMJ, sarcospan is also localized along the sarcolemma, which is the continuous plasma membrane that surrounds the entire muscle fiber. Sarcospan is distributed in a striated pattern along the sarcolemma, suggesting that it may have specific roles in different regions of the muscle fiber. The precise localization of sarcospan to the NMJ and the sarcolemma is important for its function in stabilizing the DGC and promoting muscle integrity.
Mutations and diseases
Mutations in the gene that encodes sarcospan have been implicated in the development of muscular dystrophy, which is a group of genetic disorders characterized by progressive muscle weakness and degeneration. Muscular dystrophy is caused by mutations in various genes that are involved in the structure and function of muscle, including dystrophin, which is a key component of the DGC that interacts with sarcospan.
The loss of dystrophin results in muscular dystrophy. SSPN upregulates the levels of Utrophin-glycoprotein complex (UGC) to make up for the loss of dystrophin in the neuromuscular junction. Sarcoglycans bind to SSPN and form the SG-SSPN complex, which interacts with dystroglycans (DG) and Utrophin, leading to the formation of the UGC. SSPN regulates the amount of Utrophin produced by the UGC to restore laminin binding due to the absence of dystrophin. If laminin binding is not restored by SSPN, contraction of the membrane is present. In dystrophic mdx mice, SSPN increases levels of Utrophin and restores the levels of laminin binding, reducing the symptoms of muscular dystrophy.
Research applications
The study of sarcospan has important research applications that may contribute to the development of therapeutic interventions for muscular dystrophy and other muscle-related disorders.
Therapeutic strategies
The elucidation of the role of sarcospan in muscular dystrophy has led to the exploration of potential therapeutic strategies that target sarcospan or the DGC. For example, approaches aimed at restoring sarcospan expression or function have been investigated as potential therapeutic interventions for muscular dystrophy. Gene therapy techniques, such as viral-mediated gene delivery, have been explored to restore sarcospan expression in muscle cells, with promising results in preclinical studies. Additionally, gene editing technologies, such as CRISPR-Cas9, have been used to correct sarcospan mutations in muscle cells, offering potential gene-based therapeutic approaches for muscular dystrophy.
Drug development
Sarcospan has been considered as a potential target for drug development in the treatment of muscular dystrophy. Small molecule compounds that can modulate sarcospan function or stabilize the DGC have been explored as potential therapeutic agents. For example, studies have shown that targeting specific signaling pathways, such as the FAK pathway, which is regulated by sarcospan, can improve muscle function in animal models of muscular dystrophy. Additionally, compounds that can enhance the stability or localization of the DGC, including sarcospan, have been investigated for their potential to ameliorate muscle membrane fragility and reduce muscle damage in muscular dystrophy.
Biomarker development
Sarcospan has been proposed as a potential biomarker for muscular dystrophy and other muscle-related disorders. Biomarkers are measurable indicators that can provide information about disease status, progression, and response to treatment. Sarcospan levels in blood or other biological samples may reflect the integrity of the DGC and muscle membrane, and changes in sarcospan levels may be indicative of disease progression or response to therapeutic interventions. Development of sarcospan as a biomarker may aid in diagnosis, prognosis, and monitoring of muscular dystrophy and other muscle-related disorders.
Mechanistic studies
Research on sarcospan has provided insights into the molecular mechanisms underlying muscle development, regeneration, and disease. Studies using animal models or cell culture systems have helped to elucidate the role of sarcospan in the stability and function of the DGC, its involvement in signaling pathways, and its contribution
References
Proteins | Sarcospan | [
"Chemistry"
] | 1,712 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
3,210,880 | https://en.wikipedia.org/wiki/David%20Ruelle | David Pierre Ruelle (; born 20 August 1935) is a Belgian and naturalized French mathematical physicist. He has worked on statistical physics and dynamical systems. With Floris Takens, Ruelle coined the term strange attractor, and developed a new theory of turbulence.
Biography
Ruelle studied physics at the Free University of Brussels, obtaining a PhD degree in 1959 under the supervision of Res Jost. He spent two years (1960–1962) at the ETH Zurich, and another two years (1962–1964) at the Institute for Advanced Study in Princeton, New Jersey. In 1964, he became professor at the Institut des Hautes Études Scientifiques in Bures-sur-Yvette, France. Since 2000, he has been an emeritus professor at IHES and distinguished visiting professor at Rutgers University.
David Ruelle made fundamental contributions in various aspects of mathematical physics. In quantum field theory, the most important contribution is the rigorous formulation of scattering processes based on Wightman's axiomatic theory. This approach is known as the Haag–Ruelle scattering theory. Later Ruelle helped to create a rigorous theory of statistical mechanics of equilibrium, that includes the study of the thermodynamic limit, the equivalence of ensembles, and the convergence of Mayer's series. A further result is the Asano-Ruelle lemma, which allows the study of the zeros of certain polynomial functions that are recurrent in statistical mechanics.
The study of infinite systems led to the local definition of Gibbs states or to the global definition of equilibrium states. Ruelle demonstrated with Roland L. Dobrushin and Oscar E. Lanford that translationally invariant Gibbs states are precisely the equilibrium states.
Together with Floris Takens, he proposed the description of hydrodynamic turbulence based on strange attractors with chaotic properties of hyperbolic dynamics.
Honors and awards
Since 1985 David Ruelle has been a member of the French Academy of Sciences and in 1988 he was Josiah Willard Gibbs Lecturer in Atlanta, Georgia. Since 1992 he has been an international honorary member of the American Academy of Arts and Sciences and since 1993 ordinary member of the Academia Europaea. Since 2002 he has been an international member of the United States National Academy of Sciences and since 2003 a foreign member of the Accademia Nazionale dei Lincei. Since 2012 he has been a fellow of the American Mathematical Society.
In 1985 David Ruelle was awarded the Dannie Heineman Prize for Mathematical Physics and in 1986 he received the Boltzmann Medal for his outstanding contributions to statistical mechanics. In 1993 he won the Holweck Prize and in 2004 he received the Matteucci Medal. In 2006 he was awarded the Henri Poincaré Prize and in 2014 he was honored with the prestigious Max Planck Medal for his achievements in theoretical physics. In 2022, Ruelle was awarded the ICTP's Dirac Medal for Mathematical Physics, along with Elliott H. Lieb and Joel Lebowitz, "for groundbreaking and mathematically rigorous contributions to the understanding of the statistical mechanics of classical and quantum physical systems".
Selected publications
See also
Axiomatic quantum field theory
Chaos theory
Dynamical systems theory
Dobrushin–Lanford–Ruelle equations
Fluid mechanics
Haag–Ruelle scattering theory
Ruelle zeta-function
Sinai–Ruelle–Bowen measure
Statistical physics
Strange attractor
Transfer operator
References
External links
1935 births
Living people
20th-century French mathematicians
21st-century French mathematicians
Belgian mathematicians
Belgian physicists
Chaos theorists
Free University of Brussels (1834–1969) alumni
Members of the French Academy of Sciences
Foreign associates of the National Academy of Sciences
Recipients of the Great Cross of the National Order of Scientific Merit (Brazil)
Fellows of the American Mathematical Society
Academic staff of ETH Zurich
Institute for Advanced Study visiting scholars
Rutgers University faculty
Recipients of the Matteucci Medal
Belgian emigrants to France
Winners of the Max Planck Medal
Mathematical physicists
Statistical physicists
Members of Academia Europaea | David Ruelle | [
"Physics"
] | 810 | [
"Statistical physicists",
"Statistical mechanics"
] |
3,211,121 | https://en.wikipedia.org/wiki/Electromagnetic%20reverberation%20chamber | An electromagnetic reverberation chamber (also known as a reverb chamber (RVC) or mode-stirred chamber (MSC)) is an environment for electromagnetic compatibility (EMC) testing and other electromagnetic investigations. Electromagnetic reverberation chambers have been introduced first by H.A. Mendes in 1968. A reverberation chamber is screened room with a minimum of absorption of electromagnetic energy. Due to the low absorption, very high field strength can be achieved with moderate input power. A reverberation chamber is a cavity resonator with a high Q factor. Thus, the spatial distribution of the electrical and magnetic field strengths is strongly inhomogeneous (standing waves). To reduce this inhomogeneity, one or more tuners (stirrers) are used. A tuner is a construction with large metallic reflectors that can be moved to different orientations in order to achieve different boundary conditions. The Lowest Usable Frequency (LUF) of a reverberation chamber depends on the size of the chamber and the design of the tuner. Small chambers have a higher LUF than large chambers.
The concept of a reverberation chamber is comparable to a microwave oven.
Glossary/notation
Preface
The notation is mainly the same as in the IEC standard 61000-4-21. For statistical quantities like mean and maximal values, a more explicit notation is used in order to emphasize the domain used. Here, spatial domain (subscript ) means that quantities are taken for different chamber positions, and ensemble domain (subscript ) refers to different boundary or excitation conditions (e.g. tuner positions).
General
: Vector of the electric field.
: Vector of the magnetic field.
: The total electrical or magnetical field strength, i.e. the magnitude of the field vector.
: Field strength (magnitude) of one rectangular component of the electrical or magnetical field vector.
: Characteristic impedance of free space
: Efficiency of the transmitting antenna
: Efficiency of the receiving antenna
: Power of the forward and backward running waves.
: The quality factor.
Statistics
: spatial mean of for objects (positions in space).
: ensemble mean of for objects (boundaries, i.e. tuner positions).
: equivalent to . This is the expected value in statistics.
: spatial maximum of for objects (positions in space).
: ensemble maximum of for objects (boundaries, i.e. tuner positions).
: equivalent to .
: max to mean ratio in the spatial domain.
: max to mean ratio in the ensemble domain.
Theory
Cavity resonator
A reverberation chamber is a cavity resonator—usually a screened room—that is operated in the overmoded region. To understand what that means, we have to investigate cavity resonators briefly.
For rectangular cavities, the resonance frequencies (or eigenfrequencies, or natural frequencies) are given by
f(m,n,p) = (c/2)·sqrt((m/l)^2 + (n/w)^2 + (p/h)^2),
where c is the speed of light, l, w and h are the cavity's length, width and height, and m, n, p are non-negative integers (at most one of those can be zero).
With that equation, the number of modes with an eigenfrequency less than a given limit , , can be counted. This results in a stepwise function. In principle, two modes—a transversal electric mode and a transversal magnetic mode —exist for each eigenfrequency.
The fields at the chamber position are given by
for the TM modes ()
for the TE modes ()
Due to the boundary conditions for the E- and H field, some modes do not exist. The restrictions are:
For TM modes: m and n can not be zero, p can be zero
For TE modes: m or n can be zero (but not both can be zero), p can not be zero
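A brute-force count of the eigenfrequencies below a given limit, using the rectangular-cavity formula and the TE/TM index restrictions above (the chamber dimensions here are illustrative); the result can be compared with the smooth approximation discussed next:

```python
import math
from itertools import product

C0 = 299792458.0  # speed of light, m/s

def count_modes(l, w, h, f_max):
    """Count TE/TM eigenmodes of an l x w x h rectangular cavity below f_max (Hz)."""
    count = 0
    n_max = int(2.0 * f_max * max(l, w, h) / C0) + 1
    for m, n, p in product(range(n_max + 1), repeat=3):
        f = 0.5 * C0 * math.sqrt((m / l) ** 2 + (n / w) ** 2 + (p / h) ** 2)
        if f == 0.0 or f > f_max:
            continue
        if m > 0 and n > 0:             # TM modes: m, n nonzero; p may be zero
            count += 1
        if p > 0 and (m > 0 or n > 0):  # TE modes: p nonzero; m or n may be zero
            count += 1
    return count

# Illustrative 5 m x 4 m x 3 m chamber, counted up to 200 MHz, compared with
# the leading smooth-approximation (Weyl) term 8*pi*V*f^3 / (3*c^3).
l, w, h, f_max = 5.0, 4.0, 3.0, 200.0e6
print(count_modes(l, w, h, f_max))
print(8.0 * math.pi / 3.0 * l * w * h * (f_max / C0) ** 3)
```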
A smooth approximation of , , is given by
The leading term is proportional to the chamber volume and to the third power of the frequency. This term is identical to Weyl's formula.
Based on the mode density is given by
An important quantity is the number of modes in a certain frequency interval , , that is given by
Quality factor
The Quality Factor (or Q Factor) is an important quantity for all resonant systems. Generally, the Q factor is defined by
where the maximum and the average are taken over one cycle, and is the angular frequency.
The factor Q of the TE and TM modes can be calculated from the fields. The stored energy is given by
The loss occurs in the metallic walls. If the wall's electrical conductivity is and its permeability is , the surface resistance is
where is the skin depth of the wall material.
The losses are calculated according to
For a rectangular cavity follows
for TE modes:
for TM modes:
Using the Q values of the individual modes, an averaged Composite Quality Factor can be derived:
includes only losses due to the finite conductivity of the chamber walls and is therefore an upper limit. Other losses are dielectric losses e.g. in antenna support structures, losses due to wall coatings, and leakage losses. For the lower frequency range the dominant loss is due to the antenna used to couple energy to the room (transmitting antenna, Tx) and to monitor the fields in the chamber (receiving antenna, Rx). This antenna loss is given by
where is the number of antennas in the chamber.
The quality factor including all losses is the harmonic sum of the factors for all single loss processes:
As a result of the finite quality factor, the eigenmodes are broadened in frequency, i.e. a mode can be excited even if the operating frequency does not exactly match the eigenfrequency. Therefore, more eigenmodes are excited for a given frequency at the same time.
The Q-bandwidth is a measure of the frequency bandwidth over which the modes in a reverberation chamber are correlated. The Q-bandwidth of a reverberation chamber can be calculated using the following:
Using the formula above, the number of modes excited within this bandwidth is
Related to the chamber quality factor is the chamber time constant by
That is the time constant of the free energy relaxation of the chamber's field (exponential decay) if the input power is switched off.
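A small helper tying these last two relations together, assuming the standard forms BW_Q = f/Q and τ = Q/(2πf), which are stated here as assumptions since the displayed formulas themselves are not reproduced in the text:

```python
import math

def q_bandwidth(frequency, q_factor):
    """Q-bandwidth BW_Q = f / Q (assumed standard form)."""
    return frequency / q_factor

def chamber_time_constant(frequency, q_factor):
    """Free-decay time constant tau = Q / (2 * pi * f) (assumed standard form)."""
    return q_factor / (2.0 * math.pi * frequency)

# Illustrative values: 1 GHz operating frequency, loaded Q of 5000.
print(q_bandwidth(1.0e9, 5000.0), chamber_time_constant(1.0e9, 5000.0))
```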
See also
Anechoic chamber
Reverberation room
Echo chamber
Integrating sphere
GTEM cell
Notes
References
Crawford, M.L.; Koepke, G.H.: Design, Evaluation, and Use of a Reverberation Chamber for Performing Electromagnetic Susceptibility/Vulnerability Measurements, NBS Technical Note 1092, National Bureau od Standards, Boulder, CO, April, 1986.
Ladbury, J.M.; Koepke, G.H.: Reverberation chamber relationships: corrections and improvements or three wrongs can (almost) make a right, Electromagnetic Compatibility, 1999 IEEE International Symposium on, Volume 1, 1–6, 2–6 August 1999.
Electromagnetic radiation | Electromagnetic reverberation chamber | [
"Physics"
] | 1,422 | [
"Electromagnetic radiation",
"Physical phenomena",
"Radiation"
] |
3,211,228 | https://en.wikipedia.org/wiki/Jacobsen%20epoxidation | The Jacobsen epoxidation, sometimes also referred to as Jacobsen-Katsuki epoxidation is a chemical reaction which allows enantioselective epoxidation of unfunctionalized alkyl- and aryl- substituted alkenes. It is complementary to the Sharpless epoxidation (used to form epoxides from the double bond in allylic alcohols). The Jacobsen epoxidation gains its stereoselectivity from a C2 symmetric manganese(III) salen-like ligand, which is used in catalytic amounts. The manganese atom transfers an oxygen atom from chlorine bleach or similar oxidant. The reaction takes its name from its inventor, Eric Jacobsen, with Tsutomu Katsuki sometimes being included. Chiral-directing catalysts are useful to organic chemists trying to control the stereochemistry of biologically active compounds and develop enantiopure drugs.
Several improved procedures have been developed.
A general reaction scheme follows:
History
In the early 1990s, Jacobsen and Katsuki independently released their initial findings about their catalysts for the enantioselective epoxidation of isolated alkenes. In 1991, Jacobsen published work where he attempted to perfect the catalyst. He was able to obtain ee values above 90% for a variety of ligands. Also, the amount of catalyst used was no more than 15% of the amount of alkene used in the reaction.
General features
The degree of enantioselectivity depends on numerous factors, namely the structure of the alkene, the nature of the axial donor ligand on the active oxomanganese species and the reaction temperature. Cyclic and acyclic cis-1,2-disubstituted alkenes are epoxidized with almost 100% enantioselectivity, whereas trans-1,2-disubstituted alkenes are poor substrates for Jacobsen's catalysts but give higher enantioselectivities when Katsuki's catalysts are used. Furthermore, the enantioselectivity of the epoxidation of conjugated dienes is much higher than that of nonconjugated dienes.
The enantioselectivity is explained by either a "top-on" approach (Jacobsen) or by a "side-on" approach (Katsuki) of the alkene.
Mechanism
The mechanism of the Jacobsen–Katsuki epoxidation is not fully understood, but most likely a manganese(V)-species (similar to the ferryl intermediate of Cytochrome P450) is the reactive intermediate which is formed upon the oxidation of the Mn(III)-salen complex. There are three major pathways. The concerted pathway, the metalla oxetane pathway and the radical pathway. The most accepted mechanism is the concerted pathway mechanism. After the formation of the Mn(V) complex, the catalyst is activated and therefore can form epoxides with alkenes. The alkene comes in from the "top-on" approach (above the plane of the catalyst) and the oxygen atom now is bonded to the two carbon atoms (previously C=C bond) and is still bonded to the manganese metal. Then, the Mn–O bond breaks and the epoxide is formed. The Mn(III)-salen complex is regenerated, which can then be oxidized again to form the Mn(V) complex.
The radical intermediate accounts for the formation of mixed epoxides when conjugated dienes are used as substrates.
Dimethyldioxirane (DMD) can be used as a source of oxygen atoms. Two general strategies exist for catalytic asymmetric epoxidation: (1) the use of a chiral metal catalyst followed by epoxidation, or (2) epoxidation by chiral dioxiranes, which are generated in situ from a catalytic amount of ketone and a stoichiometric amount of a terminal oxidant. Mn-salen complexes have been used with success to accomplish the first strategy.
References
Epoxidation reactions
Organic oxidation reactions
Name reactions | Jacobsen epoxidation | [
"Chemistry"
] | 860 | [
"Name reactions",
"Organic oxidation reactions",
"Ring forming reactions",
"Organic reactions"
] |
3,212,091 | https://en.wikipedia.org/wiki/Local%20cohomology | In algebraic geometry, local cohomology is an algebraic analogue of relative cohomology. Alexander Grothendieck introduced it in seminars in Harvard in 1961 written up by , and in 1961-2 at IHES written up as SGA2 - , republished as . Given a function (more generally, a section of a quasicoherent sheaf) defined on an open subset of an algebraic variety (or scheme), local cohomology measures the obstruction to extending that function to a larger domain. The rational function , for example, is defined only on the complement of on the affine line over a field , and cannot be extended to a function on the entire space. The local cohomology module (where is the coordinate ring of ) detects this in the nonvanishing of a cohomology class . In a similar manner, is defined away from the and axes in the affine plane, but cannot be extended to either the complement of the -axis or the complement of the -axis alone (nor can it be expressed as a sum of such functions); this obstruction corresponds precisely to a nonzero class in the local cohomology module .
Outside of algebraic geometry, local cohomology has found applications in commutative algebra, combinatorics, and certain kinds of partial differential equations.
Definition
In the most general geometric form of the theory, sections are considered of a sheaf of abelian groups, on a topological space , with support in a closed subset , The derived functors of form local cohomology groups
In the theory's algebraic form, the space X is the spectrum Spec(R) of a commutative ring R (assumed to be Noetherian throughout this article) and the sheaf F is the quasicoherent sheaf associated to an R-module M, denoted by . The closed subscheme Y is defined by an ideal I. In this situation, the functor ΓY(F) corresponds to the I-torsion functor, a union of annihilators
i.e., the elements of M which are annihilated by some power of I. As a right derived functor, the ith local cohomology module with respect to I is the ith cohomology group of the chain complex obtained from taking the I-torsion part of an injective resolution of the module . Because consists of R-modules and R-module homomorphisms, the local cohomology groups each have the natural structure of an R-module.
The I-torsion part may alternatively be described as
and for this reason, the local cohomology of an R-module M agrees with a direct limit of Ext modules,
It follows from either of these definitions that would be unchanged if were replaced by another ideal having the same radical. It also follows that local cohomology does not depend on any choice of generators for I, a fact which becomes relevant in the following definition involving the Čech complex.
Using Koszul and Čech complexes
The derived functor definition of local cohomology requires an injective resolution of the module , which can make it inaccessible for use in explicit computations. The Čech complex is seen as more practical in certain contexts. , for example, state that they "essentially ignore" the "problem of actually producing any one of these [injective] kinds of resolutions for a given module" prior to presenting the Čech complex definition of local cohomology, and describes Čech cohomology as "giv[ing] a practical method for computing cohomology of quasi-coherent sheaves on a scheme." and as being "well suited for computations."
The Čech complex can be defined as a colimit of Koszul complexes where generate . The local cohomology modules can be described as:
Koszul complexes have the property that multiplication by induces a chain complex morphism that is homotopic to zero, meaning is annihilated by the . A non-zero map in the colimit of the sets contains maps from all but finitely many of the Koszul complexes, which are not annihilated by some element in the ideal.
This colimit of Koszul complexes is isomorphic to the Čech complex, denoted , below.
where the ith local cohomology module of with respect to is isomorphic to the ith cohomology group of the above chain complex,
The broader issue of computing local cohomology modules (in characteristic zero) is discussed in and .
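For orientation, the Čech complex can be written out in the smallest nontrivial case of an ideal generated by two elements $f$ and $g$. This special case is shown only as an illustration of the general construction described above.

```latex
% Cech complex of M with respect to I = (f, g) (illustrative special case)
\[
  \check{C}^{\bullet}(f,g;M):\qquad
  0 \longrightarrow M
    \xrightarrow{\; m \,\mapsto\, \left(\tfrac{m}{1},\,\tfrac{m}{1}\right) \;}
    M_{f} \oplus M_{g}
    \xrightarrow{\; (a,\,b) \,\mapsto\, \tfrac{a}{1} - \tfrac{b}{1} \;}
    M_{fg}
    \longrightarrow 0,
\]
\[
  H^{0}_{I}(M) = \ker\!\bigl(M \to M_{f}\oplus M_{g}\bigr), \qquad
  H^{2}_{I}(M) = \operatorname{coker}\!\bigl(M_{f}\oplus M_{g} \to M_{fg}\bigr),
\]
% with H^1_I(M) the cohomology at the middle term.
```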
Basic properties
Since local cohomology is defined as a derived functor, for any short exact sequence of R-modules , there is, by definition, a natural long exact sequence in local cohomology
There is also a long exact sequence of sheaf cohomology linking the ordinary sheaf cohomology of X and of the open set U = X \Y, with the local cohomology modules. For a quasicoherent sheaf F defined on X, this has the form
In the setting where X is an affine scheme and Y is the vanishing set of an ideal I, the cohomology groups vanish for . If , this leads to an exact sequence
where the middle map is the restriction of sections. The target of this restriction map is also referred to as the ideal transform. For n ≥ 1, there are isomorphisms
Because of the above isomorphism with sheaf cohomology, local cohomology can be used to express a number of meaningful topological constructions on the scheme in purely algebraic terms. For example, there is a natural analogue in local cohomology of the Mayer–Vietoris sequence with respect to a pair of open sets U and V in X, given by the complements of the closed subschemes corresponding to a pair of ideal I and J, respectively. This sequence has the form
for any -module .
The vanishing of local cohomology can be used to bound the least number of equations (referred to as the arithmetic rank) needed to (set theoretically) define the algebraic set in . If has the same radical as , and is generated by elements, then the Čech complex on the generators of has no terms in degree . The least number of generators among all ideals such that is the arithmetic rank of , denoted . Since the local cohomology with respect to may be computed using any such ideal, it follows that for .
Graded local cohomology and projective geometry
When is graded by , is generated by homogeneous elements, and is a graded module, there is a natural grading on the local cohomology module that is compatible with the gradings of and . All of the basic properties of local cohomology expressed in this article are compatible with the graded structure. If is finitely generated and is the ideal generated by the elements of having positive degree, then the graded components are finitely generated over and vanish for sufficiently large .
The case where is the ideal generated by all elements of positive degree (sometimes called the irrelevant ideal) is particularly special, due to its relationship with projective geometry. In this case, there is an isomorphism
where is the projective scheme associated to , and denotes the Serre twist. This isomorphism is graded, giving
in all degrees .
This isomorphism relates local cohomology with the global cohomology of projective schemes. For example, the Castelnuovo–Mumford regularity can be formulated using local cohomology as
where denotes the highest degree such that . Local cohomology can be used to prove certain upper bound results concerning the regularity.
Examples
Top local cohomology
Using the Čech complex, if $I = (f_1, \ldots, f_n)$, the local cohomology module $H^n_I(M)$ is generated over $R$ by the images of the formal fractions
for and . This fraction corresponds to a nonzero element of if and only if there is no such that . For example, if , then
If is a field and is a polynomial ring over in variables, then the local cohomology module may be regarded as a vector space over with basis given by (the Čech cohomology classes of) the inverse monomials for . As an -module, multiplication by lowers by 1, subject to the condition Because the powers cannot be increased by multiplying with elements of , the module is not finitely generated.
Examples of H1
If is known (where ), the module can sometimes be computed explicitly using the sequence
In the following examples, is any field.
If and , then and as a vector space over , the first local cohomology module is , a 1-dimensional vector space generated by .
If and , then and , so is an infinite-dimensional vector space with basis
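A worked special case makes the second example concrete. Assuming it refers to $R = k[x]$ with $I = (x)$ and $M = R$ (an assumption made here only for illustration; the computation is standard for that case), the Čech complex has a single localization and gives:

```latex
% Worked special case (assumed data): R = k[x], I = (x), M = R
\[
  \check{C}^{\bullet}(x;R):\qquad
  0 \longrightarrow k[x] \longrightarrow k[x, x^{-1}] \longrightarrow 0,
\]
\[
  H^{0}_{(x)}\bigl(k[x]\bigr) = \ker\bigl(k[x] \to k[x,x^{-1}]\bigr) = 0, \qquad
  H^{1}_{(x)}\bigl(k[x]\bigr) = k[x,x^{-1}]\big/ k[x] \;\cong\; \bigoplus_{i \ge 1} k\cdot x^{-i},
\]
% an infinite-dimensional k-vector space with basis {x^{-1}, x^{-2}, ...};
% multiplication by x sends x^{-i} to x^{-(i-1)} and annihilates x^{-1}.
```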
Relation to invariants of modules
The dimension dimR(M) of a module (defined as the Krull dimension of its support) provides an upper bound for local cohomology modules:
If R is local and M finitely generated, then this bound is sharp, i.e., .
The depth (defined as the maximal length of a regular M-sequence; also referred to as the grade of M) provides a sharp lower bound, i.e., it is the smallest integer n such that
These two bounds together yield a characterisation of Cohen–Macaulay modules over local rings: they are precisely those modules where vanishes for all but one n.
Local duality
The local duality theorem is a local analogue of Serre duality. For a Cohen-Macaulay local ring of dimension that is a homomorphic image of a Gorenstein local ring (for example, if is complete), it states that the natural pairing
is a perfect pairing, where is a dualizing module for . In terms of the Matlis duality functor , the local duality theorem may be expressed as the following isomorphism.
The statement is simpler when , which is equivalent to the hypothesis that is Gorenstein. This is the case, for example, if is regular.
Applications
The initial applications were to analogues of the Lefschetz hyperplane theorems. In general such theorems state that homology or cohomology is supported on a hyperplane section of an algebraic variety, except for some 'loss' that can be controlled. These results applied to the algebraic fundamental group and to the Picard group.
Another type of application is connectedness theorems such as Grothendieck's connectedness theorem (a local analogue of the Bertini theorem) or the Fulton–Hansen connectedness theorem due to and . The latter asserts that for two projective varieties V and W in Pr over an algebraically closed field, the connectedness dimension of Z = V ∩ W (i.e., the minimal dimension of a closed subset T of Z that has to be removed from Z so that the complement Z \ T is disconnected) is bounded by
c(Z) ≥ dim V + dim W − r − 1.
For example, Z is connected if dim V + dim W > r.
In polyhedral geometry, a key ingredient of Stanley’s 1975 proof of the simplicial form of McMullen’s Upper bound theorem involves showing that the Stanley-Reisner ring of the corresponding simplicial complex is Cohen-Macaulay, and local cohomology is an important tool in this computation, via Hochster’s formula.
See also
Local homology - gives topological analogue and computation of local homology of the cone of a space
Faltings' annihilator theorem
Notes
Introductory Reference
Huneke, Craig; Taylor, Amelia, Lectures on Local Cohomology
References
Book review by Hartshorne
Sheaf theory
Topological methods of algebraic geometry
Cohomology theories
Commutative algebra
Duality theories | Local cohomology | [
"Mathematics"
] | 2,383 | [
"Mathematical structures",
"Fields of abstract algebra",
"Sheaf theory",
"Category theory",
"Duality theories",
"Geometry",
"Topology",
"Commutative algebra"
] |
3,212,386 | https://en.wikipedia.org/wiki/Active%20living | Active living is a lifestyle that integrates physical activity into everyday routines, such as walking to the store or biking to work. Active living is not a formalized exercise program or routine, but instead means to incorporate physical activity, which is defined as any form of movement, into everyday life. Active living brings together urban planners, architects, transportation engineers, public health professionals, activists and other professionals to build places that encourage active living and physical activity. One example includes efforts to build sidewalks, crosswalks, pedestrian crossing signals, and other ways for children to walk safely to and from school, as seen in the Safe Routes to School program. Recreational opportunities (parks, fitness centres etc.) close to the home or workplace, walking trails, and bike lanes for transportation also contribute to a more active lifestyle. Active living includes any physical activity or recreation activity and contributes to a healthier lifestyle. Furthermore, active living addresses health concerns, such as obesity and chronic disease, by helping people have a physically active lifestyle. Communities that support active living gain health benefits, economic advantages, and improved quality of life.
To achieve active living, people need at least 150 minutes of moderate physical activity or 75 minutes of vigorous physical activity every week.
History
Active living is a growing field that emerged from the early work of the Centers for Disease Control and Prevention (CDC) with the release of the Surgeon General's Report on Physical Activity and Health in 1996. In 1997, the CDC began the development of an initiative called Active Community Environments (ACEs), coordinated by Rich Killingsworth (the founding director of Active Living by Design) and Tom Schmid, a senior health scientist. The main programming thrust of ACEs was an emerging initiative called Safe Routes to School, which was catalysed by a program designed by Rich Killingsworth and Jessica Shisler at CDC called KidsWalk-to-School. This program provided much-needed attention to the connections between the built environment and health, especially obesity and physical inactivity. In 2000, the Robert Wood Johnson Foundation formally launched its active living initiative. Led by Karen Gerlach, Marla Hollander, Kate Kraft and Tracy Orleans, this national effort comprised five national programs - Active Living by Design, Active Living Research, Active Living Leadership, Active Living Network and Active for Life. The goals of these programs were multifaceted and included building the research base, establishing best practices and community models, supporting leadership efforts and connecting multi-sectoral professionals. The overarching goal was to develop an understanding of how the built environment impacted physical activity and what could be done to increase physical activity.
Benefits
There are many health related benefits to being physically active and living an active life. Active living can help to reduce the risk of chronic diseases, improve overall health and well-being, reduce stress levels, minimize health related medical costs, help maintain a healthy weight, assist in proper balance and posture and the maintenance of healthy bones and strong muscles. Active living can also improve sleeping patterns and aid in the prevention of risk factors for heart disease such as blood cholesterol levels, diabetes and hypertension.
Running can reduce the level of mortality from many diseases by 27%.
Types of physical activity
There are four types of physical exercises that medical professionals recommend in order to improve and maintain physical abilities: endurance, flexibility, balance, and strength activities.
Endurance activities increase your heart rate and strengthen your heart and lungs. Examples include dancing, skating, climbing stairs, cycling, swimming and brisk walking.
Flexibility activities improve your body's ability to move and assist in keeping your muscles and joints relaxed. Examples include yard work, vacuuming, golf, and stretching - when you wake up, before you exercise and after to prevent injury.
Balance activities reduce the risk of falling and focuses primarily on lower-body strength. Examples include standing up after being seated, Tai Chi, and standing on a single foot.
Strength activities create and maintain muscle, while also keeping bones strong. Examples include raking leaves, carrying groceries, climbing stairs, lifting free weights, and doing push-ups.
Endurance, flexibility, balance, and strength activities can be incorporated into daily routines and promote active living. For example, activities such as household chores and taking the stairs can fit into more than one of the above categories.
Recommendations
In Canada, the Public Health Agency of Canada supported the Canadian Society for Exercise Physiology (CSEP) to review the Canada's Physical Activity Guides, which were updated and replaced with the Get Active Tip Sheets. The Get Active Tip Sheets are broken down into 4 age categories (5–11, 12–17, 18–64, and 65 & older).
The Get Active Tip Sheets recommend that children aged 5–11 and youth aged 12–17 should participate in at least 60 minutes of moderate to vigorous physical activity each day. The recommendation for adults 18–64 and for older adults 65 years and older is at least 2.5 hours of moderate to vigorous physical activity per week. These minutes do not all need to be done at the same time, but the recommendation is a minimum of 10 minutes at a time.
Initiatives
In Canada, there are many active living initiatives currently in place. One of the most well-known programs is the ParticipACTION program, which aims to encourage Canadians to move more and increase their physical activity levels. Their mission statement is "ParticipACTION is the national voice of physical activity and sport participation in Canada. Through leadership in communications, capacity building and knowledge exchange, we inspire and support Canadians to move more." Since the 1970s, ParticipACTION has been motivating Canadians to live actively and participate in sports.
See also
- automobile oriented transportation
Basal metabolic rate - the rate at which the body uses energy while at rest to maintain vital functions such as breathing and keeping warm
- transport on cycle
National Physical Activity Guidelines
Sedentary lifestyle - a lifestyle with a lot of sitting and lying down, with very little to no exercise
Urban vitality - the extent to which a place feels alive or lively
References
Urban planning
Health promotion
Physical exercise
Health and transport | Active living | [
"Engineering"
] | 1,220 | [
"Urban planning",
"Architecture"
] |
3,212,696 | https://en.wikipedia.org/wiki/Comparison%20of%20Intel%20processors | , the x86 architecture is used in most high end compute-intensive computers, including cloud computing, servers, workstations, and many less powerful computers, including personal computer desktops and laptops. The ARM architecture is used in most other product categories, especially high-volume battery powered mobile devices such as smartphones and tablet computers.
Some Xeon Phi processors support four-way hyper-threading, effectively quadrupling the number of threads. Before the Coffee Lake architecture, most Xeon and all desktop and mobile Core i3 and i7 processors supported hyper-threading, while only dual-core mobile i5's supported it. After Coffee Lake, increased core counts meant hyper-threading was no longer needed for Core i3, which took over the old i5's role with four physical cores on the desktop platform. On the desktop platform, Core i7 no longer supports hyper-threading; instead, the higher-performing Core i9 supports hyper-threading on both mobile and desktop platforms. Before 2007 and after Kaby Lake, some Intel Pentium and Intel Atom (e.g. N270, N450) processors support hyper-threading. Celeron processors never supported it.
Intel processors table
See also
Intel Corporation
List of Intel processors
List of Intel Atom processors
List of Intel Itanium processors
List of Intel Celeron processors
List of Intel Pentium processors
List of Intel Pentium Pro processors
List of Intel Pentium II processors
List of Intel Pentium III processors
List of Intel Pentium 4 processors
List of Intel Pentium D processors
List of Intel Pentium M processors
List of Intel Xeon processors
List of Intel Core processors
List of Intel Core 2 processors
List of Intel Core i3 processors
List of Intel Core i5 processors
List of Intel Core i7 processors
List of Intel Core i9 processors
List of Intel CPU microarchitectures
List of AMD processors
List of AMD CPU microarchitectures
Table of AMD processors
List of AMD graphics processing units
List of Intel graphics processing units
List of Nvidia graphics processing units
External links
Intel - Intel Source for Specification of Intel Processor
Comparison Charts for Intel Core Desktop Processor Family
Intel - Microprocessor Quick Reference Guide
References
Intel
Comparison
Intel processors | Comparison of Intel processors | [
"Technology"
] | 456 | [
"Computing comparisons"
] |
3,215,301 | https://en.wikipedia.org/wiki/Surface%20states | Surface states are electronic states found at the surface of materials. They are formed due to the sharp transition from solid material that ends with a surface and are found only at the atom layers closest to the surface. The termination of a material with a surface leads to a change of the electronic band structure from the bulk material to the vacuum. In the weakened potential at the surface, new electronic states can be formed, so called surface states.
Origin at condensed matter interfaces
As stated by Bloch's theorem, eigenstates of the single-electron Schrödinger equation with a perfectly periodic potential, a crystal, are Bloch waves

$$\Psi_{n\mathbf{k}}(\mathbf{r}) = e^{\,i\mathbf{k}\cdot\mathbf{r}}\,u_{n\mathbf{k}}(\mathbf{r}).$$

Here $u_{n\mathbf{k}}(\mathbf{r})$ is a function with the same periodicity as the crystal, $n$ is the band index and $\mathbf{k}$ is the wave number. The allowed wave numbers for a given potential are found by applying the usual Born–von Karman cyclic boundary conditions. The termination of a crystal, i.e. the formation of a surface, obviously causes deviation from perfect periodicity. Consequently, if the cyclic boundary conditions are abandoned in the direction normal to the surface, the behavior of electrons will deviate from the behavior in the bulk and some modifications of the electronic structure have to be expected.
A simplified model of the crystal potential in one dimension can be sketched as shown in Figure 1. In the crystal, the potential has the periodicity, a, of the lattice while close to the surface it has to somehow attain the value of the vacuum level. The step potential (solid line) shown in Figure 1 is an oversimplification which is mostly convenient for simple model calculations. At a real surface the potential is influenced by image charges and the formation of surface dipoles and it rather looks as indicated by the dashed line.
Given the potential in Figure 1, it can be shown that the one-dimensional single-electron Schrödinger equation gives two qualitatively different types of solutions.
The first type of states (see figure 2) extends into the crystal and has Bloch character there. This type of solution corresponds to bulk states which terminate in an exponentially decaying tail reaching into the vacuum.
The second type of states (see figure 3) decays exponentially both into the vacuum and the bulk crystal. This type of solution corresponds to surface states with wave functions localized close to the crystal surface.
The first type of solution can be obtained for both metals and semiconductors. In semiconductors though, the associated eigenenergies have to belong to one of the allowed energy bands. The second type of solution exists in the forbidden energy gap of semiconductors as well as in local gaps of the projected band structure of metals. It can be shown that the energies of these states all lie within the band gap. As a consequence, in the crystal these states are characterized by an imaginary wavenumber leading to an exponential decay into the bulk.
Shockley states and Tamm states
In the discussion of surface states, one generally distinguishes between Shockley states and Tamm states, named after the American physicist William Shockley and the Russian physicist Igor Tamm. There is no strict physical distinction between the two types of states, but the qualitative character and the mathematical approach used in describing them is different.
Historically, surface states that arise as solutions to the Schrödinger equation in the framework of the nearly free electron approximation for clean and ideal surfaces, are called Shockley states. Shockley states are thus states that arise due to the change in the electron potential associated solely with the crystal termination. This approach is suited to describe normal metals and some narrow gap semiconductors. Figure 3 shows an example of a Shockley state, derived using the nearly free electron approximation. Within the crystal, Shockley states resemble exponentially-decaying Bloch waves.
Surface states that are calculated in the framework of a tight-binding model are often called Tamm states. In the tight binding approach, the electronic wave functions are usually expressed as linear combinations of atomic orbitals (LCAO). In contrast to the nearly free electron model used to describe the Shockley states, the Tamm states are suitable to describe also transition metals and wide gap semiconductors. Qualitatively, Tamm states resemble localized atomic or molecular orbitals at the surface.
Topological surface states
All materials can be classified by a single number, a topological invariant; this is constructed out of the bulk electronic wave functions, which are integrated over the Brillouin zone, in a similar way that the genus is calculated in geometric topology. In certain materials the topological invariant can be changed when certain bulk energy bands invert due to strong spin-orbital coupling. At the interface between an insulator with non-trivial topology, a so-called topological insulator, and one with a trivial topology, the interface must become metallic. Moreover, the surface state must have linear Dirac-like dispersion with a crossing point which is protected by time reversal symmetry. Such a state is predicted to be robust under disorder, and therefore cannot be easily localized.
Shockley states
Surface states in metals
A simple model for the derivation of the basic properties of states at a metal surface is a semi-infinite periodic chain of identical atoms. In this model, the termination of the chain represents the surface, where the potential attains the value V0 of the vacuum in the form of a step function, figure 1. Within the crystal the potential is assumed periodic with the periodicity a of the lattice.
The Shockley states are then found as solutions to the one-dimensional single electron Schrödinger equation
with the periodic potential
where l is an integer, and P is the normalization factor.
The solution must be obtained independently for the two domains z<0 and z>0, where at the domain boundary (z=0) the usual conditions on continuity of the wave function and its derivatives are applied. Since the potential is periodic deep inside the crystal, the electronic wave functions must be Bloch waves here. The solution in the crystal is then a linear combination of an incoming wave and a wave reflected from the surface. For z>0 the solution will be required to decrease exponentially into the vacuum
The wave function for a state at a metal surface is qualitatively shown in figure 2. It is an extended Bloch wave within the crystal with an exponentially decaying tail outside the surface. The consequence of the tail is a deficiency of negative charge density just inside the crystal and an increased negative charge density just outside the surface, leading to the formation of a dipole double layer. The dipole perturbs the potential at the surface leading, for example, to a change of the metal work function.
Surface states in semiconductors
The nearly free electron approximation can be used to derive the basic properties of surface states for narrow gap semiconductors. The semi-infinite linear chain model is also useful in this case. However, now the potential along the atomic chain is assumed to vary as a cosine function
whereas at the surface the potential is modeled as a step function of height V0.
The solutions to the Schrödinger equation must be obtained separately for the two domains z < 0 and z > 0. In the sense of the nearly free electron approximation, the solutions obtained for z < 0 will have plane wave character for wave vectors away from the Brillouin zone boundary , where the dispersion relation will be parabolic, as shown in figure 4.
At the Brillouin zone boundaries, Bragg reflection occurs resulting in a standing wave consisting of a wave with wave vector and wave vector .
Here is a lattice vector of the reciprocal lattice (see figure 4).
Since the solutions of interest are close to the Brillouin zone boundary, we set , where κ is a small quantity. The arbitrary constants A,B are found by substitution into the Schrödinger equation. This leads to the following eigenvalues
demonstrating the band splitting at the edges of the Brillouin zone, where the width of the forbidden gap is given by 2V. The electronic wave functions deep inside the crystal, attributed to the different bands are given by
where $C$ is a normalization constant.
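The band splitting described above comes from diagonalizing the two-by-two problem that mixes the plane waves $k$ and $k - G$. The following Python sketch reproduces that standard two-plane-wave (nearly free electron) result; the lattice constant and potential strength are illustrative values, and the gap of $2V$ at the zone boundary can be read off directly.

```python
import math

def nfe_bands(kappa, a, V, hbar=1.054571817e-34, m=9.1093837015e-31):
    """Two-plane-wave (nearly free electron) eigenvalues near the zone boundary k = pi/a.

    kappa : deviation of the wave number from pi/a (1/m)
    a     : lattice period (m)
    V     : Fourier component of the periodic potential (J)
    Returns the lower and upper band energies (E_minus, E_plus) in joules.
    """
    k1 = math.pi / a + kappa           # plane wave k
    k2 = math.pi / a - kappa           # Bragg-coupled plane wave k - G, with G = 2*pi/a
    e1 = (hbar * k1) ** 2 / (2 * m)
    e2 = (hbar * k2) ** 2 / (2 * m)
    mean = 0.5 * (e1 + e2)
    split = math.sqrt((0.5 * (e1 - e2)) ** 2 + V ** 2)
    return mean - split, mean + split

# At the zone boundary (kappa = 0) the two branches differ by exactly 2*V
eV = 1.602176634e-19
lo, hi = nfe_bands(0.0, a=5e-10, V=1.0 * eV)   # illustrative a and V
print((hi - lo) / eV)                           # -> 2.0
```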
Near the surface at z = 0,
the bulk solution has to be fitted to an exponentially decaying solution, which is compatible with the constant potential V0.
It can be shown that the matching conditions can be fulfilled for every possible energy eigenvalue which lies in the allowed band. As in the case for metals, this type of solution represents standing Bloch waves extending into the crystal which spill over into the vacuum at the surface. A qualitative plot of the wave function is shown in figure 2.
If imaginary values of κ are considered, i.e. κ = - i·q for z ≤ 0 and one defines
one obtains solutions with a decaying amplitude into the crystal
The energy eigenvalues are given by
E is real for large negative z, as required. Also in the range all energies of the surface states fall into the forbidden gap. The complete solution is again found by matching the bulk solution to the exponentially decaying vacuum solution. The result is a state localized at the surface decaying both into the crystal and the vacuum. A qualitative plot is shown in figure 3.
Surface states of a three-dimensional crystal
The results for surface states of a monatomic linear chain can readily be generalized to the case of a three-dimensional crystal. Because of the two-dimensional periodicity of the surface lattice, Bloch's theorem must hold for translations parallel to the surface. As a result, the surface states can be written as the product of a Bloch waves with k-values parallel to the surface and a function representing a one-dimensional surface state
The energy of this state is increased by a term so that we have
where m* is the effective mass of the electron. The matching conditions at the crystal surface, i.e. at z=0, have to be satisfied for each separately and for each a single, but generally different energy level for the surface state is obtained.
True surface states and surface resonances
A surface state is described by its energy and its wave vector parallel to the surface, while a bulk state is characterized by both the parallel and the perpendicular wave numbers. In the two-dimensional Brillouin zone of the surface, a rod of perpendicular wave numbers therefore extends into the three-dimensional Brillouin zone of the bulk for each value of the parallel wave vector. Bulk energy bands that are being cut by these rods allow states that penetrate deep into the crystal. One therefore generally distinguishes between true surface states and surface resonances. True surface states are characterized by energy bands that are not degenerate with bulk energy bands. These states exist in the forbidden energy gap only and are therefore localized at the surface, similar to the picture given in figure 3. At energies where a surface and a bulk state are degenerate, the surface and the bulk state can mix, forming a surface resonance. Such a state can propagate deep into the bulk, similar to Bloch waves, while retaining an enhanced amplitude close to the surface.
Tamm states
Surface states that are calculated in the framework of a tight-binding model are often called Tamm states. In the tight binding approach, the electronic wave functions are usually expressed as a linear combination of atomic orbitals (LCAO), see figure 5. In this picture, it is easy to comprehend that the existence of a surface will give rise to surface states with energies different from the energies of the bulk states: Since the atoms residing in the topmost surface layer are missing their bonding partners on one side, their orbitals have less overlap with the orbitals of neighboring atoms. The splitting and shifting of energy levels of the atoms forming the crystal is therefore smaller at the surface than in the bulk.
If a particular orbital is responsible for the chemical bonding, e.g. the sp3 hybrid in Si or Ge, it is strongly affected by the presence of the surface, bonds are broken, and the remaining lobes of the orbital stick out from the surface. They are called dangling bonds. The energy levels of such states are expected to significantly shift from the bulk values.
In contrast to the nearly free electron model used to describe the Shockley states, the Tamm states are suitable to describe also transition metals and wide-bandgap semiconductors.
Extrinsic surface states
Surface states originating from clean and well ordered surfaces are usually called intrinsic. These states include states originating from reconstructed surfaces, where the two-dimensional translational symmetry gives rise to the band structure in the k space of the surface.
Extrinsic surface states are usually defined as states not originating from a clean and well ordered surface. Surfaces that fit into the category extrinsic are:
Surfaces with defects, where the translational symmetry of the surface is broken.
Surfaces with adsorbates
Interfaces between two materials, such as a semiconductor-oxide or semiconductor-metal interface
Interfaces between solid and liquid phases.
Generally, extrinsic surface states cannot easily be characterized in terms of their chemical, physical or structural properties.
Experimental observation
Angle resolved photoemission spectroscopy
An experimental technique to measure the dispersion of surface states is angle resolved photoemission spectroscopy (ARPES) or angle resolved ultraviolet photoelectron spectroscopy (ARUPS).
Scanning tunneling microscopy
The surface state dispersion can be measured using a scanning tunneling microscope; in these experiments, periodic modulations in the surface state density, which arise from scattering off of surface impurities or step edges, are measured by an STM tip at a given bias voltage. The wavevector versus bias (energy) of the surface state electrons can be fit to a free-electron model with effective mass and surface state onset energy.
A recent new theory
A naturally simple but fundamental question is how many surface states are in a band gap in a one-dimensional crystal of length ( is the potential period, and is a positive integer)? A well-accepted concept, first proposed by Fowler in 1933 and then written in Seitz's classic book, is that "in a finite one-dimensional crystal the surface states occur in pairs, one state being associated with each end of the crystal." Such a concept seemingly was never doubted for nearly a century, as shown, for example, in.
However, a recent investigation gives an entirely different answer.
The investigation tries to understand electronic states in ideal crystals of finite size based on the mathematical theory of periodic differential equations. This theory provides some fundamental new understandings of those electronic states, including surface states.
The theory found that a one-dimensional finite crystal with two ends always has, for each band gap, one and only one state whose energy and properties depend on the location of the ends relative to the periodic potential but not on the crystal length. This state is either a band-edge state or a surface state in the band gap (see Particle in a one-dimensional lattice, Particle in a box).
Numerical calculations have confirmed such findings.
Further, these behaviors have been seen in different one-dimensional systems, such as in.
Therefore:
The fundamental property of a surface state is that its existence and properties depend on the location of the periodicity truncation.
Truncation of the lattice's periodic potential may or may not lead to a surface state in a band gap.
An ideal one-dimensional crystal of finite length with two ends can have, at most, only one surface state at one end in each band gap.
Further investigations extended to multi-dimensional cases found that
An ideal simple three-dimensional finite crystal may have vertex-like, edge-like, surface-like, and bulk-like states.
The statement that a surface state is always in a band gap is only valid for one-dimensional cases.
References
Materials science
Electronic band structures
Semiconductor structures | Surface states | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,184 | [
"Electron",
"Applied and interdisciplinary physics",
"Materials science",
"Electronic band structures",
"Condensed matter physics",
"nan"
] |
3,216,387 | https://en.wikipedia.org/wiki/Hausdorff%20moment%20problem | In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence be the sequence of moments
of some Borel measure supported on the closed unit interval . In the case , this is equivalent to the existence of a random variable supported on , such that .
The essential difference between this and other well-known moment problems is that this is on a bounded interval, whereas in the Stieltjes moment problem one considers a half-line $[0, \infty)$, and in the Hamburger moment problem one considers the whole line $(-\infty, \infty)$. The Stieltjes moment problems and the Hamburger moment problems, if they are solvable, may have infinitely many solutions (indeterminate moment problem), whereas a Hausdorff moment problem always has a unique solution if it is solvable (determinate moment problem). In the indeterminate moment problem case, there are infinitely many measures corresponding to the same prescribed moments, and they form a convex set. The set of polynomials may or may not be dense in the associated Hilbert spaces if the moment problem is indeterminate, and it depends on whether the measure is extremal or not. But in the determinate moment problem case, the set of polynomials is dense in the associated Hilbert space.
Completely monotonic sequences
In 1921, Hausdorff showed that $(m_0, m_1, m_2, \ldots)$ is such a moment sequence if and only if the sequence is completely monotonic, that is, its difference sequences satisfy the equation

$$(-1)^k (\Delta^k m)_n \geq 0$$

for all $n, k \geq 0$. Here, $\Delta$ is the difference operator given by

$$(\Delta m)_n = m_{n+1} - m_n.$$

The necessity of this condition is easily seen by the identity

$$(-1)^k (\Delta^k m)_n = \int_0^1 x^n (1-x)^k\,d\mu(x),$$
which is non-negative since it is the integral of a non-negative function. For example, it is necessary to have

$$(\Delta^2 m)_4 = m_4 - 2m_5 + m_6 = \int_0^1 x^4 (1-x)^2\,d\mu(x) \geq 0.$$
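In practice, the condition can be checked numerically on a truncated moment sequence. The following Python sketch (the helper name and the test sequences are chosen here purely for illustration) expands the differences with binomial coefficients; only finitely many differences can be tested, so this is a necessary-condition check rather than a full characterisation.

```python
import math
from fractions import Fraction

def is_completely_monotonic(m, k_max=None):
    """Check (-1)^k (Delta^k m)_n >= 0 for all differences that fit in the list m."""
    n_terms = len(m)
    k_max = n_terms - 1 if k_max is None else k_max
    for k in range(k_max + 1):
        for n in range(n_terms - k):
            # (-1)^k (Delta^k m)_n = sum_j (-1)^j C(k, j) m_{n+j}
            value = sum((-1) ** j * math.comb(k, j) * m[n + j] for j in range(k + 1))
            if value < 0:
                return False
    return True

# Moments of Lebesgue measure on [0, 1]: m_n = 1/(n+1), which is completely monotonic
lebesgue = [Fraction(1, n + 1) for n in range(10)]
print(is_completely_monotonic(lebesgue))                            # True

# A decreasing sequence that is nevertheless not completely monotonic
print(is_completely_monotonic([Fraction(1), Fraction(9, 10),
                               Fraction(1, 10), Fraction(1, 20)]))  # False
```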
See also
Absolutely and completely monotonic functions and sequences
Total monotonicity
References
Hausdorff, F. "Summationsmethoden und Momentfolgen. I." Mathematische Zeitschrift 9, 74–109, 1921.
Hausdorff, F. "Summationsmethoden und Momentfolgen. II." Mathematische Zeitschrift 9, 280–299, 1921.
Feller, W. "An Introduction to Probability Theory and Its Applications", volume II, John Wiley & Sons, 1971.
Shohat, J.A.; Tamarkin, J. D. The Problem of Moments, American mathematical society, New York, 1943.
Probability problems
Moment (mathematics)
Mathematical problems | Hausdorff moment problem | [
"Physics",
"Mathematics"
] | 496 | [
"Mathematical analysis",
"Moments (mathematics)",
"Physical quantities",
"Probability problems",
"Mathematical problems",
"Moment (physics)"
] |
16,627,889 | https://en.wikipedia.org/wiki/Cadwork | cadwork is a software suite that includes IFC-based virtual design and construction software tools developed by cadwork informatik AG. This suite of tools provides a solution for 3D wood manufacturing (Computer-aided manufacturing, CAM) and a solution for Building Information Modeling that includes project planning and control functions of 3D quantity takeoff, 4D scheduling, 5D pricing, and 6D execution.
The primary application is in light-frame wood and heavy timber construction: the architectural design, structural engineering, and construction carpentry phases are supported, with novel features for glue-laminated timber and stairs.
With a commitment to open-source ideals, cadwork viewers are free, providing an environment for collaboration across project stakeholders through model navigation features such as zoom, pan, and print; the freeware version does not allow modifying a file. Three file types are used, reflecting the level of sophistication of the model:
.2d, .2dc (2-dimensional drawings)
.3d, .3dc (3-dimensional drawings)
.2dv (parametric elements)
Following an international shift towards open-source software standards, cadwork is IFC compatible (Industry Foundation Classes) with several certifications, such as IFC 2x3 (Import ISO/PAS).
See also
Virtual design and construction
Industry Foundation Classes (IFC)
Open Design Alliance (OpenDWG)
3D ACIS Modeler (ACIS)
Construction management
Construction engineering
References
External links
Data modeling
Building information modeling
Construction
Civil engineering
Building engineering software
Construction management
Computer-aided design
Computer-aided design software
3D graphics software
Computer-aided design software for Windows | Cadwork | [
"Engineering"
] | 324 | [
"Building engineering software",
"Computer-aided design",
"Design engineering",
"Building engineering",
"Construction",
"Data modeling",
"Data engineering",
"Civil engineering",
"Building information modeling",
"Construction management"
] |
16,628,847 | https://en.wikipedia.org/wiki/SCO%20Skunkware | SCO Skunkware, often referred to as simply "Skunkware", is a collection of open-source software projects ported, compiled, and packaged for free redistribution on Santa Cruz Operation (SCO) operating environments. SCO Skunkware packaged components exist for SCO Xenix, SCO UNIX, OpenServer 5–6, UnixWare 2 and 7, Caldera OpenLinux, and Open UNIX 8. SCO Skunkware was an early pioneering effort to bring open source software into the realm of business computing and, as such, provided an important initial impetus to the acceptance and adoption of open source software in the small and medium-sized business market. An extensive SCO Skunkware download area has been maintained since 1993 and SCO Skunkware components were shipped with operating system distributions as far back as 1983, when Xenix for the IBM XT was released by The Santa Cruz Operation. The annual SCO Forum conference was a venue for the makers and users of SCO Skunkware to meet and discuss its contents and ideas for future additions.
Later additional open source distributions for operating platforms such as the FreeBSD Ports collection and the Solaris Freeware repository would lend added momentum to the adoption of open source in the business community.
Release history
SCO Skunkware has been released often on CD-ROM and as a downloadable CD ISO image. Individual packages are distributed via FTP. The Skunkware CD release history is:
1983 – First SCO Xenix Games Diskette
1993 – Skunkware (SCO UNIX 3.2)
1994 – Skunkware 2.0 (OpenDesktop)
1995 – Skunkware 5 (OpenServer 5)
1996 – Skunkware 96 (OpenServer 5)
1997 – Skunkware 97 (OSR5 + UW2)
1998 – Skunkware 7 (UnixWare 7)
1998 – Skunkware 98 (OpenServer 5)
1999 – Skunkware 7.1 (UnixWare 7)
1999 – Skunkware 99 (OpenServer 5 and UnixWare 7)
2000 – Skunkware 2000 (OpenServer 5)
2000 – Skunkware 7.1.1 (UnixWare 7)
2001 – Skunkware 8.0.0 (Open UNIX 8)
2001 – SOSS 3.1 (OpenLinux 3.1)
2002 – Skunkware 8.0.1 (Open UNIX 8)
2002 – SOSS 3.1.1 (OpenLinux 3.1.1)
2006 – Skunkware 2006 (OpenServer 6)
Licensing
SCO Skunkware components are licensed under a variety of terms. Most components are licensed under an Open Source Initiative (OSI) approved open-source license. Many are licensed under the terms of either the GNU General Public License or the GNU Library General Public License.
Licenses used by SCO Skunkware components include or are similar to:
GNU General Public License
GNU Library General Public License
Artistic License
Mozilla Public License
Netscape Public License
The Open Group Public License
The AST Open Source License
X Consortium License
Berkeley Based Licenses
A few of the components are "freeware" with no restrictions on their redistribution. Some components may restrict their use to non-commercial purposes or require a license fee for commercial use (e.g. MBROLA). Some components may be redistributed with special permission from the author(s) as is the case with KISDN.
Packaging formats
SCO Skunkware packages are typically distributed in the native packaging format of the operating system release for which they are intended. Package management systems used by SCO Skunkware include the following:
Old SCO Custom installable floppy images (SCO Xenix & UNIX 3.2v4)
New Custom SSO architecture media images (SCO OpenServer 5 and 6)
SysV pkgadd datastreams (UnixWare 2, UnixWare 7, Open UNIX 8)
RPM (OpenLinux 3, UnixWare 7, OpenServer 5 & 6)
Compressed tar and cpio archives (all platforms)
See also
Open-source software
List of free and open-source software packages
The Cathedral and the Bazaar
Notes
References
SCO Skunkware website
SCO Skunkware SCO Forum 1998
Open Source and SCO SCO Forum 2000
Open Source BOF SCO Forum 2002
Open Source Components in SCO OpenServer and SCO UnixWare SCO Forum 2004
Open Source and SCO SCO Forum 2005
Open Source at SCO SCO Forum 2006
SCO, Skunkware, and the Open Source Movement SCO World magazine February 1, 1999
Porting Open Source Software to SCO SCO World magazine November 1, 1999
About SCO World magazine at archive.org
External links
SCO Skunkware FTP download area
Computing platforms
Free software distributions
Unix software | SCO Skunkware | [
"Technology"
] | 997 | [
"Computing platforms"
] |
16,637,535 | https://en.wikipedia.org/wiki/Size%20function | Size functions are shape descriptors, in a geometrical/topological sense. They are functions from the half-plane to the natural numbers, counting certain connected components of a topological space. They are used in pattern recognition and topology.
Formal definition
In size theory, the size function associated with the size pair is defined in the following way. For every , is equal to the number of connected components of the set
that contain at least one point at which the measuring function (a continuous function from a topological space to
) takes a value smaller than or equal to
.
The concept of size function can be easily extended to the case of a measuring function , where is endowed with the usual partial order
.
A survey about size functions (and size theory) can be found in.
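As an informal computational sketch, the definition can be evaluated on a finite graph standing in for the topological space, with the measuring function given on the vertices. The networkx library is assumed available, and the graph and values below are illustrative.

```python
import networkx as nx

def size_function(graph, phi, x, y):
    """Size function of the size pair (graph, phi) evaluated at (x, y).

    Counts the connected components of the subgraph {phi <= y} that contain
    at least one vertex with phi <= x.
    """
    sub_y = graph.subgraph([v for v in graph if phi[v] <= y])
    count = 0
    for component in nx.connected_components(sub_y):
        if any(phi[v] <= x for v in component):
            count += 1
    return count

# Tiny example: a path graph 0-1-2-3 with measuring values 0, 2, 1, 3
G = nx.path_graph(4)
phi = {0: 0.0, 1: 2.0, 2: 1.0, 3: 3.0}
print(size_function(G, phi, x=1.0, y=1.5))   # vertices {0, 2} survive as 2 components -> 2
print(size_function(G, phi, x=1.0, y=2.5))   # vertices {0, 1, 2} form a single component -> 1
```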
History and applications
Size functions were introduced in for the particular case of equal to the topological space of all piecewise closed paths in a closed manifold embedded in a Euclidean space. Here the topology on is induced by the -norm, while the measuring function takes each path to its length.
In the case of equal to the topological space of all ordered -tuples of points in a submanifold of a Euclidean space is considered. Here the topology on is induced by the metric .
An extension of the concept of size function to algebraic topology was made in , where the concept of size homotopy group was introduced. Here measuring functions taking values in are allowed.
An extension to homology theory (the size functor) was introduced in .
The concepts of size homotopy group and size functor are strictly related to the concept of persistent homology group studied in persistent homology. It is worth pointing out that the size function is the rank of the 0-th persistent homology group, while the relation between the persistent homology group and the size homotopy group is analogous to the one existing between homology groups and homotopy groups.
Size functions have been initially introduced as a mathematical tool for shape comparison in computer vision and pattern recognition, and have constituted the seed of size theory.
The main point is that size functions are invariant for every transformation preserving the measuring function. Hence, they can be adapted to many different applications, by simply changing the measuring function in order to get the wanted invariance. Moreover, size functions show properties of relative resistance to noise, depending on the fact that they distribute the information all over the half-plane .
Main properties
Assume that is a compact locally connected Hausdorff space. The following statements hold:
every size function is a non-decreasing function in the variable and a non-increasing function in the variable .
every size function is locally right-constant in both its variables.
for every , is finite.
for every and every , .
for every and every , equals the number of connected components of on which the minimum value of is smaller than or equal to .
If we also assume that is a smooth closed manifold and is a -function, the following useful property holds:
in order that is a discontinuity point for it is necessary that either or or both are critical values for .
A strong link exists between the concept of size function and the concept of natural pseudodistance between the size pairs.
if then .
The previous result gives an easy way to get lower bounds for the natural pseudodistance and is one of the main motivation to introduce the concept of size function.
Representation by formal series
An algebraic representation of size functions in terms of collections of points and lines in the real plane with multiplicities, i.e. as particular formal series, was furnished in .
The points (called cornerpoints) and lines (called cornerlines) of such formal series encode the information about discontinuities of the corresponding size functions, while their multiplicities contain the information about the values taken by the size function.
Formally:
cornerpoints are defined as those points , with , such that the number
is positive. The number is said to be the multiplicity of .
cornerlines and are defined as those lines such that
The number is said to be the multiplicity of .
Representation Theorem: For every , it holds
.
This representation contains the same amount of information about the shape under study as the original size function does, but is much more concise.
This algebraic approach to size functions leads to the definition of new similarity measures between shapes, by translating the problem of comparing size functions into the problem of comparing formal series. The most studied among these metrics between size functions is the matching distance.
References
See also
Size theory
Natural pseudodistance
Size functor
Size homotopy group
Size pair
Matching distance
Topological data analysis
Topology
Algebraic topology | Size function | [
"Physics",
"Mathematics"
] | 934 | [
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
11,282,061 | https://en.wikipedia.org/wiki/Uranium-236 | Uranium-236 ( or U-236) is an isotope of uranium that is neither fissile with thermal neutrons, nor very good fertile material, but is generally considered a nuisance and long-lived radioactive waste. It is found in spent nuclear fuel and in the reprocessed uranium made from spent nuclear fuel.
Creation and yield
The fissile isotope uranium-235 fuels most nuclear reactors. When 235U absorbs a thermal neutron, one of two processes can occur. About 85.5% of the time, it will fission; about 14.5% of the time, it will not fission, instead emitting gamma radiation and yielding 236U. Thus, the yield of 236U per 235U+n reaction is about 14.5%, and the yield of fission products is about 85.5%. In comparison, the yields of the most abundant individual fission products like caesium-137, strontium-90, and technetium-99 are between 6% and 7%, and the combined yield of medium-lived (10 years and up) and long-lived fission products is about 32%, or a few percent less as some are transmutated by neutron capture. Caesium-135 is the most notable "absent fission product", as it is found far more in nuclear fallout than in spent nuclear fuel since its parent nuclide xenon-135 is the strongest known neutron poison.
The second-most used fissile isotope plutonium-239 can also fission or not fission on absorbing a thermal neutron. The product plutonium-240 makes up a large proportion of reactor-grade plutonium (plutonium recycled from spent fuel that was originally made with enriched natural uranium and then used once in an LWR). 240Pu decays with a half-life of 6561 years into 236U. In a closed nuclear fuel cycle, most 240Pu will be fissioned (possibly after more than one neutron capture) before it decays, but 240Pu discarded as nuclear waste will decay over thousands of years. As 240Pu has a shorter half-life than 239Pu, the grade of any sample of plutonium mostly composed of those two isotopes will slowly increase, while the total amount of plutonium in the sample will slowly decrease over centuries and millennia. Alpha decay of 240Pu produces uranium-236, while 239Pu decays to uranium-235.
While the largest part of uranium-236 has been produced by neutron capture in nuclear power reactors, it is for the most part stored in nuclear reactors and waste repositories. The most significant contribution to uranium-236 abundance in the environment is the 238U(n,3n)236U reaction by fast neutrons in thermonuclear weapons. The A-bomb testing of the 1940s, 1950s, and 1960s has raised the environmental abundance levels significantly above the expected natural levels.
Destruction and decay
236U, on absorption of a thermal neutron, does not undergo fission, but becomes 237U, which quickly undergoes beta decay to 237Np. However, the neutron capture cross section of 236U is low, and this process does not happen quickly in a thermal reactor. Spent nuclear fuel typically contains about 0.4% 236U. With a much greater cross-section, 237Np may eventually absorb another neutron and become 238Np, which quickly beta decays to plutonium-238 (another non-fissile isotope).
236U and most other actinide isotopes are fissionable by fast neutrons in a nuclear bomb or a fast neutron reactor. A small number of fast reactors have been in research use for decades, but widespread use for power production is still in the future.
Uranium-236 alpha decays with a half-life of 23.420 million years to thorium-232. It is longer-lived than any other artificial actinides or fission products produced in the nuclear fuel cycle. (Plutonium-244, which has a half-life of 80 million years, is not produced in significant quantity by the nuclear fuel cycle, and the longer-lived uranium-235, uranium-238, and thorium-232 occur in nature.)
Difficulty of separation
Unlike plutonium, minor actinides, fission products, or activation products, chemical processes cannot separate 236U from 238U, 235U, 232U or other uranium isotopes. It is even difficult to remove with isotopic separation, as low enrichment will concentrate not only the desirable 235U and 233U but the undesirable 236U, 234U and 232U. On the other hand, 236U in the environment cannot separate from 238U and concentrate separately, which limits its radiation hazard in any one place.
Contribution to radioactivity of reprocessed uranium
The half-life of 238U is about 190 times as long as that of 236U; therefore, 236U should have about 190 times as much specific activity. That is, in reprocessed uranium with 0.5% 236U, the 236U and 238U will produce about the same level of radioactivity. (235U contributes only a few percent.)
The ratio is less than 190 when the decay products of each are included. The decay chain of uranium-238 to uranium-234 and eventually lead-206 involves emission of eight alpha particles in a time (hundreds of thousands of years) short compared to the half-life of 238U, so that a sample of 238U in equilibrium with its decay products (as in natural uranium ore) will have eight times the alpha activity of 238U alone. Even purified natural uranium where the post-uranium decay products have been removed will contain an equilibrium quantity of 234U and therefore about twice the alpha activity of pure 238U. Enrichment to increase 235U content will increase 234U to an even greater degree, and roughly half of this 234U will survive in the spent fuel. On the other hand, 236U decays to thorium-232 which has a half-life of 14 billion years, equivalent to a decay rate only 31.4% as great as that of 238U.
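The activity comparison above can be checked with a short back-of-the-envelope calculation. The half-life figures below are rounded literature values assumed for this sketch; the result is the per-atom activity ratio, which is what the "about 190 times" statement refers to.

```python
# Rough check of the specific-activity comparison for reprocessed uranium.
# Half-lives (in years) are rounded, assumed values for illustration.
HALF_LIFE_YEARS = {
    "U-238": 4.468e9,
    "U-236": 2.342e7,
}

def activity_ratio(isotope, reference="U-238"):
    """Decay-constant (per-atom activity) ratio of `isotope` to `reference`."""
    return HALF_LIFE_YEARS[reference] / HALF_LIFE_YEARS[isotope]

ratio = activity_ratio("U-236")
print(f"U-236 is about {ratio:.0f} times as active per atom as U-238")            # ~190
print(f"0.5% U-236 gives about {0.005 * ratio:.2f} times the activity of U-238")  # ~0.95
```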
Depleted uranium
Depleted uranium used in kinetic energy penetrators, etc. is supposed to be made from uranium enrichment tailings that have never been irradiated in a nuclear reactor, not reprocessed uranium. However, there have been claims that some depleted uranium has contained small amounts of 236U.
See also
Depleted uranium
Uranium market
Nuclear reprocessing
United States Enrichment Corporation
Nuclear fuel cycle
Nuclear power
References
External links
Uranium | Radiation Protection Program | US EPA
NLM Hazardous Substances Databank - Uranium, Radioactive
Actinides
Isotopes of uranium
Nuclear materials | Uranium-236 | [
"Physics",
"Chemistry"
] | 1,352 | [
"Isotopes",
"Materials",
"Nuclear materials",
"Isotopes of uranium",
"Matter"
] |
11,284,852 | https://en.wikipedia.org/wiki/Combined%20rapid%20anterior%20pituitary%20evaluation%20panel |
A combined rapid anterior pituitary evaluation panel or triple bolus test or a dynamic pituitary function test is a medical diagnostic procedure used to assess a patient's pituitary function.
A triple bolus test is usually ordered and interpreted by endocrinologists.
In rare cases, it has been associated with pituitary apoplexy.
Process
Three hormones (usually synthetic analogues) are injected as a bolus into the patient's vein to stimulate the anterior pituitary gland:
insulin
gonadotropin-releasing hormone (GnRH)
thyrotropin-releasing hormone (TRH)
The gland's response is assessed by measuring the rise in cortisol and growth hormone (GH) in response to the hypoglycaemia caused by insulin, rises in prolactin and thyroid-stimulating hormone (TSH) caused by TRH, and rises in luteinizing hormone (LH) and follicle-stimulating hormone (FSH) caused by GnRH. Blood glucose levels are also monitored to ensure appropriate levels of hypoglycaemia are achieved.
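The protocol just described can be summarised as a simple mapping from each injected agent to the stimulus it provides and the hormone responses sampled. This is an illustrative data structure only, not a clinical reference; the names are taken from the text above.

```python
# Illustrative summary of the triple bolus (combined anterior pituitary) test.
TRIPLE_BOLUS_PANEL = {
    "insulin": {
        "stimulus": "hypoglycaemia",
        "responses_measured": ["cortisol", "growth hormone (GH)"],
    },
    "TRH": {
        "stimulus": "thyrotroph and lactotroph stimulation",
        "responses_measured": ["TSH", "prolactin"],
    },
    "GnRH": {
        "stimulus": "gonadotroph stimulation",
        "responses_measured": ["LH", "FSH"],
    },
}

# Blood glucose is sampled alongside to confirm adequate hypoglycaemia was reached.
for agent, details in TRIPLE_BOLUS_PANEL.items():
    print(f"{agent}: measures {', '.join(details['responses_measured'])}")
```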
History
The triple bolus test was introduced in 1973 by physicians from the London Royal Postgraduate Medical School and Queen Elizabeth Hospital, Birmingham. It followed earlier reports combining insulin and vasopressin analogues in the diagnosis of hypopituitarism.
See also
Insulin tolerance test
ACTH stimulation test
Hypopituitarism
Triple test
References
Blood tests
Dynamic endocrine function tests | Combined rapid anterior pituitary evaluation panel | [
"Chemistry"
] | 309 | [
"Blood tests",
"Chemical pathology"
] |
11,288,646 | https://en.wikipedia.org/wiki/Purine%20metabolism | Purine metabolism refers to the metabolic pathways to synthesize and break down purines that are present in many organisms.
Biosynthesis
Purines are biologically synthesized as nucleotides and in particular as ribotides, i.e. bases attached to ribose 5-phosphate. Both adenine and guanine are derived from the nucleotide inosine monophosphate (IMP), which is the first compound in the pathway to have a completely formed purine ring system.
IMP
Inosine monophosphate is synthesized on a pre-existing ribose-phosphate through a complex pathway (as shown in the figure on the right). The carbon and nitrogen atoms of the purine ring (5 and 4, respectively) come from multiple sources. The amino acid glycine contributes all its carbon (2) and nitrogen (1) atoms, with additional nitrogen atoms from glutamine (2) and aspartic acid (1), and additional carbon atoms from formyl groups (2), which are transferred from the coenzyme tetrahydrofolate as 10-formyltetrahydrofolate, and a carbon atom from bicarbonate (1). Formyl groups build carbon-2 and carbon-8 in the purine ring system, which are the ones acting as bridges between two nitrogen atoms.
A key regulatory step is the production of 5-phospho-α-D-ribosyl 1-pyrophosphate (PRPP) by ribose-phosphate diphosphokinase, which is activated by inorganic phosphate and inactivated by purine ribonucleotides. It is not the committed step to purine synthesis because PRPP is also used in pyrimidine synthesis and salvage pathways.
The first committed step is the reaction of PRPP, glutamine and water to 5'-phosphoribosylamine (PRA), glutamate, and pyrophosphate - catalyzed by amidophosphoribosyltransferase, which is activated by PRPP and inhibited by AMP, GMP and IMP.
PRPP + L-Glutamine + H2O → PRA + L-Glutamate + PPi
In the second step, PRA, glycine and ATP react to create GAR, ADP, and phosphate, catalyzed by phosphoribosylamine-glycine ligase (GAR synthetase). Due to the chemical lability of PRA, which has a half-life of 38 seconds at pH 7.5 and 37 °C, researchers have suggested that the compound is channeled from amidophosphoribosyltransferase to GAR synthetase in vivo.
PRA + Glycine + ATP → GAR + ADP + Pi
The third is catalyzed by phosphoribosylglycinamide formyltransferase.
GAR + fTHF → fGAR + THF
The fourth is catalyzed by phosphoribosylformylglycinamidine synthase.
fGAR + L-Glutamine + ATP → fGAM + L-Glutamate + ADP + Pi
The fifth is catalyzed by AIR synthetase (FGAM cyclase).
fGAM + ATP → AIR + ADP + Pi
The sixth is catalyzed by phosphoribosylaminoimidazole carboxylase.
AIR + CO2 → CAIR
The seventh is catalyzed by phosphoribosylaminoimidazolesuccinocarboxamide synthase.
CAIR + L-Aspartate + ATP → SAICAR + ADP + Pi
The eighth is catalyzed by adenylosuccinate lyase.
SAICAR → AICAR + Fumarate
The products AICAR and fumarate move on to two different pathways. AICAR serves as the reactant for the ninth step, while fumarate is transported to the citric acid cycle which can then skip the carbon dioxide evolution steps to produce malate. The conversion of fumarate to malate is catalyzed by fumarase. In this way, fumarate connects purine synthesis to the citric acid cycle.
The ninth is catalyzed by phosphoribosylaminoimidazolecarboxamide formyltransferase.
AICAR + fTHF → FAICAR + THF
The last step is catalyzed by Inosine monophosphate synthase.
FAICAR → IMP + H2O
In eukaryotes the second, third, and fifth step are catalyzed by trifunctional purine biosynthetic protein adenosine-3, which is encoded by the GART gene.
Both ninth and tenth step are accomplished by a single protein named Bifunctional purine biosynthesis protein PURH, encoded by the ATIC gene.
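The ten steps just listed can be summarised compactly as an ordered sequence of reactions and catalysts. The sketch below is only an illustrative encoding using the intermediate abbreviations and enzyme names from the text; it is not a curated pathway-database entry.

```python
# Illustrative encoding of the de novo IMP pathway described above.
IMP_DE_NOVO_STEPS = [
    ("PRPP -> PRA",     "amidophosphoribosyltransferase"),
    ("PRA -> GAR",      "phosphoribosylamine-glycine ligase (GAR synthetase)"),
    ("GAR -> fGAR",     "phosphoribosylglycinamide formyltransferase"),
    ("fGAR -> fGAM",    "phosphoribosylformylglycinamidine synthase"),
    ("fGAM -> AIR",     "AIR synthetase (FGAM cyclase)"),
    ("AIR -> CAIR",     "phosphoribosylaminoimidazole carboxylase"),
    ("CAIR -> SAICAR",  "phosphoribosylaminoimidazolesuccinocarboxamide synthase"),
    ("SAICAR -> AICAR", "adenylosuccinate lyase"),
    ("AICAR -> FAICAR", "phosphoribosylaminoimidazolecarboxamide formyltransferase"),
    ("FAICAR -> IMP",   "inosine monophosphate synthase"),
]

for number, (reaction, enzyme) in enumerate(IMP_DE_NOVO_STEPS, start=1):
    print(f"step {number:2d}: {reaction:17s} [{enzyme}]")
```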
GMP
IMP dehydrogenase (IMPDH) converts IMP into XMP
GMP synthase converts XMP into GMP
GMP reductase converts GMP back into IMP
AMP
adenylosuccinate synthase converts IMP to adenylosuccinate
adenylosuccinate lyase converts adenylosuccinate into AMP
AMP deaminase converts AMP back into IMP
Degradation
Purines are metabolised by several enzymes:
Guanine
A nuclease frees the nucleotide
A nucleotidase creates guanosine
Purine nucleoside phosphorylase converts guanosine to guanine
Guanase converts guanine to xanthine
Xanthine oxidase (a form of xanthine oxidoreductase) catalyzes the oxidation of xanthine to uric acid
Adenine
A nuclease frees the nucleotide
A nucleotidase creates adenosine, then adenosine deaminase creates inosine
Alternatively, AMP deaminase creates inosinic acid, then a nucleotidase creates inosine
Purine nucleoside phosphorylase acts upon inosine to create hypoxanthine
Xanthine oxidase catalyzes the biotransformation of hypoxanthine to xanthine
Xanthine oxidase acts upon xanthine to create uric acid
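For clarity, the two degradation routes listed above can be written as enzyme-labelled chains ending in uric acid. Again this is a sketch for illustration, using only the enzymes and intermediates named in the text.

```python
# Sketch of purine degradation to uric acid, as (substrate, enzyme, product) chains.
PURINE_DEGRADATION = {
    "guanine route": [
        ("guanosine", "purine nucleoside phosphorylase", "guanine"),
        ("guanine", "guanase", "xanthine"),
        ("xanthine", "xanthine oxidase", "uric acid"),
    ],
    "adenine route": [
        ("adenosine", "adenosine deaminase", "inosine"),
        ("inosine", "purine nucleoside phosphorylase", "hypoxanthine"),
        ("hypoxanthine", "xanthine oxidase", "xanthine"),
        ("xanthine", "xanthine oxidase", "uric acid"),
    ],
}

for route, steps in PURINE_DEGRADATION.items():
    chain = " -> ".join([steps[0][0]] + [product for _, _, product in steps])
    print(f"{route}: {chain}")
```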
Regulation of purine nucleotide biosynthesis
The formation of 5'-phosphoribosylamine from glutamine and PRPP, catalysed by amidophosphoribosyltransferase (glutamine-PRPP amidotransferase), is the regulation point for purine synthesis. The enzyme is allosteric: IMP, GMP and AMP at high concentrations bind the enzyme and exert inhibition, while PRPP in large amounts binds the enzyme and causes activation. So IMP, GMP and AMP are inhibitors while PRPP is an activator. Between the formation of 5'-phosphoribosylamine and IMP, there is no known regulation step.
Salvage
Purines from turnover of cellular nucleic acids (or from food) can also be salvaged and reused in new nucleotides.
The enzyme adenine phosphoribosyltransferase (APRT) salvages adenine.
The enzyme hypoxanthine-guanine phosphoribosyltransferase (HGPRT) salvages guanine and hypoxanthine. (Genetic deficiency of HGPRT causes Lesch–Nyhan syndrome.)
Disorders
When a defective gene causes gaps to appear in the metabolic recycling process for purines and pyrimidines, these chemicals are not metabolised properly, and adults or children can suffer from any one of twenty-eight hereditary disorders, possibly some more as yet unknown. Symptoms can include gout, anaemia, epilepsy, delayed development, deafness, compulsive self-biting, kidney failure or stones, or loss of immunity.
Imbalances in purine metabolism can arise from harmful nucleotide triphosphates being incorporated into DNA and RNA, which leads to genetic disturbances and mutations and, as a result, gives rise to several types of disease. Some of the diseases are:
Severe immunodeficiency by loss of adenosine deaminase.
Hyperuricemia and Lesch–Nyhan syndrome by the loss of hypoxanthine-guanine phosphoribosyltransferase.
Different types of cancer by an increase in the activities of enzymes like IMP dehydrogenase.
Pharmacotherapy
Modulation of purine metabolism has pharmacotherapeutic value.
Purine synthesis inhibitors inhibit the proliferation of cells, especially leukocytes. These inhibitors include azathioprine, an immunosuppressant used in organ transplantation, autoimmune disease such as rheumatoid arthritis or inflammatory bowel disease such as Crohn's disease and ulcerative colitis.
Mycophenolate mofetil is an immunosuppressant drug used to prevent rejection in organ transplantation; it inhibits purine synthesis by blocking inosine monophosphate dehydrogenase (IMPDH).
Methotrexate also indirectly inhibits purine synthesis by blocking the metabolism of folic acid (it is an inhibitor of the dihydrofolate reductase).
Allopurinol is a drug that inhibits the enzyme xanthine oxidoreductase and, thus, lowers the level of uric acid in the body. This may be useful in the treatment of gout, which is a disease caused by excess uric acid, forming crystals in joints.
Prebiotic synthesis of purine ribonucleosides
In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. Nam et al. demonstrated the direct condensation of purine and pyrimidine nucleobases with ribose to give ribonucleosides in aqueous microdroplets, a key step leading to RNA formation. Also, a plausible prebiotic process for synthesizing purine ribonucleosides was presented by Becker et al.
Purine biosynthesis in the three domains of life
Organisms in all three domains of life, eukaryotes, bacteria and archaea, are able to carry out de novo biosynthesis of purines. This ability reflects the essentiality of purines for life. The biochemical pathway of synthesis is very similar in eukaryotes and bacterial species, but is more variable among archaeal species. A nearly complete, or complete, set of genes required for purine biosynthesis was determined to be present in 58 of the 65 archaeal species studied. However, seven archaeal species were also identified with entirely, or nearly entirely, absent purine-encoding genes. Apparently the archaeal species unable to synthesize purines are able to acquire exogenous purines for growth, and are thus similar to purine mutants of eukaryotes, e.g. purine mutants of the Ascomycete fungus Neurospora crassa, that also require exogenous purines for growth.
See also
Purine nucleotide cycle
Purinergic signaling
Disease-modifying antirheumatic drug (DMARD)
References
External links
The Medical Biochemistry Page
Purine metabolism - Reference pathway
PUMPA: Purine Metabolic Patients’ Association
Metabolic pathways
Biochemistry | Purine metabolism | [
"Chemistry",
"Biology"
] | 2,444 | [
"Metabolic pathways",
"Biochemistry",
"Metabolism",
"nan"
] |
11,290,659 | https://en.wikipedia.org/wiki/Leadzyme | Leadzyme is a small ribozyme (catalytic RNA), which catalyzes the cleavage of a specific phosphodiester bond. It was discovered in an in-vitro evolution study in which the researchers were selecting for RNAs that specifically cleaved themselves in the presence of lead. However, since then, it has been found in several natural systems. Leadzyme was found to be efficient and dynamic in the presence of micromolar concentrations of lead ions. Unlike in other small self-cleaving ribozymes, other divalent metal ions cannot replace Pb2+ in the leadzyme. Due to its obligatory requirement for lead, the ribozyme is called a metalloribozyme.
Leadzyme has been subjected to extensive biochemical and structural characterization. The minimal secondary structure of leadzyme is surprisingly simple. It comprises an asymmetric internal loop composed of six nucleotides and a helical region on each side of the internal loop. The cleavage site of leadzyme is located within a four-nucleotide-long asymmetric internal loop that is flanked by RNA helices on both sides. This is shown in the top figure on the right, which is the secondary structure of leadzyme generated using mfold. The structures of leadzyme have also been solved using X-ray crystallography and NMR. The crystal structures of the two conformations of leadzyme are shown in the lower figure on the right.
Catalytic mechanism of leadzyme
Leadzyme is thought to perform catalysis using a two-step mechanism. In the first step of the reaction, the phosphodiester bond is cleaved into two products: 5’ product terminating in 2’3’ cyclic phosphate and the 3’ product in 5’ hydroxyl. This step is similar to other small self-cleaving ribozymes such as the Hammerhead ribozyme and HDV ribozyme. Both of those ribozymes generate a product, which contain a 2’, 3’ -cyclic phosphate. However, in leadzyme this product is just an intermediate. In the second step of this reaction pathway, the 2’ 3’ -cyclic phosphate undergoes hydrolysis to form 3’ monophosphate. This mode of catalysis is similar to how ribonucleases (proteins) function rather than any known small self-cleaving ribozyme.
The leadzyme is thought to have a highly dynamic structure. Many studies including NMR, X-ray crystallography and molecular modeling have revealed slightly different structures. Recently, using time-resolved spectroscopy, it was shown that the active site of leadzyme is very dynamic: it samples many different conformations in solution, and the delta G of interconversion between these conformations is very low. Consistent with these studies, a high-resolution crystal structure also revealed two distinct conformations of the leadzyme with different binding sites for Mg2+ and Sr2+ (Pb2+ substitutes) in the two conformations. In the ground state, leadzyme binds a single Sr2+ ion at nucleotides G43, G45 and A45. This binding site is away from the scissile bond (cleavage site) and thus does not explain the involvement of Pb2+ in the catalysis. However, in the second conformation, termed the ‘pre-catalytic’ state, the ribozyme shows two Sr2+ binding sites. G43 and G42 interact with one Sr2+, whereas the second Sr2+ interacts with A45, C23 and G24. This second Sr2+ binding site also potentially interacts with the 2'-OH of C23 via a water molecule. This second binding site explains how Pb2+ could facilitate catalysis by abstracting the 2'-OH proton and preparing it for an in-line nucleophilic attack on the scissile phosphate. This is also supported by the fact that the reaction of the leadzyme is pH dependent. Thus, Pb2+ could be acting as a Lewis acid, activating the 2'-OH of C23. The crystal structure is consistent with a two-metal-ion mechanism that has been proposed for leadzyme catalysis.
Lead toxicity through leadzyme
Toxic metals like lead are environmental and health hazards and can enter biological systems upon exposure. Lead is a persistent metal and can accumulate in the human body over time due to its frequent industrial use and its presence in the environment. Inhalation of lead can have effects that range from subtle symptoms to serious illnesses. It is possible that the presence of lead in biological systems can induce catalysis by lead ions. Since leadzyme is a relatively simple motif, i.e. it has a simple fold, it appears that there are many sequences in the genomes of many natural systems which can potentially fold into a leadzyme structure. A simple search for this RNA motif in the genomes of humans, Drosophila melanogaster, Caenorhabditis elegans and Arabidopsis thaliana revealed that on average the motif is present at a frequency of 2-9 motifs per 1 Mbp of DNA sequence. They also showed that the leadzyme motif is very common in the mRNA sequences of these organisms. Thus, these sequences could potentially self-cleave in the presence of lead ions. The targeting of these RNA motifs by lead in mRNAs and other RNAs may explain lead-mediated toxicity resulting in cell death.
References
Further reading
Ribozymes | Leadzyme | [
"Chemistry"
] | 1,160 | [
"Catalysis",
"Ribozymes"
] |
11,291,250 | https://en.wikipedia.org/wiki/Monotonically%20normal%20space | In mathematics, specifically in the field of topology, a monotonically normal space is a particular kind of normal space, defined in terms of a monotone normality operator. It satisfies some interesting properties; for example metric spaces and linearly ordered spaces are monotonically normal, and every monotonically normal space is hereditarily normal.
Definition
A topological space is called monotonically normal if it satisfies any of the following equivalent definitions:
Definition 1
The space is T1 and there is a function that assigns to each ordered pair of disjoint closed sets in an open set such that:
(i) ;
(ii) whenever and .
Condition (i) says is a normal space, as witnessed by the function .
Condition (ii) says that varies in a monotone fashion, hence the terminology monotonically normal.
The operator is called a monotone normality operator.
One can always choose to satisfy the property
,
by replacing each by .
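The formulas stripped from Definition 1 above presumably correspond to the standard statement of the monotone normality operator. A hedged reconstruction is given below; the operator name G and the closure convention in (i) are assumptions made for readability rather than the article's original notation.

```latex
% Hedged reconstruction of the standard monotone normality operator G,
% assigning to each ordered pair (A, B) of disjoint closed sets an open set G(A, B):
\begin{align*}
\text{(i)}  \quad & A \subseteq G(A,B) \subseteq \overline{G(A,B)} \subseteq X \setminus B, \\
\text{(ii)} \quad & G(A,B) \subseteq G(A',B') \quad \text{whenever } A \subseteq A' \text{ and } B' \subseteq B.
\end{align*}
% The additional property mentioned afterwards is G(A,B) \cap G(B,A) = \emptyset,
% obtained by replacing each G(A,B) with G(A,B) \setminus \overline{G(B,A)}.
```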
Definition 2
The space is T1 and there is a function that assigns to each ordered pair of separated sets in (that is, such that ) an open set satisfying the same conditions (i) and (ii) of Definition 1.
Definition 3
The space is T1 and there is a function that assigns to each pair with open in and an open set such that:
(i) ;
(ii) if , then or .
Such a function automatically satisfies
.
(Reason: Suppose . Since is T1, there is an open neighborhood of such that . By condition (ii), , that is, is a neighborhood of disjoint from . So .)
Definition 4
Let be a base for the topology of .
The space is T1 and there is a function that assigns to each pair with and an open set satisfying the same conditions (i) and (ii) of Definition 3.
Definition 5
The space is T1 and there is a function that assigns to each pair with open in and an open set such that:
(i) ;
(ii) if and are open and , then ;
(iii) if and are distinct points, then .
Such a function automatically satisfies all conditions of Definition 3.
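As with Definition 1, the conditions lost from Definition 5 appear to be the standard point-based formulation; a hedged reconstruction follows, with μ denoting the assignment (an assumed symbol, not the article's original notation).

```latex
% Hedged reconstruction of the point-based operator in Definition 5,
% assigning an open set \mu(x, U) to each pair (x, U) with x \in U open:
\begin{align*}
\text{(i)}   \quad & x \in \mu(x,U) \subseteq U, \\
\text{(ii)}  \quad & \mu(x,U) \subseteq \mu(x,V) \quad \text{whenever } U \subseteq V \text{ are open}, \\
\text{(iii)} \quad & \mu\bigl(x, X\setminus\{y\}\bigr) \cap \mu\bigl(y, X\setminus\{x\}\bigr) = \emptyset \quad \text{for distinct } x, y.
\end{align*}
```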
Examples
Every metrizable space is monotonically normal.
Every linearly ordered topological space (LOTS) is monotonically normal. This is assuming the Axiom of Choice, as without it there are examples of LOTS that are not even normal.
The Sorgenfrey line is monotonically normal. This follows from Definition 4 by taking as a base for the topology all intervals of the form [a, b) and assigning to a point x and a basic interval [a, b) containing it the open set [x, b). Alternatively, the Sorgenfrey line is monotonically normal because it can be embedded as a subspace of a LOTS, namely the double arrow space.
Any generalised metric is monotonically normal.
Properties
Monotone normality is a hereditary property: Every subspace of a monotonically normal space is monotonically normal.
Every monotonically normal space is completely normal Hausdorff (or T5).
Every monotonically normal space is hereditarily collectionwise normal.
The image of a monotonically normal space under a continuous closed map is monotonically normal.
A compact Hausdorff space is the continuous image of a compact linearly ordered space if and only if it is monotonically normal.
References
Properties of topological spaces | Monotonically normal space | [
"Mathematics"
] | 684 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
4,366,478 | https://en.wikipedia.org/wiki/Nanoparticle%20tracking%20analysis | Nanoparticle tracking analysis (NTA) is a method for visualizing and analyzing particles in liquids that relates the rate of Brownian motion to particle size. The rate of movement is related only to the viscosity and temperature of the liquid; it is not influenced by particle density or refractive index. NTA allows the determination of a size distribution profile of small particles with a diameter of approximately in liquid suspension.
The technique is used in conjunction with an ultramicroscope and a laser illumination unit that together allow small particles in liquid suspension to be visualized moving under Brownian motion. The light scattered by the particles is captured using a CCD or EMCCD camera over multiple frames. Computer software is then used to track the motion of each particle from frame to frame. The rate of particle movement is related to a sphere-equivalent hydrodynamic radius as calculated through the Stokes–Einstein equation. The technique calculates particle size on a particle-by-particle basis, overcoming inherent weaknesses in ensemble techniques such as dynamic light scattering. Since video clips form the basis of the analysis, accurate characterization of real-time events such as aggregation and dissolution is possible. Samples require minimal preparation, minimizing the time required to process each sample. It has been suggested that eventually the analysis may be done in real time with no preparation, e.g. when detecting the presence of airborne viruses or biological weapons.
NTA currently operates for particles from about in diameter, depending on particle type. Analysis of particles at the lowest end of this range is possible only for particles composed of materials with a high refractive index, such as gold and silver. The upper size limit is restricted by the limited Brownian motion of large particles; because a large particle moves very slowly, accuracy is diminished. The viscosity of the solvent also influences the movement of particles, and it, too, plays a part in determining the upper size limit for a specific system.
Applications
NTA has been used by commercial, academic, and government laboratories working with nanoparticle toxicology, drug delivery, exosomes, microvesicles, bacterial membrane vesicles, and other small biological particles, virology and vaccine production, ecotoxicology, protein aggregation, orthopedic implants, inks and pigments, and nanobubbles.
iNTA
Interferometric nanoparticle tracking analysis (iNTA) is the next generation of NTA technology. It is based on interferometric scattering microscopy (iSCAT), which enhances the signal of weak scatterers. In contrast to NTA, iNTA has a superior resolution based on a two-parameter analysis, including the size and the scattering cross-section of the particle.
Comparison to dynamic light scattering
Both dynamic light scattering (DLS) and nanoparticle tracking analysis (NTA) measure the Brownian motion of nanoparticles, whose speed of motion, expressed as the translational diffusion coefficient Dt, is related to particle size through the Stokes–Einstein equation:
Dt = kBT / (3πηd)
where
Dt is the translational diffusion coefficient,
kB is the Boltzmann constant,
T is the absolute temperature,
η is the viscosity,
d is the hydrodynamic diameter of the spherical particle.
In NTA this motion is analyzed by video – individual particle positional changes are tracked in two dimensions, from which the particle diffusion coefficient is determined. Knowing Dt, the particle's hydrodynamic diameter can then be determined.
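The size calculation described above can be sketched in a few lines: estimate the diffusion coefficient from a two-dimensional particle track and invert the Stokes–Einstein relation. The track below is synthetic and the solvent parameters (water at 25 °C) are assumed example values, not part of the original text.

```python
# Minimal sketch of NTA-style sizing: mean squared displacement of a 2-D track
# gives the diffusion coefficient; Stokes-Einstein then gives a sphere-equivalent
# hydrodynamic diameter. Values and names are illustrative assumptions.
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(track_xy_m, frame_interval_s,
                          temperature_K=298.15, viscosity_Pa_s=0.89e-3):
    """track_xy_m: (N, 2) array of particle positions in metres, one row per frame."""
    steps = np.diff(track_xy_m, axis=0)
    msd = np.mean(np.sum(steps ** 2, axis=1))        # mean squared step length
    D = msd / (4.0 * frame_interval_s)               # 2-D Brownian motion: MSD = 4*D*t
    return k_B * temperature_K / (3.0 * np.pi * viscosity_Pa_s * D)

# Synthetic track for a nominal 100 nm particle in water at 25 degC, 30 frames/s.
rng = np.random.default_rng(0)
D_true = k_B * 298.15 / (3 * np.pi * 0.89e-3 * 100e-9)
dt = 1 / 30
track = np.cumsum(rng.normal(scale=np.sqrt(2 * D_true * dt), size=(500, 2)), axis=0)
print(f"estimated diameter: {hydrodynamic_diameter(track, dt) * 1e9:.0f} nm")  # roughly 100 nm
```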
In contrast, DLS does not visualize the particles individually but analyzes, using a digital correlator, the time dependent scattering intensity fluctuations. These fluctuations are caused by interference effects arising from the relative Brownian movements of an ensemble of a large number of particles within a sample. Through analysis of the resultant exponential autocorrelation function, average particle size can be calculated as well as a polydispersity index. For multi-exponential autocorrelation functions arising from polydisperse samples, deconvolution can give limited information about the particle size distribution profile.
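For contrast, a correspondingly minimal sketch of the DLS analysis described in this paragraph fits a single-exponential decay to the field autocorrelation function and converts the decay rate to a diameter. The optical parameters (633 nm laser, 173° detection angle, water) are assumed example values, and only the monodisperse, single-exponential case is handled.

```python
# Hedged sketch of monodisperse DLS analysis: g1(tau) = exp(-Gamma*tau), Gamma = D*q^2.
import numpy as np
from scipy.optimize import curve_fit

k_B = 1.380649e-23  # Boltzmann constant, J/K

def dls_diameter(tau_s, g1, wavelength_m=633e-9, angle_deg=173.0, n_medium=1.33,
                 temperature_K=298.15, viscosity_Pa_s=0.89e-3):
    """Estimate a hydrodynamic diameter from a field autocorrelation curve."""
    q = 4 * np.pi * n_medium / wavelength_m * np.sin(np.radians(angle_deg) / 2)
    (gamma,), _ = curve_fit(lambda t, G: np.exp(-G * t), tau_s, g1, p0=[1e3])
    D = gamma / q ** 2                     # decay rate Gamma = D * q^2 for spheres
    return k_B * temperature_K / (3 * np.pi * viscosity_Pa_s * D)

# Synthetic, noise-free autocorrelation for a nominal 100 nm sphere.
tau = np.logspace(-6, -1, 200)
D_true = k_B * 298.15 / (3 * np.pi * 0.89e-3 * 100e-9)
q_true = 4 * np.pi * 1.33 / 633e-9 * np.sin(np.radians(173 / 2))
g1 = np.exp(-D_true * q_true ** 2 * tau)
print(f"recovered diameter: {dls_diameter(tau, g1) * 1e9:.0f} nm")  # ~100 nm
```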
History
NTA and related technologies were developed by Bob Carr. Along with John Knowles, Carr founded NanoSight Ltd in 2003. This United Kingdom-based company, of which Knowles is the chairman and Carr is the chief technology officer, manufactures instruments that use NTA to detect and analyze small particles in industrial and academic laboratories. In 2004 Particle Metrix GmbH was founded in Germany by Hanno Wachernig. Particle Metrix makes the ZetaView, which operates on the same NTA principle but uses different optics and fluidics in an attempt to improve sampling, zeta potential, and fluorescence detection.
See also
Dynamic light scattering
NanoSight Ltd
References
Sub-micron microscopy
Nanoparticles | Nanoparticle tracking analysis | [
"Chemistry"
] | 954 | [
"Sub-micron microscopy",
"Microscopy"
] |