| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
1,858,534 | https://en.wikipedia.org/wiki/Vascular%20endothelial%20growth%20factor | Vascular endothelial growth factor (VEGF), originally known as vascular permeability factor (VPF), is a signal protein produced by many cells that stimulates the formation of blood vessels. Specifically, VEGF is a sub-family of the platelet-derived growth factor family of cystine-knot growth factors. They are important signaling proteins involved in both vasculogenesis (the de novo formation of the embryonic circulatory system) and angiogenesis (the growth of blood vessels from pre-existing vasculature).
It is part of the system that restores the oxygen supply to tissues when blood circulation is inadequate such as in hypoxic conditions. Serum concentration of VEGF is high in bronchial asthma and diabetes mellitus.
VEGF's normal function is to create new blood vessels during embryonic development, new blood vessels after injury, new vessels in muscle following exercise, and new vessels (collateral circulation) to bypass blocked vessels.
It can contribute to disease. Solid cancers cannot grow beyond a limited size without an adequate blood supply; cancers that can express VEGF are able to grow and metastasize. Overexpression of VEGF can cause vascular disease in the retina of the eye and other parts of the body. Drugs such as aflibercept, bevacizumab, ranibizumab, and pegaptanib can inhibit VEGF and control or slow those diseases.
History
In 1970, Judah Folkman et al. described a factor secreted by tumors causing angiogenesis and called it tumor angiogenesis factor. In 1983 Senger et al. identified a vascular permeability factor secreted by tumors in guinea pigs and hamsters. In 1989 Ferrara and Henzel described an identical factor in bovine pituitary follicular cells which they purified, cloned and named VEGF. A similar VEGF alternative splicing was discovered by Tischer et al. in 1991. Between 1996 and 1997, Christinger and De Vos obtained the crystal structure of VEGF, first at 2.5 Å resolution and later at 1.9 Å.
Fms-like tyrosine kinase-1 (flt-1) was shown to be a VEGF receptor by Ferrara et al. in 1992. The kinase insert domain receptor (KDR) was shown to be a VEGF receptor by Terman et al. in 1992 as well. In 1998, neuropilin 1 and neuropilin 2 were shown to act as VEGF receptors.
Classification
In mammals, the VEGF family comprises five members: VEGF-A, placenta growth factor (PGF), VEGF-B, VEGF-C and VEGF-D. The latter members were discovered after VEGF-A; before their discovery, VEGF-A was known as VEGF. A number of VEGF-related proteins encoded by viruses (VEGF-E) and in the venom of some snakes (VEGF-F) have also been discovered.
Activity of VEGF-A, as its name implies, has been studied mostly on cells of the vascular endothelium, although it does have effects on a number of other cell types (e.g., stimulation of monocyte/macrophage migration, and effects on neurons, cancer cells, and kidney epithelial cells). In vitro, VEGF-A has been shown to stimulate endothelial cell mitogenesis and cell migration. VEGF-A is also a vasodilator and increases microvascular permeability, and was originally referred to as vascular permeability factor.
Isoforms
There are multiple isoforms of VEGF-A that result from alternative splicing of mRNA from a single, 8-exon VEGFA gene. These are classified into two groups which are referred to according to their terminal exon (exon 8) splice site: the proximal splice site (denoted VEGFxxx) or distal splice site (VEGFxxxb). In addition, alternate splicing of exon 6 and 7 alters their heparin-binding affinity and amino acid number (in humans: VEGF121, VEGF121b, VEGF145, VEGF165, VEGF165b, VEGF189, VEGF206; the rodent orthologs of these proteins contain one fewer amino acids). These domains have important functional consequences for the VEGF splice variants, as the terminal (exon 8) splice site determines whether the proteins are pro-angiogenic (proximal splice site, expressed during angiogenesis) or anti-angiogenic (distal splice site, expressed in normal tissues). In addition, inclusion or exclusion of exons 6 and 7 mediate interactions with heparan sulfate proteoglycans (HSPGs) and neuropilin co-receptors on the cell surface, enhancing their ability to bind and activate the VEGF receptors (VEGFRs). Recently, VEGF-C has been shown to be an important inducer of neurogenesis in the murine subventricular zone, without exerting angiogenic effects.
Mechanism
All members of the VEGF family stimulate cellular responses by binding to tyrosine kinase receptors (the VEGFRs) on the cell surface, causing them to dimerize and become activated through transphosphorylation, although to different sites, times, and extents. The VEGF receptors have an extracellular portion consisting of 7 immunoglobulin-like domains, a single transmembrane spanning region, and an intracellular portion containing a split tyrosine-kinase domain. VEGF-A binds to VEGFR-1 (Flt-1) and VEGFR-2 (KDR/Flk-1). VEGFR-2 appears to mediate almost all of the known cellular responses to VEGF. The function of VEGFR-1 is less well-defined, although it is thought to modulate VEGFR-2 signaling. Another function of VEGFR-1 may be to act as a dummy/decoy receptor, sequestering VEGF from VEGFR-2 binding (this appears to be particularly important during vasculogenesis in the embryo). VEGF-C and VEGF-D, but not VEGF-A, are ligands for a third receptor (VEGFR-3/Flt4), which binds these ligands and mediates their sustained action on target cells, driving lymphangiogenesis.
VEGF-C can stimulate lymphangiogenesis (via VEGFR-3) and angiogenesis (via VEGFR-2). VEGFR-3 has been detected in lymphatic endothelial cells in the corpus luteum (CL) of many species, including cattle, buffalo and primates.
In addition to binding to VEGFRs, VEGF binds to receptor complexes consisting of both neuropilins and VEGFRs. This receptor complex has increased VEGF signalling activity in endothelial cells (blood vessels). Neuropilins (NRP) are pleiotropic receptors and therefore other molecules may interfere with the signalling of the NRP/VEGFR receptor complexes. For example, Class 3 semaphorins compete with VEGF165 for NRP binding and could therefore regulate VEGF-mediated angiogenesis.
Expression
VEGF-A production can be induced in a cell that is not receiving enough oxygen. When a cell is deficient in oxygen, it produces HIF, hypoxia-inducible factor, a transcription factor. HIF stimulates the release of VEGF-A, among other functions (including modulation of erythropoiesis). Circulating VEGF-A then binds to VEGF receptors on endothelial cells, triggering a tyrosine kinase pathway leading to angiogenesis. The expression of angiopoietin-2 in the absence of VEGF leads to endothelial cell death and vascular regression. Conversely, a German study done in vivo found that VEGF concentrations actually decreased after a 25% reduction in oxygen intake for 30 minutes. HIF1 alpha and HIF1 beta are constantly being produced, but HIF1 alpha is highly O2-labile, so, in aerobic conditions, it is degraded. When the cell becomes hypoxic, HIF1 alpha persists and the HIF1 alpha/beta complex stimulates VEGF release. In one study, the combined use of microvesicles and 5-FU enhanced the chemosensitivity of squamous cell carcinoma cells more than the use of either 5-FU or microvesicles alone; in addition, downregulation of VEGF gene expression was associated with decreased CD1 gene expression.
Clinical significance
In disease
VEGF-A and the corresponding receptors are rapidly up-regulated after traumatic injury of the central nervous system (CNS). VEGF-A is highly expressed in the acute and sub-acute stages of CNS injury, but the protein expression declines over time. This time-span of VEGF-A expression corresponds with the endogenous re-vascularization capacity after injury. This would suggest that VEGF-A / VEGF165 could be used as target to promote angiogenesis after traumatic CNS injuries. However, there are contradicting scientific reports about the effects of VEGF-A treatments in CNS injury models.
Although it has not been validated as a biomarker for the diagnosis of acute ischemic stroke, high levels of serum VEGF in the first 48 hours after a cerebral infarct have been associated with a poor prognosis after 6 months and 2 years.
VEGF-A has been implicated with poor prognosis in breast cancer. Numerous studies show a decreased overall survival and disease-free survival in those tumors overexpressing VEGF. The overexpression of VEGF-A may be an early step in the process of metastasis, a step that is involved in the "angiogenic" switch. Although VEGF-A has been correlated with poor survival, its exact mechanism of action in the progression of tumors remains unclear.
VEGF-A is also released in rheumatoid arthritis in response to TNF-α, increasing endothelial permeability and swelling and also stimulating angiogenesis (formation of capillaries).
VEGF-A is also important in diabetic retinopathy (DR). The microcirculatory problems in the retina of people with diabetes can cause retinal ischaemia, which results in the release of VEGF-A, and a switch in the balance of pro-angiogenic VEGFxxx isoforms over the normally expressed VEGFxxxb isoforms. VEGFxxx may then cause the creation of new blood vessels in the retina and elsewhere in the eye, heralding changes that may threaten the sight.
VEGF-A plays a role in the disease pathology of the wet form of age-related macular degeneration (AMD), which is the leading cause of blindness for the elderly of the industrialized world. The vascular pathology of AMD shares certain similarities with diabetic retinopathy, although the cause of disease and the typical source of neovascularization differs between the two diseases.
VEGF-D serum levels are significantly elevated in patients with angiosarcoma.
Once released, VEGF-A may elicit several responses. It may cause a cell to survive, move, or further differentiate. Hence, VEGF is a potential target for the treatment of cancer. The first anti-VEGF drug, a monoclonal antibody named bevacizumab, was approved in 2004. Approximately 10–15% of patients benefit from bevacizumab therapy; however, biomarkers for bevacizumab efficacy are not yet known.
Current studies show that VEGFs are not the only promoters of angiogenesis. In particular, FGF2 and HGF are potent angiogenic factors.
Patients suffering from pulmonary emphysema have been found to have decreased levels of VEGF in the pulmonary arteries.
VEGF-D has also been shown to be overexpressed in lymphangioleiomyomatosis and is currently used as a diagnostic biomarker in the management of this rare disease.
In the kidney, increased expression of VEGF-A in glomeruli directly causes the glomerular hypertrophy that is associated with proteinuria.
VEGF alterations can be predictive of early-onset pre-eclampsia.
Gene therapies for refractory angina establish expression of VEGF in epicardial cells to promote angiogenesis.
See also
Proteases in angiogenesis
Withaferin A, a potent inhibitor of angiogenesis
References
Further reading
External links
The Vascular Endothelial Growth Factor structure in interactive 3D
Angiogenesis
Drugs acting on the cardiovascular system
Growth factors
Neurotrophic factors
Human proteins | Vascular endothelial growth factor | [
"Chemistry",
"Biology"
] | 2,747 | [
"Growth factors",
"Angiogenesis",
"Signal transduction",
"Neurotrophic factors",
"Neurochemistry"
] |
1,858,612 | https://en.wikipedia.org/wiki/Critical%20micelle%20concentration | In colloidal and surface chemistry, the critical micelle concentration (CMC) is defined as the concentration of surfactants above which micelles form and all additional surfactants added to the system will form micelles.
The CMC is an important characteristic of a surfactant. Before reaching the CMC, the surface tension changes strongly with the concentration of the surfactant. After reaching the CMC, the surface tension remains relatively constant or changes with a lower slope. The value of the CMC for a given dispersant in a given medium depends on temperature, pressure, and (sometimes strongly) on the presence and concentration of other surface active substances and electrolytes. Micelles only form above the critical micelle temperature.
For example, the value of the CMC for sodium dodecyl sulfate in water (without other additives or salts) at 25 °C and atmospheric pressure is 8×10⁻³ mol/L.
Description
Upon introducing surfactants (or any surface active materials) into a system, they will initially partition into the interface, reducing the system free energy by:
lowering the energy of the interface (calculated as area times surface tension), and
removing the hydrophobic parts of the surfactant from contact with water.
Subsequently, when the surface coverage by the surfactants increases, the surface free energy (surface tension) decreases and the surfactants start aggregating into micelles, thus again decreasing the system's free energy by decreasing the contact area of hydrophobic parts of the surfactant with water. Upon reaching CMC, any further addition of surfactants will just increase the number of micelles (in the ideal case).
According to one well-known definition, CMC is the total concentration of surfactants under the conditions:
if C = CMC, then (d³φ/dC_t³) = 0
φ = A[C_s] + B[C_m]; i.e., in words, φ is the measured physical property, C_s = [single surfactant ion], C_m = [micelles], and A and B are proportionality constants
C_t = C_s + N·C_m; i.e., N represents the number of detergent ions per micelle
Measurement
The CMC generally depends on the method of measuring the samples, since A and B depend on the properties of the solution such as conductance, photochemical characteristics, or surface tension. When the degree of aggregation is monodisperse, then the CMC is not related to the method of measurement. On the other hand, when the degree of aggregation is polydisperse, then CMC is related to both the method of measurement and the dispersion.
The common procedure to determine the CMC from experimental data is to look for the intersection (inflection point) of two straight lines traced through plots of the measured property versus the surfactant concentration. This visual data analysis method is highly subjective and can lead to very different CMC values depending on the type of representation, the quality of the data and the chosen interval around the CMC. A preferred method is the fit of the experimental data with a model of the measured property. Fit functions for properties such as electrical conductivity, surface tension, NMR chemical shifts, absorption, self-diffusion coefficients, fluorescence intensity and mean translational diffusion coefficient of fluorescent dyes in surfactant solutions have been presented. These fit functions are based on a model for the concentrations of monomeric and micellised surfactants in solution, which establishes a well-defined analytical definition of the CMC, independent from the technique.
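As an illustration of the graphical procedure described above, the following sketch fits straight lines to the pre- and post-CMC branches of a surface tension versus log-concentration plot and takes their intersection as the CMC estimate. The surface-tension values and the index used to split the two branches are hypothetical illustrations, not data from any cited source; NumPy is assumed available.

```python
# Minimal sketch of the "two straight lines" CMC estimate (hypothetical data).
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0, 20.0])          # mmol/L
gamma = np.array([55.0, 50.2, 45.1, 40.3, 37.2, 35.1, 34.9, 34.8, 34.8])   # mN/m

x = np.log10(conc)
split = 6                                           # assumed index separating the two branches
m1, b1 = np.polyfit(x[:split], gamma[:split], 1)    # steeply decreasing pre-CMC branch
m2, b2 = np.polyfit(x[split:], gamma[split:], 1)    # near-constant post-CMC branch

x_cmc = (b2 - b1) / (m1 - m2)                       # intersection of the two fitted lines
cmc = 10 ** x_cmc                                   # back to concentration units (mmol/L)
print(f"Estimated CMC = {cmc:.1f} mmol/L")
```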
The CMC is the concentration of surfactants in the bulk at which micelles start forming. The word bulk is important because surfactants partition between the bulk and interface and CMC is independent of interface and is therefore a characteristic of the surfactant molecule. In most situations, such as surface tension measurements or conductivity measurements, the amount of surfactant at the interface is negligible compared to that in the bulk and CMC can be approximated by the total concentration. In practice, CMC data is usually collected using laboratory instruments which allow the process to be partially automated, for instance by using specialised tensiometers.
Practical considerations
When the interfacial areas are large, the amount of surfactant at the interface cannot be neglected. If, for example, air bubbles are introduced into a solution of a surfactant above the CMC, these bubbles, as they rise to the surface, remove surfactants from the bulk to the top of the solution, creating a foam column and thus reducing the concentration in the bulk to below the CMC. This is one of the easiest methods to remove surfactants from effluents (see foam flotation). Thus, foams with sufficient interfacial area are devoid of micelles. Similar reasoning holds for emulsions.
The other situation arises in detergents. One initially starts off with concentrations greater than CMC in water and on adding fabric with large interfacial area, the surfactant concentration drops below CMC and no micelles remain at equilibrium. Therefore, the solubilization plays a minor role in detergents. Removal of oily soil occurs by modification of the contact angles and release of oil in the form of emulsion.
In the petroleum industry, the CMC is considered before injecting surfactant into a reservoir for enhanced oil recovery (EOR) applications. Below the CMC, the interfacial tension between the oil and water phases is no longer effectively reduced. If the concentration of the surfactant is kept a little above the CMC, the additional amount compensates for dilution by the brine already present in the reservoir. It is desired that the surfactant work at the lowest interfacial tension (IFT).
See also
Detergent
Micelle
Surface tension
Surfactant
References
External links
Theory of CMC measurement
CMCs and molecular weights of several detergents on OpenWetWare
Colloidal chemistry | Critical micelle concentration | [
"Chemistry"
] | 1,222 | [
"Colloidal chemistry",
"Surface science",
"Colloids"
] |
1,859,146 | https://en.wikipedia.org/wiki/PUREX | PUREX (plutonium uranium reduction extraction) is a chemical method used to purify fuel for nuclear reactors or nuclear weapons. PUREX is the de facto standard aqueous nuclear reprocessing method for the recovery of uranium and plutonium from used nuclear fuel (spent nuclear fuel, or irradiated nuclear fuel). It is based on liquid–liquid (solvent) extraction.
PUREX is applied to spent nuclear fuel, which consists primarily of very high atomic-weight (actinoid or "actinide") elements (e.g. uranium, plutonium, americium) along with smaller amounts of material composed of lighter atoms, notably the fission products produced by reactor operation.
The actinoid elements in this case consist primarily of the unconsumed remains of the original fuel (typically U-235, U-238, and/or Pu-239).
Chemical process
The fuel is first dissolved in nitric acid at a concentration around 7 M. Solids are removed by filtration to avoid the formation of emulsions, referred to as third phases in the solvent extraction community.
The organic solvent consists of 30% tributyl phosphate (TBP) in a hydrocarbon such as kerosene. Uranyl(VI) ions are extracted in the organic phase as UO2(NO3)2·2TBP complexes; plutonium is extracted as similar complexes. The heavier actinides, primarily americium and curium, and the fission products remain in the aqueous phase. The nature of uranyl nitrate complexes with trialkyl phosphates has been characterized.
Plutonium is separated from uranium by treating the TBP-kerosene solution with reducing agents to convert the plutonium to its +3 oxidation state, which will pass into the aqueous phase. Typical reducing agents include N,N-diethyl-hydroxylamine, ferrous sulphamate, and hydrazine. Uranium is then stripped from the kerosene solution by back-extraction into nitric acid at a concentration around 0.2 M.
PUREX raffinate
The term PUREX raffinate describes the mixture of metals in nitric acid which are left behind when the uranium and plutonium have been removed by the PUREX process from a nuclear fuel dissolution liquor. This mixture is often known as high level nuclear waste.
Two PUREX raffinates exist. The most highly active raffinate from the first cycle is the one which is most commonly known as PUREX raffinate. The other is from the medium-active cycle in which the uranium and plutonium are refined by a second extraction with tributyl phosphate.
[Figure: colour-coded composition of PUREX raffinate. Deep blue: bulk ions; light blue: fission products (group I: Rb/Cs, group II: Sr/Ba, group III: Y and the lanthanides); orange: corrosion products from stainless steel pipework; green: major actinides; violet: minor actinides; magenta: neutron poison.]
Currently PUREX raffinate is stored in stainless steel tanks before being converted into glass. The first cycle PUREX raffinate is very radioactive. It has almost all of the fission products, corrosion products such as iron/nickel, traces of uranium, plutonium and the minor actinides.
Pollution
The PUREX plant at the Hanford Site was responsible for producing 'copious volumes of liquid wastes', resulting in the radioactive contamination of groundwater.
Greenpeace measurements in La Hague and Sellafield indicated that radioactive pollutants are steadily released into the sea, and the air. Therefore, people living near these processing plants are exposed to higher radiation levels than the naturally occurring background radiation. According to Greenpeace, this additional radiation is small but not negligible.
History
The PUREX process was invented by Herbert H. Anderson and Larned B. Asprey at the Metallurgical Laboratory at the University of Chicago, as part of the Manhattan Project under Glenn T. Seaborg; their patent "Solvent Extraction Process for Plutonium" filed in 1947, mentions tributyl phosphate as the major reactant which accomplishes the bulk of the chemical extraction.
List of nuclear reprocessing sites
La Hague site
Mayak
Thermal Oxide Reprocessing Plant and B205 at Sellafield
Tokai, Ibaraki
West Valley Reprocessing Plant
Savannah River Site
Hanford Site
Idaho Chemical Processing Plant, (now Idaho National Laboratory)
Radiochemical Engineering Development Center, Oak Ridge National Laboratory
See also
Nuclear fuel cycle
Nuclear breeder reactor
Spent nuclear fuel shipping cask
Global Nuclear Energy Partnership announced February, 2006
References and notes
Further reading
OECD Nuclear Energy Agency, The Economics of the Nuclear Fuel Cycle, Paris, 1994
I. Hensing and W Schultz, Economic Comparison of Nuclear Fuel Cycle Options, Energiewirtschaftlichen Instituts, Cologne, 1995.
Cogema, Reprocessing-Recycling: the Industrial Stakes, presentation to the Konrad-Adenauer-Stiftung, Bonn, 9 May 1995.
OECD Nuclear Energy Agency, Plutonium Fuel: An Assessment, Paris, 1989.
National Research Council, "Nuclear Wastes: Technologies for Separation and Transmutation", National Academy Press, Washington D.C. 1996.
External links
Processing of Used Nuclear Fuel, World Nuclear Association
Reactor-Grade Plutonium and Development of Nuclear Weapons, Analytical Center for Non-proliferation
PUREX Process, European Nuclear Society
Mixed Oxide Fuel (MOX) – World Nuclear Association
Disposal Options for Surplus Weapons-Usable Plutonium – Congressional Research Service Report for Congress
Brief History of Fuel Reprocessing
Radioactive waste
Waste treatment technology
Nuclear chemistry
Nuclear reprocessing
Plutonium
Uranium | PUREX | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,177 | [
"Nuclear physics",
"Water treatment",
"Nuclear chemistry",
"Radioactive waste",
"Hazardous waste",
"Environmental impact of nuclear power",
"nan",
"Environmental engineering",
"Waste treatment technology",
"Radioactivity"
] |
6,206,429 | https://en.wikipedia.org/wiki/Customer%20engineer | A customer engineer (CE) is a worker whose primary job scope is to provide a service to customers who have signed a contract with the company. Originally, the term was used by IBM, but now customer engineer is also being used by other companies.
About
Customer engineers are also referred to as customer support engineers or customer service engineers. Most customer engineers provide corporate technical assistance, which includes debugging mainframe computers and developing outdated products. Customer engineers need to keep up to date on the latest technical developments relevant to their company's customers. Part of their job description requires servicing special equipment that has broken down or seemingly run its course. Many of the engineers are asked to repair specific equipment, while others focus on helping clients. The role of customer engineer has spread to other companies in different industries such as technology, aviation, and telecommunications. Customer engineers are also in charge of resolving large-scale networking problems.
IBM customer engineer (IBM CE)
Originally simply engineer, those who specialized in servicing IBM equipment in use by its customers were designated customer engineers by Tom Watson circa 1942.
Based on the requirements, an IBM CE could be a Field CE and service many customers around a defined territory, e.g.: Kuala Lumpur, or they could be based at the place of business of a particularly large customer and service only that one customer e.g.: Tenaga Nasional.
NCR customer engineer (NCR CE)
The title of CE or customer engineer is used by National Cash Register (NCR) to designate specially trained personnel who are charged with the installation, maintenance and repair of equipment and systems according to contracts with end users. They can be charged with the service of either point-of-sale (POS) systems or automated teller machines (ATMs).
References
Engineering occupations
Computing terminology | Customer engineer | [
"Technology"
] | 352 | [
"Computing terminology"
] |
6,206,889 | https://en.wikipedia.org/wiki/VerilogCSP | In integrated circuit design, VerilogCSP is a set of macros added to Verilog HDL to support Communicating Sequential Processes (CSP) channel communications. These macros are intended to be used in designing digital asynchronous circuits.
VerilogCSP also describes nonlinear pipelines and high-level channel timing properties, such as forward and backward latencies, minimum cycle time, and slack.
External links
VerilogCSP Homepage
References
Hardware description languages | VerilogCSP | [
"Engineering"
] | 101 | [
"Electronic engineering",
"Hardware description languages"
] |
6,212,640 | https://en.wikipedia.org/wiki/Faug%C3%A8re%27s%20F4%20and%20F5%20algorithms | In computer algebra, the Faugère F4 algorithm, by Jean-Charles Faugère, computes the Gröbner basis of an ideal of a multivariate polynomial ring. The algorithm uses the same mathematical principles as the Buchberger algorithm, but computes many normal forms in one go by forming a generally sparse matrix and using fast linear algebra to do the reductions in parallel.
The Faugère F5 algorithm first calculates the Gröbner basis of a pair of generator polynomials of the ideal. Then it uses this basis to reduce the size of the initial matrices of generators for the next larger basis:
If Gprev is an already computed Gröbner basis (f2, …, fm) and we want to compute a Gröbner basis of (f1) + Gprev then we will construct matrices whose rows are m f1 such that m is a monomial not divisible by the leading term of an element of Gprev.
This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero—the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences.
Implementations
The Faugère F4 algorithm is implemented
in FGb, Faugère's own implementation, which includes interfaces for using it from C/C++ or Maple,
in Maple computer algebra system, as the option method=fgb of function Groebner[gbasis]
in the Magma computer algebra system,
in the SageMath computer algebra system,
Study versions of the Faugère F5 algorithm are implemented in
the SINGULAR computer algebra system;
the SageMath computer algebra system.
in the SymPy Python package (a minimal usage sketch follows this list).
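As a usage illustration for the list above, the following sketch computes a Gröbner basis with SymPy's groebner function and its signature-based 'f5b' method option. The polynomial system is an arbitrary example, not one discussed in the article.

```python
# Minimal sketch: Gröbner basis of a small polynomial system using SymPy's
# signature-based (F5-style) 'f5b' option, with lexicographic ordering.
from sympy import symbols, groebner

x, y, z = symbols('x y z')
F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

G = groebner(F, x, y, z, order='lex', method='f5b')
for g in G:
    print(g)
```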
Applications
The previously intractable "cyclic 10" problem was solved by F5, as were a number of systems related to cryptography; for example HFE and C*.
References
Till Stegers Faugère's F5 Algorithm Revisited (alternative link). Diplom-Mathematiker Thesis, advisor Johannes Buchmann, Technische Universität Darmstadt, September 2005 (revised April 27, 2007). Many references, including links to available implementations.
External links
Faugère's home page (includes pdf reprints of additional papers)
An introduction to the F4 algorithm.
Computer algebra | Faugère's F4 and F5 algorithms | [
"Mathematics",
"Technology"
] | 521 | [
"Computer science",
"Computer algebra",
"Computational mathematics",
"Algebra"
] |
23,775,396 | https://en.wikipedia.org/wiki/Noro%E2%80%93Frenkel%20law%20of%20corresponding%20states | The Noro–Frenkel law of corresponding states is an equation in thermodynamics that describes the critical temperature of the liquid-gas transition T as a function of the range of the attractive potential R. It states that all short-ranged, spherically symmetric, pair-wise additive attractive potentials are characterised by the same thermodynamic properties if compared at the same reduced density and second virial coefficient.
Description
Johannes Diderik van der Waals's law of corresponding states expresses the fact that there are basic similarities in the thermodynamic properties of all simple gases. Its essential feature is that if we scale the thermodynamic variables that describe an equation of state (temperature, pressure, and volume) with respect to their values at the liquid-gas critical point, all simple fluids obey the same reduced equation of state.
Massimo G. Noro and Daan Frenkel formulated an extended law of corresponding states that predicts the phase behaviour of short-ranged potentials on the basis of the effective pair potential alone – extending the validity of the van der Waals law to systems interacting through pair potentials with different functional forms.
The Noro–Frenkel law suggests condensing the three quantities which are expected to play a role in the thermodynamic behavior of a system (hard-core size, interaction energy and range) into a combination of only two quantities: an effective hard-core diameter and the reduced second virial coefficient. Noro and Frenkel suggested determining the effective hard-core diameter following the expression suggested by Barker, based on the separation of the potential into attractive Vatt and repulsive Vrep parts used in the Weeks–Chandler–Andersen method. The reduced second virial coefficient, i.e., the second virial coefficient B2 divided by the second virial coefficient of hard spheres with the effective diameter, can be calculated (or experimentally measured) once the potential is known. B2 is defined as
B2 = 2π ∫₀^∞ [1 − exp(−V(r)/k_BT)] r² dr
where V(r) is the pair potential, k_B is the Boltzmann constant and T is the temperature.
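To make the definition concrete, the following sketch evaluates the reduced second virial coefficient numerically for a square-well potential (hard core of diameter sigma, well depth eps, range lam·sigma). The potential form and parameter values are illustrative assumptions, not taken from the article, and SciPy is assumed available for the quadrature.

```python
# Minimal sketch: reduced second virial coefficient B2* = B2 / B2_HS for a
# short-ranged square-well potential, via B2 = 2*pi * Int_0^inf [1 - exp(-V/kT)] r^2 dr.
import numpy as np
from scipy.integrate import quad

sigma, eps, lam = 1.0, 1.0, 1.05    # hypothetical hard-core diameter, well depth, range
kT = 0.5                            # temperature in units of the well depth

def v(r):
    # square-well pair potential outside the hard core
    return -eps if r < lam * sigma else 0.0

def integrand(r):
    # Mayer-type integrand (1 - exp(-V/kT)) * r^2 for r > sigma
    return (1.0 - np.exp(-v(r) / kT)) * r**2

# Hard core (r < sigma): exp(-V/kT) = 0, so that part integrates to sigma^3 / 3.
tail, _ = quad(integrand, sigma, 5.0 * sigma, points=[lam * sigma])
B2 = 2.0 * np.pi * (sigma**3 / 3.0 + tail)
B2_hs = 2.0 * np.pi * sigma**3 / 3.0          # hard spheres with the same diameter
print(f"reduced second virial coefficient B2* = {B2 / B2_hs:.3f}")
```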
Applications
The Noro–Frenkel law is particularly useful for the description of colloidal and globular protein solutions, for which the range of the potential is indeed significantly smaller than the particle size. For these systems the thermodynamic properties can be re-written as a function of only two parameters, the reduced density (using the effective diameter as length scale) and the reduced second-virial coefficient B. The gas-liquid critical points of all systems satisfying the extended law of corresponding states are characterized by the same value of B at the critical point.
The Noro–Frenkel law can be generalized to particles with limited valency (i.e., to non-spherical interactions). Particles interacting with different potential ranges but identical valence again behave according to the generalized law, but with a value of B at the critical point that differs for each valence.
See also
Equation of state
Van der Waals equation
References
Thermodynamic properties
Condensed matter physics
Thermodynamic equations | Noro–Frenkel law of corresponding states | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 606 | [
"Thermodynamic properties",
"Thermodynamic equations",
"Equations of physics",
"Physical quantities",
"Quantity",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Thermodynamics",
"Matter"
] |
23,777,515 | https://en.wikipedia.org/wiki/Dopant | A dopant (also called a doping agent) is a small amount of a substance added to a material to alter its physical properties, such as electrical or optical properties. The amount of dopant is typically very low compared to the material being doped.
When doped into crystalline substances, the dopant's atoms get incorporated into the crystal lattice of the substance. The crystalline materials are frequently either crystals of a semiconductor such as silicon and germanium for use in solid-state electronics, or transparent crystals for use in the production of various laser types; however, in some cases of the latter, noncrystalline substances such as glass can also be doped with impurities.
In solid-state electronics using the proper types and amounts of dopants in semiconductors is what produces the p-type semiconductors and n-type semiconductors that are essential for making transistors and diodes.
Transparent crystals
Lasing media
The procedure of doping tiny amounts of the metals chromium (Cr), neodymium (Nd), erbium (Er), thulium (Tm), ytterbium (Yb), and a few others, into transparent crystals, ceramics, or glasses is used to produce the active medium for solid-state lasers. It is in the electrons of the dopant atoms that a population inversion can be produced, and this population inversion is essential for the stimulated emission of photons in the operation of all lasers.
In the case of the natural ruby, what has occurred is that a tiny amount of chromium dopant has been naturally distributed through a crystal of aluminium oxide (corundum). This chromium both gives a ruby its red color, and also enables a ruby to undergo a population inversion and act as a laser. The aluminium and oxygen atoms in the transparent crystal of aluminium oxide served simply to support the chromium atoms in a good spatial distribution, and otherwise, they do not have anything to do with the laser action.
In other cases, such as in the neodymium YAG laser, the crystal is synthetically made and does not occur in nature. The human-made yttrium aluminium garnet crystal contains millions of yttrium atoms in it, and due to its physical size, chemical valence, etc., it works well to take the place of a small minority of yttrium atoms in its lattice, and to replace them with atoms from the rare-earth series of elements, such as neodymium. Then, these dopant atoms actually carry out the lasing process in the crystal. The rest of the atoms in the crystal consist of yttrium, aluminium, and oxygen atoms, but just as above, these other three elements function to simply support the neodymium atoms. In addition, the rare-earth element erbium can readily be used as the dopant rather than neodymium, giving a different wavelength of its output.
In many optically-transparent hosts, such active centers may keep their excitation for a time on the order of milliseconds, and relax with stimulated emission, providing the laser action. The amount of dopant is usually measured in atomic percent. Usually the relative atomic percent is assumed in the calculations, taking into account that the dopant ion can substitute in only part of a site in a crystalline lattice. The doping can be also used to change the refraction index in optical fibers, especially in the double-clad fibers. The optical dopants are characterized with lifetime of excitation and the effective absorption and emission cross-sections, which are main parameters of an active dopant. Usually, the concentration of optical dopant is of order of few percent or even lower. At large density of excitation, the cooperative quenching (cross-relaxation) reduces the efficiency of the laser action.
Examples
The medical field has some use for erbium-doped laser crystals for the laser scalpels that are used in laser surgery. Europium, neodymium, and other rare-earth elements are used to dope glasses for lasers. Holmium-doped and neodymium yttrium aluminium garnets (YAGs) are used as the active laser medium in some laser scalpels.
Phosphors and scintillators
In context of phosphors and scintillators, dopants are better known as activators, and are used to enhance the luminescence process.
Semiconductors
The addition of a dopant to a semiconductor, known as doping, has the effect of shifting the Fermi levels within the material. This results in a material with predominantly negative (n-type) or positive (p-type) charge carriers depending on the dopant variety. Pure semiconductors that have been altered by the presence of dopants are known as extrinsic semiconductors (see intrinsic semiconductor). Dopants are introduced into semiconductors by a variety of techniques: solid sources, gases, spin-on liquids, and ion implantation (see ion implantation and surface diffusion).
Others
The color of some gemstones is caused by dopants. For example, ruby and sapphire are both aluminium oxide, the former getting its red color from chromium atoms, and the latter doped with any of several elements, giving a variety of colors.
See also
List of semiconductor materials
References
Semiconductor properties
Materials science
Physical properties
Chemical properties
Crystals | Dopant | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,124 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Semiconductor properties",
"Materials science",
"Crystallography",
"Crystals",
"Condensed matter physics",
"nan",
"Physical properties"
] |
23,778,755 | https://en.wikipedia.org/wiki/Bioactive%20paper | Bioactive paper is a paper-based sensor that can identify various contaminants in food and water. Bioactive paper was first developed in 2009; research has been ongoing, and in 2011 the program was awarded a five-year grant totalling $7.5 million CAD. It has been developed to the biosensor stage, which means it can detect pesticides but is not yet able to repel and deactivate toxins. However, its ability to detect potential hazards has applications for human health and safety. The benefits of bioactive paper are that it is simple, portable, disposable, and inexpensive.
Development
Bioactive paper was developed by Canada’s Sentinel Bioactive Paper Network, a consortium of researchers, industrial and university partners, and students. The Network is hosted by McMaster University in Hamilton, Ontario, and is led by Dr. Robert Pelton, scientific director, and Dr. George Rosenberg, managing director.
John Brennan and his research team at McMaster University developed the method to create bioactive paper by printing contaminant-detecting biosensors that are based on combinations of antibodies, enzymes, aptamers or bacteriophages, onto the structure of the paper. These combinations then attach themselves to pathogens and other contaminants resulting in a detectable response. The biologically active chemicals are in the form of an ‘ink’ that can be printed, coated or impregnated onto or into paper using existing paper-making and high-speed printing processes. This ink is coated in different layers. The ink is similar to that found in a regular computer print cartridge, but it has special additives that make it biocompatible.
It is made up of biocompatible silica nanoparticles that are deposited onto the paper first, then another ink containing the enzyme is applied. The bio-ink result forms a thin film of enzyme that is trapped in the silica on the paper.
When the paper is exposed to a toxin, molecules in the ink change colour based on the amount of toxins in the sample.
While bioactive paper is not available to the public yet, it is getting closer to commercialization. Bioactive paper also has a good shelf life. Researchers said the strip could still be used effectively for at least two months when stored properly.
Applications
One current application of bioactive paper is in bioterrorism defence and food safety, as it can detect inhibitors of acetylcholinesterase, such as nerve agents. With this advancement, bioactive paper has become a product of interest for the military and the packaging industry. While efforts are underway to develop more applications of bioactive paper, there are currently four major areas of bioactive paper usage and research: paper-based bioassays or paper-based analytical devices for sample conditioning; pathogen detection for food and water quality monitoring; counterfeiting and counter-tampering in the packaging and construction industries; and deactivation of pathogenic bacteria using antimicrobial paper.
Food-borne illness
Approximately 76 million food-borne illnesses occur each year in the United States, accounting for more than 325,000 hospitalizations and 5,000 deaths [Mead et al., 1999]. Most of these illnesses are caused by Campylobacter, Salmonella, Escherichia coli O157:H7 and Listeria monocytogenes. As a result, annual medical expenditures related to these pathogens currently exceed $7 billion US. Consumer education coupled with reliable and simple pathogen detection in food products offers the best method for dramatically reducing the frequency of occurrence of these illnesses.
The most recent development involved being able to detect pesticides on food even after they’ve been washed. This innovation is a benefit to developing countries that may use banned pesticides on their food because they’re cheaper.
Water contamination
In the developing world, water is often of questionable quality, forcing the local population to try rudimentary filtration systems, such as the use of unsanitary cloth, in an often vain attempt to create potable water. This method is not reliable and the results are rarely safe for consumption, particularly after floods and other natural disasters. A bioactive paper strip which, when dipped in small containers of water, can remove pathogens and give the user a colour indication that the water is safe to use would therefore be a significant benefit.
Health care
Another potential use of bioactive paper includes the creation of face masks that protect health care workers by actively binding viruses and anchoring them to the filter surface which would prevent them from passing through the filter’s pores.
Packaging
Because it can easily test for certain components, there is interest for bioactive paper to be used in the packaging industry. Specifically, companies are considering bioactive paper as a way to detect counterfeit items or tampering. Other uses include microbial detection or possible antimicrobial properties.
References
Pesticides
Bioactivity
Paper
Sensors
2009 introductions
Canadian inventions
Food safety
Biosensors
Water quality indicators | Bioactive paper | [
"Chemistry",
"Technology",
"Engineering",
"Biology",
"Environmental_science"
] | 1,450 | [
"Pesticides",
"Toxicology",
"Measuring instruments",
"Water pollution",
"Biosensors",
"Water quality indicators",
"Sensors",
"Biocides"
] |
23,781,851 | https://en.wikipedia.org/wiki/Gyrator%E2%80%93capacitor%20model | The gyrator–capacitor model - sometimes also the capacitor-permeance model - is a lumped-element model for magnetic circuits, that can be used in place of the more common resistance–reluctance model. The model makes permeance elements analogous to electrical capacitance (see magnetic capacitance section) rather than electrical resistance (see magnetic reluctance). Windings are represented as gyrators, interfacing between the electrical circuit and the magnetic model.
The primary advantage of the gyrator–capacitor model compared to the magnetic reluctance model is that the model preserves the correct values of energy flow, storage and dissipation. The gyrator–capacitor model is an example of a group of analogies that preserve energy flow across energy domains by making power conjugate pairs of variables in the various domains analogous. It fills the same role as the impedance analogy for the mechanical domain.
Nomenclature
Magnetic circuit may refer to either the physical magnetic circuit or the model magnetic circuit. Elements and dynamical variables that are part of the model magnetic circuit have names that start with the adjective magnetic, although this convention is not strictly followed. Elements or dynamical variables in the model magnetic circuit may not have a one to one correspondence with components in the physical magnetic circuit. Symbols for elements and variables that are part of the model magnetic circuit may be written with a subscript of M. For example, would be a magnetic capacitor in the model circuit.
Electrical elements in an associated electrical circuit may be brought into the magnetic model for ease of analysis. Model elements in the magnetic circuit that represent electrical elements are typically the electrical dual of the electrical elements. This is because transducers between the electrical and magnetic domains in this model are usually represented by gyrators. A gyrator will transform an element into its dual. For example, a magnetic inductance may represent an electrical capacitance.
Summary of analogy between magnetic circuits and electrical circuits
The following table summarizes the mathematical analogy between electrical circuit theory and magnetic circuit theory.
| Electrical quantity (unit) | Magnetic analogue in the gyrator–capacitor model (unit) |
|---|---|
| Voltage / EMF (V) | Magnetic voltage / MMF (A) |
| Current (A) | Magnetic current, dΦ/dt (Wb/s = V) |
| Capacitance (F) | Magnetic capacitance / permeance (H) |
| Inductance (H) | Magnetic inductance (F) |
| Impedance (Ω) | Magnetic impedance (S) |
Gyrator
A gyrator is a two-port element used in network analysis. The gyrator is the complement of the transformer; whereas in a transformer, a voltage on one port will transform to a proportional voltage on the other port, in a gyrator, a voltage on one port will transform to a current on the other port, and vice versa.
The role gyrators play in the gyrator–capacitor model is as transducers between the electrical energy domain and the magnetic energy domain. An emf in the electrical domain is analogous to an mmf in the magnetic domain, and a transducer doing such a conversion would be represented as a transformer. However, real electro-magnetic transducers usually behave as gyrators. A transducer from the magnetic domain to the electrical domain will obey Faraday's law of induction, that is, a rate of change of magnetic flux (a magnetic current in this analogy) produces a proportional emf in the electrical domain. Similarly, a transducer from the electrical domain to the magnetic domain will obey Ampère's circuital law, that is, an electric current will produce a mmf.
A winding of N turns is modeled by a gyrator with a gyration resistance of N ohms.
Transducers that are not based on magnetic induction may not be represented by a gyrator. For instance, a Hall effect sensor is modelled by a transformer.
Magnetic voltage
Magnetic voltage, v_M, is an alternate name for magnetomotive force (mmf), F (SI unit: A or ampere-turn), which is analogous to electrical voltage in an electric circuit. Not all authors use the term magnetic voltage. The magnetomotive force applied to an element between point A and point B is equal to the line integral of the magnetic field strength H through the component: v_M = F = ∫ H · dl (taken from A to B).
The resistance–reluctance model uses the same equivalence between magnetic voltage and magnetomotive force.
Magnetic current
Magnetic current, i_M, is an alternate name for the time rate of change of flux, dΦ/dt (SI unit: Wb/s or volts), which is analogous to electrical current in an electric circuit. In the physical circuit, i_M is the magnetic displacement current. The magnetic current flowing through an element of cross-section S is the time derivative of the area integral of the magnetic flux density B: i_M = dΦ/dt = d/dt ∫ B · dS.
The resistance–reluctance model uses a different equivalence, taking magnetic current to be an alternate name for flux, . This difference in the definition of magnetic current is the fundamental difference between the gyrator-capacitor model and the resistance–reluctance model. The definition of magnetic current and magnetic voltage imply the definitions of the other magnetic elements.
Magnetic capacitance
Magnetic capacitance is an alternate name for permeance (SI unit: H). It is represented by a capacitance in the model magnetic circuit. Some authors use C_M to denote magnetic capacitance, while others use P and refer to the capacitance as a permeance. The permeance of an element is an extensive property defined as the magnetic flux Φ through the cross-sectional surface of the element divided by the magnetomotive force F across the element:
C_M = Φ / F
For a bar of uniform cross-section, the magnetic capacitance is given by
C_M = μ A / l
where:
μ is the magnetic permeability,
A is the element cross-section, and
l is the element length.
For phasor analysis, the magnetic permeability and the permeance are complex values.
Permeance is the reciprocal of reluctance.
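A small numerical sketch of the relations above, with hypothetical core dimensions and material (not taken from the article): it evaluates the permeance C_M = μA/l of a uniform bar and the electrical inductance L = N²·C_M seen at the terminals of an N-turn winding, the standard consequence of the winding acting as a gyrator of gyration resistance N that transforms the magnetic capacitance into its electrical dual.

```python
# Minimal sketch: permeance of a uniform core and the inductance an N-turn
# winding (modeled as a gyrator of gyration resistance N) presents electrically.
import math

MU0 = 4.0e-7 * math.pi              # vacuum permeability, H/m
mu_r = 2000.0                       # assumed relative permeability of the core material
A = 1.0e-4                          # assumed core cross-section, m^2 (1 cm^2)
l = 0.10                            # assumed magnetic path length, m
N = 100                             # assumed number of winding turns

C_M = mu_r * MU0 * A / l            # magnetic capacitance (permeance), henries
L = N**2 * C_M                      # inductance seen at the electrical port, henries
print(f"permeance C_M = {C_M:.3e} H, winding inductance L = {1e3 * L:.2f} mH")
```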
Magnetic inductance
In the context of the gyrator-capacitor model of a magnetic circuit, magnetic inductance (SI unit: F) is the analogy to inductance in an electrical circuit.
For phasor analysis, the magnetic inductive reactance is
x_L = ω L_M
where:
L_M is the magnetic inductance, and
ω is the angular frequency of the magnetic circuit.
In complex form it is a positive imaginary number:
x_L = j ω L_M
The magnetic potential energy sustained by magnetic inductance varies with the frequency of oscillations in electric fields. The average power in a given period is equal to zero. Due to its dependence on frequency, magnetic inductance is mainly observable in magnetic circuits which operate at VHF and/or UHF frequencies.
The notion of magnetic inductance is employed in analysis and computation of circuit behavior in the gyrator–capacitor model in a way analogous to inductance in electrical circuits.
A magnetic inductor can represent an electrical capacitor. A shunt capacitance in the electrical circuit, such as intra-winding capacitance, can be represented as a series inductance in the magnetic circuit.
Examples
Three phase transformer
This example shows a three-phase transformer modeled by the gyrator-capacitor approach. The transformer in this example has three primary windings and three secondary windings. The magnetic circuit is split into seven reluctance or permeance elements. Each winding is modeled by a gyrator. The gyration resistance of each gyrator is equal to the number of turns on the associated winding. Each permeance element is modeled by a capacitor. The value of each capacitor in farads is numerically the same as the permeance of the associated element in henries.
N1, N2, and N3 are the number of turns in the three primary windings. N4, N5, and N6 are the number of turns in the three secondary windings. Φ1, Φ2, and Φ3 are the fluxes in the three vertical elements. Magnetic flux in each permeance element in webers is numerically equal to the charge in the associate capacitance in coulombs. The energy in each permeance element is the same as the energy in the associated capacitor.
The schematic shows a three phase generator and a three phase load in addition to the schematic of the transformer model.
Transformer with gap and leakage flux
The gyrator-capacitor approach can accommodate leakage inductance and air gaps in the magnetic circuit. Gaps and leakage flux have a permeance which can be added to the equivalent circuit as capacitors. The permeance of the gap is computed in the same way as the substantive elements, except a relative permeability of unity is used. The permeance of the leakage flux may be difficult to compute due to complex geometry. It may be computed from other considerations such as measurements or specifications.
CPL and CSL represent the primary and secondary leakage inductance respectively. CGAP represents the air gap permeance.
Magnetic impedance
Magnetic complex impedance
Magnetic complex impedance, also called full magnetic resistance, is the quotient of a complex sinusoidal magnetic tension (magnetomotive force, N_M) on a passive magnetic circuit and the resulting complex sinusoidal magnetic current (I_M) in the circuit. Magnetic impedance is analogous to electrical impedance.
Magnetic complex impedance (SI unit: S) is determined by:
Z_M = N_M / I_M = z_M e^(jφ)
where z_M is the modulus of Z_M and φ is its phase. The argument of a complex magnetic impedance is equal to the difference of the phases of the magnetic tension and magnetic current.
Complex magnetic impedance can be presented in the following form:
Z_M = r_M + j x_M
where r_M is the real part of the complex magnetic impedance, called the effective magnetic resistance, and x_M is the imaginary part of the complex magnetic impedance, called the reactive magnetic resistance.
The magnitude of the magnetic impedance is equal to
z_M = sqrt(r_M² + x_M²)
Magnetic effective resistance
Magnetic effective resistance is the real component of complex magnetic impedance. This causes a magnetic circuit to lose magnetic potential energy. Active power in a magnetic circuit equals the product of the magnetic effective resistance and the square of the magnetic current, P = r_M I_M².
The magnetic effective resistance on a complex plane appears as the side of the resistance triangle for a magnetic circuit of an alternating current. The effective magnetic resistance r_M is related to the effective magnetic conductance g_M by the expression
g_M = r_M / z_M²
where z_M is the full magnetic impedance of a magnetic circuit.
Magnetic reactance
Magnetic reactance is the parameter of a passive magnetic circuit, or an element of the circuit, which is equal to the square root of the difference of the squares of the magnetic complex impedance modulus and the magnetic effective resistance to a magnetic current, taken with the sign plus if the magnetic current lags behind the magnetic tension in phase, and with the sign minus if the magnetic current leads the magnetic tension in phase.
Magnetic reactance is the component of magnetic complex impedance of an alternating-current circuit which produces the phase shift between the magnetic current and magnetic tension in the circuit. It is measured in the same units as the magnetic impedance (S) and is denoted x (or x_M). It may be inductive, x_L = ωL_M, or capacitive, x_C = 1/(ωC_M), where ω is the angular frequency of the magnetic current, L_M is the magnetic inductance of the circuit, and C_M is the magnetic capacitance of the circuit. The magnetic reactance of an undeveloped circuit with the inductance and the capacitance connected in series is x = x_L − x_C = ωL_M − 1/(ωC_M). If x_L = x_C, then the net reactance x = 0 and resonance takes place in the circuit. In the general case x = x_L − x_C. When energy loss is absent (r_M = 0), the impedance is purely reactive, Z_M = jx. The angle of the phase shift in a magnetic circuit satisfies tan φ = x / r_M. On a complex plane, the magnetic reactance appears as the side of the resistance triangle for a circuit of an alternating current.
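A short numeric sketch of the series reactance and resonance relations above; the element values are arbitrary assumptions chosen only to show where the ωL_M and 1/(ωC_M) terms cancel.

```python
# Minimal sketch: net series magnetic reactance x = w*L_M - 1/(w*C_M) and the
# resonance where the inductive and capacitive terms cancel (hypothetical values).
import math

L_M = 2.0e-9        # assumed magnetic inductance (farad-like units of the model)
C_M = 2.5e-6        # assumed magnetic capacitance / permeance, henries
r_M = 1.0e-3        # assumed effective magnetic resistance

w0 = 1.0 / math.sqrt(L_M * C_M)          # resonant angular frequency
for w in (0.5 * w0, w0, 2.0 * w0):
    x = w * L_M - 1.0 / (w * C_M)        # net magnetic reactance
    z = math.hypot(r_M, x)               # magnitude of the magnetic impedance
    print(f"w/w0 = {w / w0:.1f}: x = {x:+.3e}, |Z_M| = {z:.3e}")
```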
Limitations of the analogy
The limitations of this analogy between magnetic circuits and electric circuits include the following;
The current in typical electric circuits is confined to the circuit, with very little "leakage". In typical magnetic circuits not all of the magnetic field is confined to the magnetic circuit because magnetic permeability also exists outside materials (see vacuum permeability). Thus, there may be significant "leakage flux" in the space outside the magnetic cores. If the leakage flux is small compared to the main circuit, it can often be represented as additional elements. In extreme cases, a lumped-element model may not be appropriate at all, and field theory is used instead.
Magnetic circuits are nonlinear; the permeance in a magnetic circuit is not constant, unlike capacitance in an electrical circuit, but varies depending on the magnetic field. At high magnetic fluxes the ferromagnetic materials used for the cores of magnetic circuits saturate, limiting further increase of the magnetic flux, so above this level the permeance decreases rapidly. In addition, the flux in ferromagnetic materials is subject to hysteresis; it depends not just on the instantaneous MMF but also on the history of MMF. After the source of the magnetic flux is turned off, remanent magnetism is left in ferromagnetic materials, creating flux with no MMF.
References
Electronic engineering
Electrical analogies
Magnetic circuits | Gyrator–capacitor model | [
"Technology",
"Engineering"
] | 2,630 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
23,784,574 | https://en.wikipedia.org/wiki/Circulatory%20system%20of%20gastropods | As in other molluscs, the circulatory system of gastropods is open, with the fluid, or haemolymph, flowing through sinuses and bathing the tissues directly. The haemolymph typically contains haemocyanin, and is blue in colour.
Circulation
The heart is muscular and located in the anterior part of the visceral mass. In the great majority of species, it has two chambers: an auricle, which receives haemolymph from the gill or lung, and a ventricle, which pumps it into the aorta. However, some primitive gastropods possess two gills, each supplying its own auricle, so that their heart has three chambers.
The aorta is relatively short, and soon divides into two main vessels, one supplying the visceral mass, and the other supplying the head and foot. In some groups, these two vessels arise directly from the heart, so that the animal may be said to have two aortas. These two vessels in turn divide into many finer vessels throughout the body, and deliver haemolymph to open arterial sinuses where it bathes and oxygenates the tissues.
De-oxygenated haemolymph drains into a large venous sinus within the head and foot, which contains the nephridium, an excretory organ with a function similar to that of the vertebrate kidney. From here it passes into vessels within the gill, or into the capillary network of the pulmonate lung, before returning to the heart.
In some genera, such as the large marine snail Busycon, the main anterior artery (which supplies the head and foot) includes an enlarged muscular region. This structure is effectively a secondary heart, and probably helps to maintain blood pressure in the vessels of the head.
Haemolymph
Because of the open circulatory system of gastropods and other molluscs, there is no clear distinction between the blood and the lymph, or interstitial fluid. As a result, the circulatory fluid is commonly referred to as haemolymph, rather than blood.
The majority of gastropods have haemolymph containing the respiratory pigment haemocyanin. This is a copper-containing protein that helps to carry oxygen, and gives the haemolymph a pale blue colour. In the freshwater Planorbid snails, however, the haemocyanin is replaced by haemoglobin, and thus their haemolymph is red rather than blue. Some gastropods, such as the sea hare Aplysia, appear to lack respiratory pigments altogether.
Regardless of whether they employ haemocyanin or haemoglobin, the pigments are dissolved directly in the serum, with no equivalent of the red blood cells found in mammals. However, the haemolymph does contain amoebocytes, which may have a role in the immune system.
See also
Keyhole limpet hemocyanin
References
Circulatory system
Gastropod anatomy | Circulatory system of gastropods | [
"Biology"
] | 634 | [
"Organ systems",
"Circulatory system"
] |
23,784,692 | https://en.wikipedia.org/wiki/Amanita%20flavoconia | Amanita flavoconia, commonly known as yellow patches, yellow wart, orange amanita, yellow-dust amanita or the American yellow dust amanita, is a species of mushroom in the family Amanitaceae. It has an orangish-yellow cap with yellowish-orange patches or warts, a yellowish-orange annulus, and a white to orange stem. Common and widespread throughout eastern North America, A. flavoconia grows on the ground in broad-leaved and mixed forests, especially in mycorrhizal association with hemlock.
Taxonomy
Amanita flavoconia was first described by American naturalist George Francis Atkinson in 1902, based on a specimen he found in woods north of Fall Creek, Cayuga Lake Basin, New York. Jean-Edouard Gilbert placed it in Amplariella, in 1941, while in 1948 William Alphonso Murrill thought that it belonged best in Venenarius; both of these segregate genera have been folded back into Amanita.
The specific epithet flavoconia means yellowish and conical. Its common names include "yellow patches", "yellow wart", "orange Amanita", or "yellow-dust Amanita".
Description
The cap is initially ovoid in shape, but in maturity becomes convex and eventually flattened. Orange to bright yellow-orange in color, it reaches diameters of . Young specimens are covered with chrome yellow warts that may be easily rubbed off or washed away with rain.
The cap surface is smooth and sticky (viscid) beneath the warts; the edge of the cap is striate, reflecting the arrangement of the gills underneath. The flesh is white. The gills are barely free from the stem, and packed close together. They are white or tinged yellow on the edges, and initially covered with a yellowish partial veil. The stem is typically long by thick, equal or slightly tapered upward from a small rounded bulb at the base. Its color may range from white to yellowish orange, and the surface may be smooth, or covered with small flakes. The base of the stem usually has chrome yellow flakes of universal veil material adhering loosely to the bulb, or in the soil around the base. The partial veil leaves a skirt-like ring, (annulus) on the upper stem. The spore print of A. flavoconia is white.
Campbell and Petersen published a detailed description of the characteristics of A. flavoconia grown in culture. In the era prior to the commonplace use of DNA analysis and phylogenetics, cultural characters were often used to help provide additional taxonomic information; they found considerable variability between different isolates.
Two variants have been reported from Colombia, collected from Quercus humboldtii forests: A. flavoconia var. sinapicolor and var. inquinata.
Microscopic features
The spores are elliptical, smooth, and have dimensions of 7–9 by 5–8 μm. They are hyaline (translucent), and amyloid, meaning that they absorb the iodine stain in Melzer's reagent. The spore-bearing cells, the basidia, are up to 35–43 μm long by 4–12 μm, and each have four sterigmata, extensions that hold the spores. The outer layer, or cuticle of the cap (known technically as the pileipellis) is made of filamentous interwoven gelatinized hyphae, with diameters between 3 and 7 μm.
Similar species
This species has often been confused with A. muscaria, some subspecies of which are also orange-colored. It also bears some resemblance to A. frostiana and A. flavorubescens. One 1982 study concluded that a "large majority" of herbarium specimens labeled as A. frostiana were actually A. flavoconia. The use of microscopic features is necessary to distinguish clearly among the species: A. flavoconia has elliptic, amyloid spores, while A. frostiana has round, non-amyloid spores; A. muscaria has nonamyloid, elliptic spores. In the field, A. flavorubescens can usually be distinguished by its yellow cap color.
Distribution and habitat
A common mycorrhizal mushroom, A. flavoconia grows solitary or in groups on the ground in the summer to the fall, in broad-leaved and mixed woods. Noted for preferring hemlock, it is also associated with high elevation red spruce forests.
In North America, A. flavoconia has a wide distribution and has been collected from several locations, including Ontario, Canada; the United States (Iowa), and Mexico. It has been described as "of the most common and widespread species of Amanita in eastern North America."
Edibility
As the edibility of this species is unknown, it should not be consumed.
See also
List of Amanita species
References
External links
flavoconia
Fungi of North America
Fungi described in 1902
Fungus species | Amanita flavoconia | [
"Biology"
] | 1,042 | [
"Fungi",
"Fungus species"
] |
4,746,766 | https://en.wikipedia.org/wiki/Process | A process is a series or set of activities that interact to produce a result; it may occur once-only or be recurrent or periodic.
Things called a process include:
Business and management
Business process, activities that produce a specific service or product for customers
Business process modeling, activity of representing processes of an enterprise in order to deliver improvements
Manufacturing process management, a collection of technologies and methods used to define how products are to be manufactured.
Process architecture, structural design of processes, applies to fields such as computers, business processes, logistics, project management
Process area, related processes within an area which together satisfies an important goal for improvements within that area
Process costing, a cost allocation procedure of managerial accounting
Process management (project management), a systematic series of activities directed towards planning, monitoring the performance and causing an end result in engineering activities, business process, manufacturing processes or project management
Process-based management, a management approach that views a business as a collection of processes
Law
Due process, the concept that governments must respect the rule of law
Legal process, the proceedings and records of a legal case
Service of process, the procedure of giving official notice of a legal proceeding
Science and technology
The general concept of the scientific process, see scientific method
Process theory, the scientific study of processes
Industrial processes, which consist of the purposeful sequencing of tasks that combine resources to produce a desired output
Biology and psychology
Process (anatomy), a projection or outgrowth of tissue from a larger body
Biological process, a process of a living organism
Cognitive process, such as attention, memory, language use, reasoning, and problem solving
Mental process, a function or processes of the mind
Neuronal process, also neurite, a projection from the cell body of a neuron
Chemistry
Chemical process, a method or means of changing one or more chemicals or chemical compounds
Unit process, a step in manufacturing in which chemical reaction takes place
Computing
Process (computing), an instance of a computer program that is being executed, possibly concurrently with other programs
Child process, created by another process
Parent process
Process management (computing), an integral part of any modern-day operating system (OS)
Processing (programming language), an open-source language and integrated development environment
Mathematics
In probability theory:
Branching process, a Markov process that models a population
Diffusion process, a solution to a stochastic differential equation
Empirical process, a stochastic process that describes the proportion of objects in a system in a given state
Lévy process, a stochastic process with independent, stationary increments
Poisson process, a point process consisting of randomly located points on some underlying space
Predictable process, a stochastic process whose value is knowable
Stochastic process, a random process, as opposed to a deterministic process
Wiener process, a continuous-time stochastic process
Process calculus, a diverse family of related approaches for formally modeling concurrent systems
Process function, a mathematical concept used in thermodynamics
Thermodynamics
Process function, a mathematical concept used in thermodynamics
Thermodynamic process, the energetic evolution of a thermodynamic system
Adiabatic process, which proceeds without transfer of heat or matter between a system and its surroundings
Isenthalpic process, in which enthalpy stays constant
Isobaric process, in which the pressure stays constant
Isochoric process, in which volume stays constant
Isothermal process, in which temperature stays constant
Polytropic process, which obeys the equation $PV^{\,n} = \text{constant}$
Quasistatic process, which occurs infinitely slowly, as an approximation
Other uses
The Process, a concept in the film 3%
Food processing, transformation of raw ingredients, by physical or chemical means into food
Language processing in the brain
Natural language processing
Praxis (process), in philosophy, the process by which a theory or skill is enacted or realized
Process (engineering), set of interrelated tasks that transform inputs into outputs
Process philosophy, which regards change as the cornerstone of reality
Process thinking, a philosophy that focuses on present circumstances
Writing process, a concept in writing and composition studies
Work in process, goods that are partially completed within a company, awaiting finalization for sale.
External links
Business process
Business process management
Process engineering
Industrial processes
Technology-related lists
Legal procedure
Biological processes
Chemical processes
Process (computing) | Process | [
"Chemistry",
"Engineering",
"Biology"
] | 858 | [
"Process engineering",
"Chemical processes",
"Mechanical engineering by discipline",
"nan",
"Chemical process engineering"
] |
4,746,880 | https://en.wikipedia.org/wiki/Process%20%28engineering%29 | In engineering, a process is a series of interrelated tasks that, together, transform inputs into a given output. These tasks may be carried out by people, nature or machines using various resources; an engineering process must be considered in the context of the agents carrying out the tasks and the resource attributes involved. Systems engineering normative documents and those related to Maturity Models are typically based on processes, for example, systems engineering processes of the EIA-632 and processes involved in the Capability Maturity Model Integration (CMMI) institutionalization and improvement approach. Constraints imposed on the tasks and resources required to implement them are essential for executing the tasks mentioned.
Semiconductor industry
Semiconductor process engineers face the unique challenge of transforming raw materials into high-tech devices. Common semiconductor devices include Integrated Circuits (ICs), Light-Emitting Diodes (LEDs), solar cells, and solid-state lasers. To produce these and other semiconductor devices, semiconductor process engineers rely heavily on interconnected physical and chemical processes.
A prominent example of these combined processes is the use of ultra-violet photolithography which is then followed by wet etching, the process of creating an IC pattern that is transferred onto an organic coating and etched onto the underlying semiconductor chip. Other examples include the ion implantation of dopant species to tailor the electrical properties of a semiconductor chip and the electrochemical deposition of metallic interconnects (e.g. electroplating). Process Engineers are generally involved in the development, scaling, and quality control of new semiconductor processes from lab bench to manufacturing floor.
Chemical engineering
A chemical process is a series of unit operations used to produce a material in large quantities.
In the chemical industry, chemical engineers will use the following to define or illustrate a process:
Process flow diagram (PFD)
Piping and instrumentation diagram (P&ID)
Simplified process description
Detailed process description
Project management
Process simulation
CPRET
The Association Française d'Ingénierie Système has developed a process definition dedicated to Systems engineering (SE), but open to all domains.
The CPRET representation integrates the process Mission and Environment in order to offer an external standpoint. Several models may correspond to a single definition depending on the language used (UML or another language).
Note: process definition and modeling are interdependent notions but different the one from the other.
Process
A process is a set of transformations of input elements into products: respecting constraints,
requiring resources,
meeting a defined mission, corresponding to a specific purpose adapted to a given environment.
Environment
Natural conditions and external factors impacting a process.
Mission
Purpose of the process tailored to a given environment.
This definition requires a process description to include the Constraints, Products, Resources, Input Elements and Transformations. This leads to the CPRET acronym to be used as name and mnemonic for this definition.
Constraints
Imposed conditions, rules or regulations.
Products
Everything that is generated by the transformations. The products can be of the desired or undesired type (e.g., the software system and its bugs, the defined products and waste).
Resources
Human resources, energy, time and other means required to carry out the transformations.
Elements as inputs
Elements submitted to transformations for producing the products.
Transformations
Operations organized according to a logic aimed at optimizing the attainment of specific products from the input elements, with the allocated resources and in compliance with the imposed constraints.
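To make the five CPRET components and the mission/environment context concrete, the following sketch (hypothetical class and field names, not part of the AFIS definition) captures them in a small data structure, using the concert example that appears in the next section:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CPRETProcess:
    """Illustrative container mirroring the CPRET components plus mission and environment."""
    mission: str
    environment: str
    constraints: List[str] = field(default_factory=list)
    products: List[str] = field(default_factory=list)
    resources: List[str] = field(default_factory=list)
    elements_as_inputs: List[str] = field(default_factory=list)
    transformations: List[str] = field(default_factory=list)

concert = CPRETProcess(
    mission="Satisfy the public and the critics",
    environment="An audience",
    constraints=["Correct acoustics"],
    products=["A show"],
    resources=["An orchestra and its instruments"],
    elements_as_inputs=["Scores"],
    transformations=["Play the scores"],
)
print(concert.mission)
```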
CPRET through examples
The purpose of the following examples is to illustrate the definitions with concrete cases. These examples come from the Engineering field but also from other fields to show that the CPRET definition of processes is not limited to the System Engineering context.
Examples of processes
An engineering process (EIA-632, ISO/IEC 15288, etc.)
A concert
A polling campaign
A certification
Examples of environment
Various levels of maturity, technicality, equipment
An audience
A political system
Practices
Examples of mission
Supply better quality products
Satisfy the public, critics
Have candidates elected
Obtain the desired approval
Examples of constraints
Imposed technologies
Correct acoustics
Speaking times
A reference model (ISO, CMMI, etc.)
Examples of products
A mobile telephone network
A show
Vote results
A quality label
Examples of resources
Development teams
An orchestra and its instruments
An organization
An assessment team
Examples of elements as inputs
Specifications
Scores
Candidates
A company and its practices
Examples of transformations
Define an architecture
Play the scores
Make people vote for a candidate
Audit the organization
Conclusions
The CPRET formalized definition systematically addresses the input Elements, Transformations, and Products but also the other essential components of a Process, namely the Constraints and Resources. Among the resources, note the specificity of the Resource-Time component which passes inexorably and irreversibly, with problems of synchronization and sequencing.
This definition states that environment is an external factor which cannot be avoided: as a matter of fact, a process is always interdependent with other phenomena including other processes.
References
Bibliography
Process engineering | Process (engineering) | [
"Engineering"
] | 990 | [
"Process engineering",
"Mechanical engineering by discipline"
] |
4,749,757 | https://en.wikipedia.org/wiki/Geminal | In chemistry, the descriptor geminal () refers to the relationship between two atoms or functional groups that are attached to the same atom. A geminal diol, for example, is a diol (a molecule that has two alcohol functional groups) attached to the same carbon atom, as in methanediol. Also the shortened prefix gem may be applied to a chemical name to denote this relationship, as in a gem-dibromide for "geminal dibromide".
The concept is important in many branches of chemistry, including synthesis and spectroscopy, because functional groups attached to the same atom often behave differently from when they are separated. Geminal diols, for example, are easily converted to ketones or aldehydes with loss of water.
The related term vicinal refers to the relationship between two functional groups that are attached to adjacent atoms. This relative arrangement of two functional groups can also be described by the descriptors α and β.
1H NMR spectroscopy
In 1H NMR spectroscopy, the coupling of two hydrogen atoms on the same carbon atom is called a geminal coupling. It occurs only when two hydrogen atoms on a methylene group differ stereochemically from each other. The geminal coupling constant is referred to as 2J since the hydrogen atoms couple through two bonds. Depending on the other substituents, the geminal coupling constant takes values between −23 and +42 Hz.
Synthesis
The following example shows the conversion of a cyclohexyl methyl ketone to a gem-dichloride through a reaction with phosphorus pentachloride. This gem-dichloride can then be used to synthesize an alkyne.
References
Chemical nomenclature | Geminal | [
"Chemistry"
] | 350 | [
"nan"
] |
4,750,655 | https://en.wikipedia.org/wiki/Grainger%20challenge | The Grainger challenge is a scientific competition to find an economical way to remove arsenic from arsenic-contaminated groundwater. This competition is being funded by the United States National Academy of Engineering and the Grainger Foundation and is meant to help provide safe drinking water to countries such as Bangladesh, India, and Cambodia.
In 2007, the winner of the Gold Award ($1,000,000) was Abul Hussam, for his invention of the Sono arsenic filter. The Silver Award ($200,000) was awarded to Arup K Sengupta for his invention and implementation of ArsenXnp hybrid anion exchange (HAIX) resin. The Children's Safe Drinking Water Program at Procter & Gamble (P&G), Cincinnati, received the Bronze Award of US$100,000 for the PUR™ Purifier of Water coagulation and flocculation water treatment system.
References
External links
Grainger challenge page at the National Academy of Engineering
Grainger Foundation
Water treatment
Arsenic | Grainger challenge | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 207 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
4,751,771 | https://en.wikipedia.org/wiki/Spherical%20multipole%20moments | In physics, spherical multipole moments are the coefficients in a series expansion of a potential that varies inversely with the distance $R$ to a source, i.e., as $1/R$. Examples of such potentials are the electric potential, the magnetic potential and the gravitational potential.
For clarity, we illustrate the expansion for a point charge, then generalize to an arbitrary charge density. Throughout this article, the primed coordinates such as $\mathbf{r}'$ refer to the position of the charge(s), whereas the unprimed coordinates such as $\mathbf{r}$ refer to the point at which the potential is being observed. We also use spherical coordinates throughout; e.g., the vector $\mathbf{r}$ has coordinates $(r, \theta, \varphi)$, where $r$ is the radius, $\theta$ is the colatitude and $\varphi$ is the azimuthal angle.
Spherical multipole moments of a point charge
The electric potential due to a point charge located at $\mathbf{r}'$ is given by
$$\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0}\,\frac{1}{R},$$
where $R = \left|\mathbf{r} - \mathbf{r}'\right|$ is the distance between the charge position and the observation point and $\gamma$ is the angle between the vectors $\mathbf{r}$ and $\mathbf{r}'$. If the radius $r$ of the observation point is greater than the radius $r'$ of the charge, we may factor out 1/r and expand the square root in powers of $(r'/r) < 1$ using Legendre polynomials
$$\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0 r} \sum_{\ell=0}^{\infty} \left(\frac{r'}{r}\right)^{\ell} P_{\ell}(\cos\gamma).$$
This is exactly analogous to the axial multipole expansion.
We may express $\cos\gamma$ in terms of the coordinates of the observation point and charge position using the spherical law of cosines (Fig. 2):
$$\cos\gamma = \cos\theta\,\cos\theta' + \sin\theta\,\sin\theta'\,\cos(\varphi - \varphi').$$
Substituting this equation for $\cos\gamma$ into the Legendre polynomials and factoring the primed and unprimed coordinates yields the important formula known as the spherical harmonic addition theorem
$$P_{\ell}(\cos\gamma) = \frac{4\pi}{2\ell+1} \sum_{m=-\ell}^{\ell} Y_{\ell m}(\theta, \varphi)\, Y^{*}_{\ell m}(\theta', \varphi'),$$
where the functions $Y_{\ell m}$ are the spherical harmonics. Substitution of this formula into the potential yields
$$\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0 r} \sum_{\ell=0}^{\infty} \left(\frac{r'}{r}\right)^{\ell} \frac{4\pi}{2\ell+1} \sum_{m=-\ell}^{\ell} Y_{\ell m}(\theta, \varphi)\, Y^{*}_{\ell m}(\theta', \varphi'),$$
which can be written as
$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} Q_{\ell m}\, \sqrt{\frac{4\pi}{2\ell+1}}\, \frac{Y_{\ell m}(\theta, \varphi)}{r^{\ell+1}},$$
where the multipole moments are defined
$$Q_{\ell m} \equiv q\,(r')^{\ell}\, \sqrt{\frac{4\pi}{2\ell+1}}\, Y^{*}_{\ell m}(\theta', \varphi').$$
As with axial multipole moments, we may also consider the case when the radius of the observation point is less than the radius of the charge. In that case, we may write
which can be written as
where the interior spherical multipole moments are defined as the complex conjugate of irregular solid harmonics
The two cases can be subsumed in a single expression if $r_<$ and $r_>$ are defined to be the lesser and greater, respectively, of the two radii $r$ and $r'$; the potential of a point charge then takes the form, which is sometimes referred to as the Laplace expansion
$$\Phi(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0} \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} \frac{4\pi}{2\ell+1}\, \frac{r_<^{\,\ell}}{r_>^{\,\ell+1}}\, Y_{\ell m}(\theta, \varphi)\, Y^{*}_{\ell m}(\theta', \varphi').$$
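As a quick worked example of the point-charge moments defined above (a hypothetical configuration of two charges $+q$ and $-q$ placed on the $z$-axis at $z = \pm d$, not taken from the original text), axial symmetry makes all $m \neq 0$ moments vanish, and the remaining exterior moments are

```latex
Q_{\ell 0} = q\,d^{\ell}\,\bigl(1 - (-1)^{\ell}\bigr)
  = \begin{cases} 2\,q\,d^{\ell}, & \ell \text{ odd},\\[2pt] 0, & \ell \text{ even},\end{cases}
\qquad\text{so that } Q_{10} = 2qd \text{ reproduces the familiar dipole moment.}
```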
Exterior spherical multipole moments
It is straightforward to generalize these formulae by replacing the point charge with an infinitesimal charge element $\rho(\mathbf{r}')\, d\mathbf{r}'$ and integrating. The functional form of the expansion is the same. In the exterior case, where $r > r'$, the result is:
$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} Q_{\ell m}\, \sqrt{\frac{4\pi}{2\ell+1}}\, \frac{Y_{\ell m}(\theta, \varphi)}{r^{\ell+1}},$$
where the general multipole moments are defined
$$Q_{\ell m} \equiv \int d\mathbf{r}'\, \rho(\mathbf{r}')\, (r')^{\ell}\, \sqrt{\frac{4\pi}{2\ell+1}}\, Y^{*}_{\ell m}(\theta', \varphi').$$
Note
The potential Φ(r) is real, so that the complex conjugate of the expansion is equally valid. Taking the complex conjugate leads to a definition of the multipole moment which is proportional to Yℓm, not to its complex conjugate. This is a common convention, see molecular multipoles for more on this.
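The definitions above translate directly into a short numerical check. The sketch below (written against SciPy's `sph_harm`, whose argument order is `(m, l, azimuth, colatitude)`; units chosen so that $1/(4\pi\varepsilon_0)=1$, and the charge configuration is an arbitrary illustrative choice) computes the exterior moments of a few point charges and compares the truncated expansion with the direct Coulomb sum:

```python
import numpy as np
from scipy.special import sph_harm

def to_spherical(p):
    """Cartesian -> (r, colatitude, azimuth)."""
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    return r, np.arccos(z / r), np.arctan2(y, x)

def exterior_moments(charges, positions, l_max):
    """Q_lm = sum_i q_i r_i^l sqrt(4*pi/(2l+1)) conj(Y_lm(theta_i, phi_i))."""
    Q = {}
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            total = 0.0 + 0.0j
            for q, pos in zip(charges, positions):
                r, theta, phi = to_spherical(pos)
                # SciPy's sph_harm takes (m, l, azimuth, colatitude)
                total += q * r**l * np.sqrt(4 * np.pi / (2 * l + 1)) * np.conj(sph_harm(m, l, phi, theta))
            Q[(l, m)] = total
    return Q

def potential(Q, l_max, point):
    """Exterior multipole expansion, valid for points outside all charges."""
    r, theta, phi = to_spherical(point)
    val = 0.0 + 0.0j
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            val += Q[(l, m)] * np.sqrt(4 * np.pi / (2 * l + 1)) * sph_harm(m, l, phi, theta) / r**(l + 1)
    return val.real

# A small test distribution: +1 and -1 slightly off the origin
charges = [1.0, -1.0]
positions = [np.array([0.1, 0.0, 0.2]), np.array([-0.1, 0.0, -0.2])]

obs = np.array([1.5, 0.7, 2.0])   # observation point well outside the charges
direct = sum(q / np.linalg.norm(obs - p) for q, p in zip(charges, positions))
approx = potential(exterior_moments(charges, positions, 8), 8, obs)
print(direct, approx)   # the two numbers should agree to several decimal places
```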
Interior spherical multipole moments
Similarly, the interior multipole expansion has the same functional form. In the interior case, where , the result is:
with the interior multipole moments defined as
Interaction energies of spherical multipoles
A simple formula for the interaction energy of two non-overlapping but concentric charge distributions can be derived. Let the first charge distribution be centered on the origin and lie entirely within the second charge distribution . The interaction energy between any two static charge distributions is defined by
The potential of the first (central) charge distribution may be expanded in exterior multipoles
where represents the exterior multipole moment of the first charge distribution. Substitution of this expansion yields the formula
Since the integral equals the complex conjugate of the interior multipole moments of the second (peripheral) charge distribution, the energy formula reduces to the simple form
For example, this formula may be used to determine the electrostatic interaction energies of the atomic nucleus with its surrounding electronic orbitals. Conversely, given the interaction energies and the interior multipole moments of the electronic orbitals, one may find the exterior multipole moments (and, hence, shape) of the atomic nucleus.
Special case of axial symmetry
The spherical multipole expansion takes a simple form if the charge distribution is axially symmetric (i.e., is independent of the azimuthal angle $\varphi'$). By carrying out the integrations that define the exterior and interior multipole moments, it can be shown that the multipole moments are all zero except when $m = 0$. Using the mathematical identity
$$P_{\ell}(\cos\theta) \equiv \sqrt{\frac{4\pi}{2\ell+1}}\, Y_{\ell 0}(\theta),$$
the exterior multipole expansion becomes
where the axially symmetric multipole moments are defined
In the limit that the charge is confined to the $z$-axis, we recover the exterior axial multipole moments.
Similarly the interior multipole expansion becomes
where the axially symmetric interior multipole moments are defined
In the limit that the charge is confined to the $z$-axis, we recover the interior axial multipole moments.
See also
Solid harmonics
Laplace expansion
Multipole expansion
Legendre polynomials
Axial multipole moments
Cylindrical multipole moments
References
Electromagnetism
Potential theory
Moment (physics) | Spherical multipole moments | [
"Physics",
"Mathematics"
] | 981 | [
"Electromagnetism",
"Physical phenomena",
"Functions and mappings",
"Physical quantities",
"Quantity",
"Mathematical objects",
"Potential theory",
"Mathematical relations",
"Fundamental interactions",
"Moment (physics)"
] |
4,752,147 | https://en.wikipedia.org/wiki/Isenthalpic%20process | An isenthalpic process or isoenthalpic process is a process that proceeds without any change in enthalpy, H; or specific enthalpy, h.
Overview
If a steady-state, steady-flow process is analysed using a control volume, everything outside the control volume is considered to be the surroundings.
Such a process will be isenthalpic if there is no transfer of heat to or from the surroundings, no work done on or by the surroundings, and no change in the kinetic energy of the fluid. This is a sufficient but not necessary condition for isoenthalpy. The necessary condition for a process to be isoenthalpic is that the sum of each of the terms of the energy balance other than enthalpy (work, heat, changes in kinetic energy, etc.) cancel each other, so that the enthalpy remains unchanged. For a process in which magnetic and electric effects (among others) give negligible contributions, the associated energy balance can be written as
$$h_1 + \frac{v_1^2}{2} + g z_1 + q = h_2 + \frac{v_2^2}{2} + g z_2 + w.$$
If $q = w = 0$ and the kinetic and potential energy terms cancel, then it must be that
$$h_1 = h_2.$$
The throttling process is a good example of an isoenthalpic process in which significant changes in pressure and temperature can occur to the fluid, and yet the net sum of the associated terms in the energy balance is null, thus rendering the transformation isoenthalpic. The lifting of a relief (or safety) valve on a pressure vessel is an example of a throttling process. The specific enthalpy of the fluid inside the pressure vessel is the same as the specific enthalpy of the fluid as it escapes through the valve. With a knowledge of the specific enthalpy of the fluid and the pressure outside the pressure vessel, it is possible to determine the temperature and speed of the escaping fluid.
In an isenthalpic process:
$$h_1 = h_2,$$
$$\mathrm{d}h = 0.$$
Isenthalpic processes on an ideal gas follow isotherms, since the enthalpy of an ideal gas depends on temperature only, $\mathrm{d}h = c_p\,\mathrm{d}T$, so $\mathrm{d}h = 0$ implies $\mathrm{d}T = 0$.
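A throttling calculation of the kind described above can be sketched numerically. The example below assumes the third-party CoolProp package is installed and uses its high-level `PropsSI` interface; the fluid, pressures and temperature are arbitrary illustrative choices:

```python
from CoolProp.CoolProp import PropsSI  # pip install coolprop (assumed available)

# Upstream state: nitrogen at 10 MPa and 300 K
P1, T1 = 10.0e6, 300.0
h1 = PropsSI('H', 'P', P1, 'T', T1, 'Nitrogen')   # specific enthalpy upstream [J/kg]

# Throttle to 1 bar: the valve is modelled as isenthalpic, so h2 = h1
P2 = 1.0e5
T2 = PropsSI('T', 'P', P2, 'H', h1, 'Nitrogen')   # downstream temperature [K]

print(f"T1 = {T1:.1f} K  ->  T2 = {T2:.1f} K")
# For an ideal gas the enthalpy depends on temperature only, so T2 would equal T1;
# the real gas shows a temperature change (the Joule-Thomson effect).
```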
See also
Adiabatic process
Joule–Thomson effect
Ideal gas laws
Isentropic process
References
Bibliography
G. J. Van Wylen and R. E. Sonntag (1985), Fundamentals of Classical Thermodynamics, John Wiley & Sons, Inc., New York
Notes
Thermodynamic processes
Enthalpy | Isenthalpic process | [
"Physics",
"Chemistry",
"Mathematics"
] | 455 | [
"Thermodynamics stubs",
"Thermodynamic properties",
"Physical quantities",
"Thermodynamic processes",
"Quantity",
"Enthalpy",
"Thermodynamics",
"Physical chemistry stubs"
] |
19,609,885 | https://en.wikipedia.org/wiki/Friedel%27s%20salt | Friedel's salt is an anion exchanger mineral belonging to the family of the layered double hydroxides (LDHs). It has affinity for anions as chloride and iodide and is capable of retaining them to a certain extent in its crystallographical structure.
Composition
Friedel's salt is a layered double hydroxide (LDH) of general formula:
or more explicitly for a positively-charged LDH mineral:
or by directly incorporating water molecules into the Ca,Al hydroxide layer:
where chloride and hydroxide anions occupy the interlayer to compensate the excess of positive charges.
In the cement chemist notation (CCN), considering that
and doubling all the stoichiometry, it could also be written in CCN as follows:
A simplified chemical composition with only Cl– in the interlayer, and without OH–, as:
can be also written in cement chemist notation as:
Friedel's salt is formed in cements initially rich in tri-calcium aluminate (C3A). Free-chloride ions directly bind with the AFm hydrates (C4AH13 and its derivatives) to form Friedel's salt.
Importance of chloride binding in AFm phases
Friedel's salt plays a main role in the binding and retention of chloride anions in cement and concrete. However, Friedel's salt remains a poorly understood phase in the CaO–Al2O3–CaCl2–H2O system. A sufficient understanding of the Friedel's salt system is essential to correctly model the reactive transport of chloride ions in reinforced concrete structures affected by chloride attack and steel reinforcement corrosion. It is also important to assess the long-term stability of salt-saturated Portland cement-based grouts to be used in engineering structures exposed to seawater or concentrated brine as it is the case for radioactive waste disposal in deep salt formations.
Another reason to study AFm phases and the Friedel's salt system is their tendency to bind, trap and to immobilise toxic anions, such as , , and , or the long-lived radionuclide 129I−, in cementitious materials. Their characterization is important to conceive anion getters and to assess the retention capacity of cementitious buffer and concrete barriers used for radioactive waste disposal.
Chloride sorption and anion exchange in AFm phases
Friedel's salt could be first tentatively represented as an AFm phase in which two chloride ions would have simply replaced one sulfate ion. This conceptual representation based on the intuition of a simple stoichiometric exchange is very convenient to remind but such a simple mechanism likely does not directly occur and must be considered with caution:
Indeed, the reality appears to be more complex than such a simple stoichiometric exchange between chloride and sulfate ions in the AFm crystal structure. In fact, it seems that chloride ions are electrostatically sorbed onto the positively charged [Ca2Al(OH)6 · 2H2O]+ layer of AFm hydrate, or could also exchange with hydroxide ions (OH–) also present in the interlayer. So, the simple and "apparent" exchange reaction first presented here above for the sake of ease does not correspond to the reality and is an oversimplified representation.
Similarly, Kuzel's salt could seem to be formed when only one Cl– ion exchanges with SO42– in AFm (half substitution of sulfate ions):
Glasser et al. (1999) proposed to name this half-substituted salt in honor of its discoverer: Hans-Jürgen Kuzel.
However, Mesbah et al. (2011) have identified two different types of interlayers in the crystallographic structure they have determined and it precludes the common anion exchange reaction presented here above as stated by the authors themselves in their conclusions:
Kuzel's salt is a two-stage layered compound with two distinct interlayers, which are alternatively filled by chloride anions only (for one kind of interlayer) and by sulfate anions and water molecules (for the other kind of interlayer). Kuzel's salt structure is composed of the perfect intercalation of the Friedel's salt structure and the monosulfoaluminate structure (the two end-members of the studied bi-anionic AFm compound). The structural properties of Kuzel's salt explain the absence of extended chloride to sulfate or sulfate to chloride substitution.
The staging feature of Kuzel's salt certainly explains the difficulties to substitute chloride and sulfate: the modification in one kind of interlayer involves a modification in the other kind of interlayer in order to preserve the electroneutrality of the compound. The two-stage feature of Kuzel's salt implies that each interlayer should be mono-anionic.
So, if the global chemical composition of Friedel's salt and Kuzel's salt corresponds well respectively with the stoichiometry of a complete substitution, or a half substitution, of sulfate ions by chloride ions in the crystal structure of AFm, it does not tell directly anything on the exact mechanism of anion substitution in this complicated system. Only detailed and well controlled chloride sorption, or anion exchange, experiments with a complete analysis of all the dissolved species present in aqueous solution (also including OH–, Na+ and Ca2+ ions) can decipher the system.
Discovery
Friedel's salt discovery is relatively difficult to trace back in the recent literature, simply because it is an ancient finding concerning a poorly known, non-natural product. It was synthesised and identified in 1897 by Georges Friedel, a mineralogist and crystallographer, son of the famous French chemist Charles Friedel. Georges Friedel also synthesised calcium aluminate (1903) in the framework of his work on the theory of macles (twin crystals). This point requires further verification.
Formation
Relation with Tricalcium aluminate.
Incorporation of chloride.
Solid solutions.
See also
AFm phases
Aluminium chlorohydrate
Cement
Sorel cement, a mixture of general formula: Mg4Cl2(OH)6
Stanislas Sorel, a French engineer who made a new form of cement from a combination of magnesium oxide and magnesium chloride
Concrete
Salt-concrete, also known as salzbeton
Chloride
Layered double hydroxides
Tricalcium aluminate
Friedel-Crafts reaction
Friedel family, a rich lineage of French scientists:
Charles Friedel (1832–1899), French chemist known for the Friedel-Crafts reaction
Georges Friedel (1865–1933), here above mentioned, French crystallographer and mineralogist; son of Charles
Edmond Friedel (1895–1972), French Polytechnician and mining engineer, founder of BRGM, the French geological survey; son of Georges
Jacques Friedel (1921–2014), French physicist; son of Edmond (see the French Wikipedia article on Jacques Friedel)
References
Further reading
External links
Friedel's salt, Ca2Al(OH)6 (Cl, OH) · 2H2O: Its solid solutions and their role in chloride binding
Glossary of cement concrete and SEM terminology
Aluminates
Calcium compounds
Cement
Concrete
Crystallography
Hydrates
Hydroxides
Materials | Friedel's salt | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,499 | [
"Structural engineering",
"Hydroxides",
"Hydrates",
"Materials science",
"Crystallography",
"Materials",
"Condensed matter physics",
"Concrete",
"Bases (chemistry)",
"Matter"
] |
19,618,101 | https://en.wikipedia.org/wiki/K-Poincar%C3%A9%20group | In physics and mathematics, the κ-Poincaré group, named after Henri Poincaré, is a quantum group, obtained by deformation of the Poincaré group into a Hopf algebra.
It is generated by the elements and with the usual constraint:
where is the Minkowskian metric:
The commutation rules read:
In the (1 + 1)-dimensional case the commutation rules between and are particularly simple. The Lorentz generator in this case is:
and the commutation rules read:
The coproducts are classical, and encode the group composition law:
Also the antipodes and the counits are classical, and represent the group inversion law and the map to the identity:
The κ-Poincaré group is the dual Hopf algebra to the K-Poincaré algebra, and can be interpreted as its “finite” version.
References
Hopf algebras
Mathematical physics | K-Poincaré group | [
"Physics",
"Mathematics"
] | 189 | [
"Algebra stubs",
"Applied mathematics",
"Theoretical physics",
"Mathematical physics",
"Algebra"
] |
19,620,471 | https://en.wikipedia.org/wiki/Airy%20wave%20theory | In fluid dynamics, Airy wave theory (often referred to as linear wave theory) gives a linearised description of the propagation of gravity waves on the surface of a homogeneous fluid layer. The theory assumes that the fluid layer has a uniform mean depth, and that the fluid flow is inviscid, incompressible and irrotational. This theory was first published, in correct form, by George Biddell Airy in the 19th century.
Airy wave theory is often applied in ocean engineering and coastal engineering for the modelling of random sea states – giving a description of the wave kinematics and dynamics of high-enough accuracy for many purposes. Further, several second-order nonlinear properties of surface gravity waves, and their propagation, can be estimated from its results. Airy wave theory is also a good approximation for tsunami waves in the ocean, before they steepen near the coast.
This linear theory is often used to get a quick and rough estimate of wave characteristics and their effects. This approximation is accurate for small ratios of the wave height to water depth (for waves in shallow water), and wave height to wavelength (for waves in deep water).
Description
Airy wave theory uses a potential flow (or velocity potential) approach to describe the motion of gravity waves on a fluid surface. The use of (inviscid and irrotational) potential flow in water waves is remarkably successful, given its failure to describe many other fluid flows where it is often essential to take viscosity, vorticity, turbulence or flow separation into account. This is due to the fact that for the oscillatory part of the fluid motion, wave-induced vorticity is restricted to some thin oscillatory Stokes boundary layers at the boundaries of the fluid domain.
Airy wave theory is often used in ocean engineering and coastal engineering. Especially for random waves, sometimes called wave turbulence, the evolution of the wave statistics – including the wave spectrum – is predicted well over not too long distances (in terms of wavelengths) and in not too shallow water. Diffraction is one of the wave effects which can be described with Airy wave theory. Further, by using the WKBJ approximation, wave shoaling and refraction can be predicted.
Earlier attempts to describe surface gravity waves using potential flow were made by, among others, Laplace, Poisson, Cauchy and Kelland. But Airy was the first to publish the correct derivation and formulation in 1841. Soon after, in 1847, the linear theory of Airy was extended by Stokes for non-linear wave motion – known as Stokes' wave theory – correct up to third order in the wave steepness. Even before Airy's linear theory, Gerstner derived a nonlinear trochoidal wave theory in 1802, which however is not irrotational.
Airy wave theory is a linear theory for the propagation of waves on the surface of a potential flow and above a horizontal bottom. The free surface elevation of one wave component is sinusoidal, as a function of horizontal position $x$ and time $t$:
$$\eta(x,t) = a \cos(kx - \omega t)$$
where
$a$ is the wave amplitude in metres,
$\cos$ is the cosine function,
$k$ is the angular wavenumber in radians per metre, related to the wavelength $\lambda$ by $k = 2\pi/\lambda$,
$\omega$ is the angular frequency in radians per second, related to the period $T$ and frequency $f$ by $\omega = 2\pi/T = 2\pi f$.
The waves propagate along the water surface with the phase speed $c_p$:
$$c_p = \frac{\omega}{k} = \frac{\lambda}{T}.$$
The angular wavenumber and frequency are not independent parameters (and thus also wavelength and period are not independent), but are coupled. Surface gravity waves on a fluid are dispersive waves – exhibiting frequency dispersion – meaning that each wavenumber has its own frequency and phase speed.
Note that in engineering the wave height $H$ – the difference in elevation between crest and trough – is often used:
$$H = 2a,$$
valid in the present case of linear periodic waves.
Underneath the surface, there is a fluid motion associated with the free surface motion. While the surface elevation shows a propagating wave, the fluid particles are in an orbital motion. Within the framework of Airy wave theory, the orbits are closed curves: circles in deep water and ellipses in finite depth—with the circles dying out before reaching the bottom of the fluid layer, and the ellipses becoming flatter near the bottom of the fluid layer. So while the wave propagates, the fluid particles just orbit (oscillate) around their average position. With the propagating wave motion, the fluid particles transfer energy in the wave propagation direction, without having a mean velocity. The diameter of the orbits reduces with depth below the free surface. In deep water, the orbit's diameter is reduced to 4% of its free-surface value at a depth of half a wavelength.
In a similar fashion, there is also a pressure oscillation underneath the free surface, with wave-induced pressure oscillations reducing with depth below the free surface – in the same way as for the orbital motion of fluid parcels.
Mathematical formulation of the wave motion
Flow problem formulation
The waves propagate in the horizontal direction, with coordinate $x$, and a fluid domain bounded above by a free surface at $z = \eta(x,t)$, with $z$ the vertical coordinate (positive in the upward direction) and $t$ being time. The level $z = 0$ corresponds with the mean surface elevation. The impermeable bed underneath the fluid layer is at $z = -h$. Further, the flow is assumed to be incompressible and irrotational – a good approximation of the flow in the fluid interior for waves on a liquid surface – and potential theory can be used to describe the flow. The velocity potential $\Phi(x,z,t)$ is related to the flow velocity components $u_x$ and $u_z$ in the horizontal ($x$) and vertical ($z$) directions by:
$$u_x = \frac{\partial \Phi}{\partial x} \quad \text{and} \quad u_z = \frac{\partial \Phi}{\partial z}.$$
Then, due to the continuity equation for an incompressible flow, the potential $\Phi$ has to satisfy the Laplace equation:
$$\frac{\partial^2 \Phi}{\partial x^2} + \frac{\partial^2 \Phi}{\partial z^2} = 0. \qquad (1)$$
Boundary conditions are needed at the bed and the free surface in order to close the system of equations. For their formulation within the framework of linear theory, it is necessary to specify what the base state (or zeroth-order solution) of the flow is. Here, we assume the base state is rest, implying the mean flow velocities are zero.
The bed being impermeable leads to the kinematic bed boundary-condition:
$$\frac{\partial \Phi}{\partial z} = 0 \quad \text{at } z = -h. \qquad (3)$$
In case of deep water – by which is meant infinite water depth, from a mathematical point of view – the flow velocities have to go to zero in the limit as the vertical coordinate goes to minus infinity: $z \to -\infty$.
At the free surface, for infinitesimal waves, the vertical motion of the flow has to be equal to the vertical velocity of the free surface. This leads to the kinematic free-surface boundary-condition:
$$\frac{\partial \eta}{\partial t} = \frac{\partial \Phi}{\partial z} \quad \text{at } z = 0. \qquad (2)$$
If the free surface elevation $\eta(x,t)$ were a known function, this would be enough to solve the flow problem. However, the surface elevation is an extra unknown, for which an additional boundary condition is needed. This is provided by Bernoulli's equation for an unsteady potential flow. The pressure above the free surface is assumed to be constant. This constant pressure is taken equal to zero, without loss of generality, since the level of such a constant pressure does not alter the flow. After linearisation, this gives the dynamic free-surface boundary condition:
$$\frac{\partial \Phi}{\partial t} + g\,\eta = 0 \quad \text{at } z = 0. \qquad (4)$$
Because this is a linear theory, in both free-surface boundary conditions – the kinematic and the dynamic one, equations (2) and (4) – the value of $\Phi$ and $\partial\Phi/\partial z$ at the fixed mean level $z = 0$ is used.
Solution for a progressive monochromatic wave
For a propagating wave of a single frequency – a monochromatic wave – the surface elevation is of the form:
$$\eta(x,t) = a \cos(kx - \omega t).$$
The associated velocity potential, satisfying the Laplace equation (1) in the fluid interior, as well as the kinematic boundary conditions at the free surface (2), and bed (3), is:
$$\Phi(x,z,t) = \frac{a\,\omega}{k}\,\frac{\cosh\bigl(k\,(z+h)\bigr)}{\sinh(k\,h)}\,\sin(kx - \omega t),$$
with $\sinh$ and $\cosh$ the hyperbolic sine and hyperbolic cosine function, respectively. But $\eta$ and $\Phi$ also have to satisfy the dynamic boundary condition, which results in non-trivial (non-zero) values for the wave amplitude $a$ only if the linear dispersion relation is satisfied:
$$\omega^2 = g\,k\,\tanh(k\,h),$$
with $\tanh$ the hyperbolic tangent. So the angular frequency $\omega$ and the wavenumber $k$ – or equivalently the period $T$ and the wavelength $\lambda$ – cannot be chosen independently, but are related. This means that wave propagation at a fluid surface is an eigenproblem. When $\omega$ and $k$ satisfy the dispersion relation, the wave amplitude $a$ can be chosen freely (but small enough for Airy wave theory to be a valid approximation).
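Because the dispersion relation is transcendental in the wavenumber, it is usually solved numerically when the period and depth are given. A minimal sketch in plain Python (Newton iteration started from the deep-water estimate; the period and depth are arbitrary example values):

```python
import math

def wavenumber(omega, depth, g=9.81, tol=1e-12):
    """Solve omega**2 = g*k*tanh(k*depth) for k > 0 by Newton iteration."""
    k = omega**2 / g                      # deep-water limit as initial guess
    for _ in range(100):
        f = g * k * math.tanh(k * depth) - omega**2
        df = g * math.tanh(k * depth) + g * k * depth / math.cosh(k * depth)**2
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

T, h = 8.0, 10.0                          # wave period [s] and water depth [m]
omega = 2 * math.pi / T
k = wavenumber(omega, h)
print("wavelength:", 2 * math.pi / k, "m")
print("phase speed:", omega / k, "m/s")
```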
Table of wave quantities
In the table below, several flow quantities and parameters according to Airy wave theory are given. The given quantities are for a somewhat more general situation than for the solution given above. Firstly, the waves may propagate in an arbitrary horizontal direction in the $(x,y)$ plane. The wavenumber vector is $\mathbf{k}$, and is perpendicular to the lines of the wave crests. Secondly, allowance is made for a mean flow velocity $\mathbf{U}$, in the horizontal direction and uniform over (independent of) depth. This introduces a Doppler shift in the dispersion relations. At an Earth-fixed location, the observed angular frequency (or absolute angular frequency) is $\omega$. On the other hand, in a frame of reference moving with the mean velocity $\mathbf{U}$ (so the mean velocity as observed from this reference frame is zero), the angular frequency is different. It is called the intrinsic angular frequency (or relative angular frequency), denoted $\sigma$. So in pure wave motion, with $\mathbf{U} = \mathbf{0}$, both frequencies $\omega$ and $\sigma$ are equal. The wave number $k$ (and wavelength $\lambda$) are independent of the frame of reference, and have no Doppler shift (for monochromatic waves).
The table only gives the oscillatory parts of flow quantities – velocities, particle excursions and pressure – and not their mean value or drift.
The oscillatory particle excursions $\xi_x$ and $\xi_z$ are the time integrals of the oscillatory flow velocities $u_x$ and $u_z$, respectively.
Water depth is classified into three regimes:
deep water – for a water depth larger than half the wavelength, $h > \tfrac{1}{2}\lambda$, the phase speed of the waves is hardly influenced by depth (this is the case for most wind waves on the sea and ocean surface),
shallow water – for a water depth smaller than 5% of the wavelength, $h < \tfrac{1}{20}\lambda$, the phase speed of the waves is only dependent on water depth, and no longer a function of period or wavelength; and
intermediate depth – all other cases, $\tfrac{1}{20}\lambda < h < \tfrac{1}{2}\lambda$, where both water depth and period (or wavelength) have a significant influence on the solution of Airy wave theory.
In the limiting cases of deep and shallow water, simplifying approximations to the solution can be made. While for intermediate depth, the full formulations have to be used.
Surface tension effects
Due to surface tension, the dispersion relation changes to:
$$\omega^2 = \left( g + \frac{\gamma}{\rho}\,k^2 \right) k\,\tanh(k\,h),$$
with $\gamma$ the surface tension in newtons per metre and $\rho$ the fluid density. All above equations for linear waves remain the same, if the gravitational acceleration $g$ is replaced by
$$\tilde{g} = g + \frac{\gamma}{\rho}\,k^2.$$
As a result of surface tension, the waves propagate faster. Surface tension only has influence for short waves, with wavelengths less than a few decimeters in case of a water–air interface. For very short wavelengths – 2 mm or less, in case of the interface between air and water – gravity effects are negligible. Note that surface tension can be altered by surfactants.
The group velocity $c_g$ of capillary waves – dominated by surface tension effects – is greater than the phase velocity $c_p$. This is opposite to the situation of surface gravity waves (with surface tension negligible compared to the effects of gravity) where the phase velocity exceeds the group velocity.
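A worked consequence of the modified dispersion relation, in the deep-water limit: the phase speed $c^2 = g/k + \gamma k/\rho$ has a minimum where the gravity and surface-tension terms balance, at $k_m = \sqrt{\rho g/\gamma}$. Taking clean water as an illustrative case ($\gamma \approx 0.074\ \mathrm{N/m}$, $\rho \approx 1000\ \mathrm{kg/m^3}$):

```latex
k_m=\sqrt{\frac{\rho g}{\gamma}}\approx 3.6\times 10^{2}\ \mathrm{m^{-1}}
\;\Rightarrow\; \lambda_m=\frac{2\pi}{k_m}\approx 1.7\ \mathrm{cm},
\qquad
c_{\min}=\left(\frac{4\,g\,\gamma}{\rho}\right)^{1/4}\approx 0.23\ \mathrm{m/s}.
```

Waves much shorter than $\lambda_m$ are dominated by surface tension (capillary waves), while longer waves are dominated by gravity.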
Interfacial waves
Surface waves are a special case of interfacial waves, on the interface between two fluids of different density.
Two layers of infinite depth
Consider two fluids separated by an interface, and without further boundaries. Then their dispersion relation is given through
$$\omega^2 = \frac{(\rho - \rho')\,g\,k + \gamma\,k^3}{\rho + \rho'},$$
where $\rho$ and $\rho'$ are the densities of the two fluids, below ($\rho$) and above ($\rho'$) the interface, respectively. Further γ is the surface tension on the interface.
For interfacial waves to exist, the lower layer has to be heavier than the upper one, $\rho > \rho'$. Otherwise, the interface is unstable and a Rayleigh–Taylor instability develops.
Two layers between horizontal rigid planes
For two homogeneous layers of fluids, of mean thickness $h$ below the interface and $h'$ above – under the action of gravity and bounded above and below by horizontal rigid walls – the dispersion relationship for gravity waves is provided by:
$$\omega^2 = \frac{g\,k\,(\rho - \rho')}{\rho\,\coth(k\,h) + \rho'\,\coth(k\,h')},$$
where again $\rho$ and $\rho'$ are the densities below and above the interface, while $\coth$ is the hyperbolic cotangent function. For the case $\rho'$ is zero this reduces to the dispersion relation of surface gravity waves on water of finite depth $h$.
Two layers bounded above by a free surface
In this case the dispersion relation allows for two modes: a barotropic mode where the free surface amplitude is large compared with the amplitude of the interfacial wave, and a baroclinic mode where the opposite is the case – the interfacial wave is higher than and in antiphase with the free surface wave. The dispersion relation for this case is of a more complicated form.
Second-order wave properties
Several second-order wave properties, ones that are quadratic in the wave amplitude , can be derived directly from Airy wave theory. They are of importance in many practical applications, such as forecasts of wave conditions. Using a WKBJ approximation, second-order wave properties also find their applications in describing waves in case of slowly varying bathymetry, and mean-flow variations of currents and surface elevation. As well as in the description of the wave and mean-flow interactions due to time and space-variations in amplitude, frequency, wavelength and direction of the wave field itself.
Table of second-order wave properties
In the table below, several second-order wave properties – as well as the dynamical equations they satisfy in case of slowly varying conditions in space and time – are given. More details on these can be found below. The table gives results for wave propagation in one horizontal spatial dimension. Further on in this section, more detailed descriptions and results are given for the general case of propagation in two-dimensional horizontal space.
The last four equations describe the evolution of slowly varying wave trains over bathymetry in interaction with the mean flow, and can be derived from a variational principle: Whitham's averaged Lagrangian method. In the mean horizontal-momentum equation, is the still water depth, that is, the bed underneath the fluid layer is located at . Note that the mean-flow velocity in the mass and momentum equations is the mass transport velocity , including the splash-zone effects of the waves on horizontal mass transport, and not the mean Eulerian velocity (for example, as measured with a fixed flow meter).
Wave energy density
Wave energy is a quantity of primary interest, since it is a primary quantity that is transported with the wave trains. As can be seen above, many wave quantities like surface elevation and orbital velocity are oscillatory in nature with zero mean (within the framework of linear theory). In water waves, the most used energy measure is the mean wave energy density per unit horizontal area. It is the sum of the kinetic and potential energy density, integrated over the depth of the fluid layer and averaged over the wave phase. Simplest to derive is the mean potential energy density per unit horizontal area of the surface gravity waves, which is the deviation of the potential energy due to the presence of the waves:
$$\overline{E}_\text{pot} = \tfrac{1}{2}\,\rho\,g\,\overline{\eta^2} = \tfrac{1}{4}\,\rho\,g\,a^2.$$
The overbar denotes the mean value (which in the present case of periodic waves can be taken either as a time average or an average over one wavelength in space).
The mean kinetic energy density per unit horizontal area of the wave motion is similarly found to be:
$$\overline{E}_\text{kin} = \tfrac{1}{4}\,\rho\,\frac{\sigma^2}{k\,\tanh(k\,h)}\,a^2,$$
with $\sigma$ the intrinsic frequency, see the table of wave quantities. Using the dispersion relation, the result for surface gravity waves is:
$$\overline{E}_\text{kin} = \tfrac{1}{4}\,\rho\,g\,a^2.$$
As can be seen, the mean kinetic and potential energy densities are equal. This is a general property of energy densities of progressive linear waves in a conservative system. Adding potential and kinetic contributions, $\overline{E}_\text{pot}$ and $\overline{E}_\text{kin}$, the mean energy density per unit horizontal area $\overline{E}$ of the wave motion is:
$$\overline{E} = \overline{E}_\text{pot} + \overline{E}_\text{kin} = \tfrac{1}{2}\,\rho\,g\,a^2 = \tfrac{1}{8}\,\rho\,g\,H^2.$$
In case of surface tension effects not being negligible, their contribution also adds to the potential and kinetic energy densities, giving
$$\overline{E}_\text{pot} = \overline{E}_\text{kin} = \tfrac{1}{4}\left(\rho\,g + \gamma\,k^2\right) a^2,$$
so
$$\overline{E} = \tfrac{1}{2}\left(\rho\,g + \gamma\,k^2\right) a^2,$$
with $\gamma$ the surface tension.
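As a rough numerical illustration (assuming a seawater density of about $1025\ \mathrm{kg/m^3}$ and neglecting surface tension), a wave of amplitude $a = 1\ \mathrm{m}$ (height $H = 2\ \mathrm{m}$) carries a mean energy density of roughly

```latex
\overline{E}=\tfrac{1}{2}\,\rho\,g\,a^{2}
  \approx \tfrac{1}{2}\times 1025\times 9.81\times 1^{2}
  \approx 5.0\ \mathrm{kJ/m^{2}}
```

per square metre of sea surface, split equally between kinetic and potential energy.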
Wave action, wave energy flux and radiation stress
In general, there can be an energy transfer between the wave motion and the mean fluid motion. This means, that the wave energy density is not in all cases a conserved quantity (neglecting dissipative effects), but the total energy density – the sum of the energy density per unit area of the wave motion and the mean flow motion – is. However, there is for slowly varying wave trains, propagating in slowly varying bathymetry and mean-flow fields, a similar and conserved wave quantity, the wave action $\mathcal{A} = \overline{E}/\sigma$:
$$\frac{\partial \mathcal{A}}{\partial t} + \nabla \cdot \mathcal{B} = 0,$$
with $\mathcal{B} = \mathcal{A}\left(\mathbf{U} + \mathbf{c}_g\right)$ the action flux and $\mathbf{c}_g$ the group velocity vector. Action conservation forms the basis for many wind wave models and wave turbulence models. It is also the basis of coastal engineering models for the computation of wave shoaling. Expanding the above wave action conservation equation leads to the following evolution equation for the wave energy density:
$$\frac{\partial \overline{E}}{\partial t} + \nabla \cdot \left[ \left( \mathbf{U} + \mathbf{c}_g \right) \overline{E} \right] + \mathbf{S} : \left( \nabla \mathbf{U} \right) = 0,$$
with:
$\left(\mathbf{U} + \mathbf{c}_g\right) \overline{E}$ is the mean wave energy density flux,
$\mathbf{S}$ is the radiation stress tensor and
$\nabla \mathbf{U}$ is the mean-velocity shear rate tensor.
In this equation in non-conservation form, the Frobenius inner product $\mathbf{S} : \left(\nabla \mathbf{U}\right)$ is the source term describing the energy exchange of the wave motion with the mean flow. Only in the case that the mean shear-rate is zero, $\nabla \mathbf{U} = \mathbf{0}$, is the mean wave energy density conserved. The two tensors $\mathbf{S}$ and $\nabla \mathbf{U}$ are in a Cartesian coordinate system of the form:
with $k_x$ and $k_y$ the components of the wavenumber vector $\mathbf{k}$, and similarly $U_x$ and $U_y$ the components of the mean velocity vector $\mathbf{U}$.
Wave mass flux and wave momentum
The mean horizontal momentum per unit area induced by the wave motion – and also the wave-induced mass flux or mass transport – is:
$$\mathbf{M} = \frac{\overline{E}}{\sigma}\,\mathbf{k},$$
which is an exact result for periodic progressive water waves, also valid for nonlinear waves. However, its validity strongly depends on the way how wave momentum and mass flux are defined. Stokes already identified two possible definitions of phase velocity for periodic nonlinear waves:
which is an exact result for periodic progressive water waves, also valid for nonlinear waves. However, its validity strongly depends on the way how wave momentum and mass flux are defined. Stokes already identified two possible definitions of phase velocity for periodic nonlinear waves:
Stokes first definition of wave celerity (S1) – with the mean Eulerian flow velocity equal to zero for all elevations below the wave troughs, and
Stokes second definition of wave celerity (S2) – with the mean mass transport equal to zero.
The above relation between wave momentum and wave energy density is valid within the framework of Stokes' first definition.
However, for waves perpendicular to a coast line or in closed laboratory wave channel, the second definition (S2) is more appropriate. These wave systems have zero mass flux and momentum when using the second definition. In contrast, according to Stokes' first definition (S1), there is a wave-induced mass flux in the wave propagation direction, which has to be balanced by a mean flow in the opposite direction – called the undertow.
So in general, there are quite some subtleties involved. Therefore also the term pseudo-momentum of the waves is used instead of wave momentum.
Mass and momentum evolution equations
For slowly varying bathymetry, wave and mean-flow fields, the evolution of the mean flow can be described in terms of the mean mass-transport velocity $\tilde{\mathbf{U}}$ defined as:
$$\tilde{\mathbf{U}} = \mathbf{U} + \frac{\mathbf{M}}{\rho\,h}.$$
Note that for deep water, when the mean depth goes to infinity, the mean Eulerian velocity and mean transport velocity become equal.
The equation for mass conservation is:
where is the mean water depth, slowly varying in space and time.
Similarly, the mean horizontal momentum evolves as:
with the still-water depth (the sea bed is at ), is the wave radiation-stress tensor, is the identity matrix and is the dyadic product:
Note that mean horizontal momentum is only conserved if the sea bed is horizontal (the still-water depth is a constant), in agreement with Noether's theorem.
The system of equations is closed through the description of the waves. Wave energy propagation is described through the wave-action conservation equation (without dissipation and nonlinear wave interactions):
The wave kinematics are described through the wave-crest conservation equation:
with the angular frequency a function of the (angular) wavenumber , related through the dispersion relation. For this to be possible, the wave field must be coherent. By taking the curl of the wave-crest conservation, it can be seen that an initially irrotational wavenumber field stays irrotational.
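In practice, closing this system requires solving the dispersion relation for the local wavenumber as the depth varies. A minimal sketch, assuming the linear dispersion relation $\omega^2 = g k \tanh(kh)$ (names and parameter values are illustrative), inverts it with Newton iteration:

```python
# Solve omega**2 = g*k*tanh(k*h) for k, given omega and depth h.
import math

def wavenumber(omega, h, g=9.81, tol=1e-12, max_iter=100):
    k = omega**2 / g                      # deep-water first guess
    for _ in range(max_iter):
        f = g * k * math.tanh(k * h) - omega**2
        df = g * math.tanh(k * h) + g * k * h / math.cosh(k * h)**2
        k_next = k - f / df               # Newton step
        if abs(k_next - k) < tol:
            return k_next
        k = k_next
    return k

# A 10 s period wave shoaling from deep water onto a 5 m deep shelf:
omega = 2.0 * math.pi / 10.0
for h in (1000.0, 50.0, 5.0):
    print(h, wavenumber(omega, h))        # k grows as the water shallows
```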
Stokes drift
When following a single particle in pure wave motion ($\boldsymbol{U} = 0$), according to linear Airy wave theory, a first approximation gives closed elliptical orbits for water particles. However, for nonlinear waves, particles exhibit a Stokes drift, for which a second-order expression can be derived from the results of Airy wave theory (see the table above on second-order wave properties). The Stokes drift velocity $\bar{\boldsymbol{u}}_S$, which is the particle drift after one wave cycle divided by the period, can be estimated using the results of linear theory:
$\bar{\boldsymbol{u}}_S = \tfrac{1}{2}\,\sigma\,k\,a^2\,\dfrac{\cosh 2k(z+h)}{\sinh^2 (kh)}\,\dfrac{\boldsymbol{k}}{k},$
so it varies as a function of elevation $z$. The given formula is for Stokes' first definition of wave celerity. When $\bar{\boldsymbol{u}}_S$ is integrated over depth, the expression for the mean wave momentum $\boldsymbol{M}$ is recovered.
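The drift profile is easy to evaluate numerically. A short sketch (assuming the linear-theory expression quoted above; all values are illustrative) shows how quickly the drift decays with depth:

```python
# Stokes drift u_S(z) = (1/2)*sigma*k*a**2 * cosh(2k(z+h)) / sinh(kh)**2
import math

def stokes_drift(z, a, k, h, g=9.81):
    sigma = math.sqrt(g * k * math.tanh(k * h))   # intrinsic frequency
    return 0.5 * sigma * k * a**2 * math.cosh(2.0 * k * (z + h)) / math.sinh(k * h)**2

a, k, h = 1.0, 2.0 * math.pi / 60.0, 20.0         # 1 m amplitude, 60 m wave, 20 m depth
for z in (0.0, -5.0, -10.0, -20.0):
    print(z, stokes_drift(z, a, k, h))            # largest at the surface z = 0
```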
See also
Boussinesq approximation (water waves) – nonlinear theory for waves in shallow water.
Capillary wave – surface waves under the action of surface tension
Cnoidal wave – nonlinear periodic waves in shallow water, solutions of the Korteweg–de Vries equation
Mild-slope equation – refraction and diffraction of surface waves over varying depth
Ocean surface wave – real water waves as seen in the ocean and sea
Stokes wave – nonlinear periodic waves in non-shallow water
Wave power – using ocean and sea waves for power generation.
Notes
References
Historical
Also: "Trigonometry, On the Figure of the Earth, Tides and Waves", 396 pp.
Reprinted in:
Further reading
External links
Linear theory of ocean surface waves on WikiWaves.
Water waves at MIT.
Water waves
Wave mechanics | Airy wave theory | [
"Physics",
"Chemistry"
] | 4,507 | [
"Physical phenomena",
"Water waves",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Fluid dynamics"
] |
20,800,027 | https://en.wikipedia.org/wiki/Slip%20line%20field | In materials science and soil mechanics, a slip line field or slip line field theory is a technique often used to analyze the stresses and forces involved in the major deformation of metals or soils. In essence, in some problems, including plane strain and plane stress elastic-plastic problems, the elastic part of the material prevents unrestrained plastic flow, but in many metal-forming processes, such as rolling, drawing, forging, etc., large unrestricted plastic flows occur except for many small elastic zones. In effect, we are concerned with a rigid-plastic material under conditions of plane strain. It turns out that the simplest way of solving the stress equations is to express them in terms of a coordinate system aligned with potential slip (or failure) surfaces. It is for this reason that this type of analysis is termed slip line analysis or the theory of slip line fields in the literature.
History
The slip-line theory was co-developed by Hilda Geiringer in the early 1930s. She developed the Geiringer equations, which simplify the process of calculating the deformation.
References
Solid mechanics
Deformation (mechanics)
Metallurgy | Slip line field | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 223 | [
"Solid mechanics",
"Deformation (mechanics)",
"Metallurgy",
"Materials science",
"Plasticity (physics)",
"Mechanics",
"nan",
"Mechanical engineering",
"Mechanical engineering stubs"
] |
20,809,698 | https://en.wikipedia.org/wiki/Rotational%20Brownian%20motion%20%28astronomy%29 | In astronomy, rotational Brownian motion is the random walk in orientation of a binary star's orbital plane, induced by gravitational perturbations from passing stars.
Theory
Consider a binary that consists of two massive objects (stars, black holes etc.) and that is embedded in a stellar system containing a large number of stars. Let $m_1$ and $m_2$ be the masses of the two components of the binary, whose total mass is $m_{12} = m_1 + m_2$. A field star that approaches the binary with impact parameter $p$ and velocity $V$ passes a distance $r_p$ from the binary, where
$r_p \approx \dfrac{p^2 V^2}{2\,G\,m_{12}};$
the latter expression is valid in the limit that gravitational focusing dominates the encounter rate. The rate of encounters with stars that interact strongly with the binary, i.e. that satisfy $r_p \lesssim a$, is approximately $2\pi G m_{12}\,n\,a/\sigma$, where $n$ and $\sigma$ are the number density and velocity dispersion of the field stars and $a$ is the semi-major axis of the binary.
As it passes near the binary, the field star experiences a change in velocity of order
$\delta V \approx V_{\text{bin}},$
where $V_{\text{bin}}$ is the relative velocity of the two stars in the binary.
The change in the field star's specific angular momentum with respect to the binary, $l$, is then $\Delta l \approx a\,V_{\text{bin}}$. Conservation of angular momentum implies that the binary's angular momentum changes by $\Delta l_{\text{bin}} \approx -(m/\mu_{12})\,\Delta l$, where $m$ is the mass of a field star and $\mu_{12} = m_1 m_2 / m_{12}$ is the binary reduced mass. Changes in the magnitude of $l_{\text{bin}}$ correspond to changes in the binary's orbital eccentricity $e$ via the relation $e^2 = 1 - l_{\text{bin}}^2 / (G\,m_{12}\,a)$, with $l_{\text{bin}}$ the binary's angular momentum per unit reduced mass. Changes in the direction of $l_{\text{bin}}$ correspond to changes in the orientation of the binary, leading to rotational diffusion. Combining the per-encounter fractional change, $(\Delta l_{\text{bin}}/l_{\text{bin}})^2 \approx (m/\mu_{12})^2$, with the encounter rate above gives a rotational diffusion coefficient of order
$\langle \Delta\xi^2 \rangle \approx 2\pi\,\dfrac{m\,m_{12}}{\mu_{12}^2}\,\dfrac{G\,\rho\,a}{\sigma},$
where $\rho = mn$ is the mass density of field stars.
Let $F(\theta, t)$ be the probability that the rotation axis of the binary is oriented at angle $\theta$ at time $t$. The evolution equation for $F$ is
$\dfrac{\partial F}{\partial t} = \dfrac{\langle \Delta\xi^2 \rangle}{4}\,\dfrac{1}{\sin\theta}\,\dfrac{\partial}{\partial\theta}\left( \sin\theta\,\dfrac{\partial F}{\partial\theta} \right).$
If $\langle \Delta\xi^2 \rangle$, $a$, $\rho$ and $\sigma$ are constant in time, this becomes
$\dfrac{\partial F}{\partial \tau} = \dfrac{\partial}{\partial\mu}\left[ \left(1 - \mu^2\right) \dfrac{\partial F}{\partial\mu} \right],$
where $\mu = \cos\theta$ and $\tau$ is the time in units of the relaxation time $t_{\text{rel}}$, where
$t_{\text{rel}} = \dfrac{4}{\langle \Delta\xi^2 \rangle}.$
The solution to this equation states that the expectation value of $\mu$ decays with time as
$\langle \mu \rangle = \mu_0\,e^{-2\tau}.$
Hence, $t_{\text{rel}}$ is the time constant for the binary's orientation to be randomized by torques from field stars.
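The exponential randomization can be illustrated with a small Monte Carlo experiment. The sketch below (our own illustration, not from the article) applies many independent small random rotations to an ensemble of binary orientation vectors and tracks the mean of cos θ, which decays exponentially with the accumulated orientation change, as in the solution above:

```python
# Rotational diffusion of an orientation vector under small random kicks.
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_steps, kick = 4000, 400, 0.05            # rms kick angle per encounter
L = np.tile([0.0, 0.0, 1.0], (n_ens, 1))          # all binaries start aligned with z

for _ in range(n_steps):
    axes = rng.normal(size=(n_ens, 3))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    phi = rng.normal(scale=kick, size=(n_ens, 1))
    L = L + phi * np.cross(axes, L)               # first-order (small-angle) rotation
    L /= np.linalg.norm(L, axis=1, keepdims=True)

print(np.mean(L[:, 2]))   # <cos theta>: below 1 and shrinking as kicks accumulate
```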
Applications
Rotational Brownian motion was first discussed in the context of binary supermassive black holes at the centers of galaxies. Perturbations from passing stars can alter the orbital plane of such a binary, which in turn alters the direction of the spin axis of the single black hole that forms when the two coalesce.
Rotational Brownian motion is often observed in N-body simulations of galaxies containing binary black holes. The massive binary sinks to the center of the galaxy via dynamical friction where it interacts with passing stars. The same gravitational perturbations that induce a random walk in the orientation of the binary, also cause the binary to shrink, via the gravitational slingshot. It can be shown that the rms change in the binary's orientation, from the time the binary forms until the two black holes collide, is roughly
In a real galaxy, the two black holes would eventually coalesce due to emission of gravitational waves. The spin axis of the coalesced hole will be aligned with the angular momentum axis of the orbit of the pre-existing binary. Hence, a mechanism like rotational Brownian motion that affects the orbits of binary black holes can also affect the distribution of black hole spins. This may explain in part why the spin axes of supermassive black holes appear to be randomly aligned with respect to their host galaxies.
References
External links
Gravitational Scattering Review article on the dynamics of encounters between binaries and single stars.
Astrophysics
Celestial mechanics
Supermassive black holes | Rotational Brownian motion (astronomy) | [
"Physics",
"Astronomy"
] | 759 | [
"Black holes",
"Unsolved problems in physics",
"Classical mechanics",
"Supermassive black holes",
"Astrophysics",
"Celestial mechanics",
"Astronomical sub-disciplines"
] |
778,700 | https://en.wikipedia.org/wiki/Laws%20of%20thermodynamics | The laws of thermodynamics are a set of scientific laws which define a group of physical quantities, such as temperature, energy, and entropy, that characterize thermodynamic systems in thermodynamic equilibrium. The laws also use various parameters for thermodynamic processes, such as thermodynamic work and heat, and establish relationships between them. They state empirical facts that form a basis of precluding the possibility of certain phenomena, such as perpetual motion. In addition to their use in thermodynamics, they are important fundamental laws of physics in general and are applicable in other natural sciences.
Traditionally, thermodynamics has recognized three fundamental laws, simply named by an ordinal identification, the first law, the second law, and the third law. A more fundamental statement was later labelled as the zeroth law after the first three laws had been established.
The zeroth law of thermodynamics defines thermal equilibrium and forms a basis for the definition of temperature: if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
The first law of thermodynamics states that, when energy passes into or out of a system (as work, heat, or matter), the system's internal energy changes in accordance with the law of conservation of energy.
The second law of thermodynamics states that in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems never decreases. A common corollary of the statement is that heat does not spontaneously pass from a colder body to a warmer body.
The third law of thermodynamics states that a system's entropy approaches a constant value as the temperature approaches absolute zero. With the exception of non-crystalline solids (glasses), the entropy of a system at absolute zero is typically close to zero.
The first and second laws prohibit two kinds of perpetual motion machines, respectively: the perpetual motion machine of the first kind which produces work with no energy input, and the perpetual motion machine of the second kind which spontaneously converts thermal energy into mechanical work.
History
The history of thermodynamics is fundamentally interwoven with the history of physics and the history of chemistry, and ultimately dates back to theories of heat in antiquity. The laws of thermodynamics are the result of progress made in this field over the nineteenth and early twentieth centuries. The first established thermodynamic principle, which eventually became the second law of thermodynamics, was formulated by Sadi Carnot in 1824 in his book Reflections on the Motive Power of Fire. By 1860, as formalized in the works of scientists such as Rudolf Clausius and William Thomson, what are now known as the first and second laws were established. Later, Nernst's theorem (or Nernst's postulate), which is now known as the third law, was formulated by Walther Nernst over the period 1906–1912. While the numbering of the laws is universal today, various textbooks throughout the 20th century have numbered the laws differently. In some fields, the second law was considered to deal with the efficiency of heat engines only, whereas what was called the third law dealt with entropy increases. Gradually, this resolved itself and a zeroth law was later added to allow for a self-consistent definition of temperature. Additional laws have been suggested, but have not achieved the generality of the four accepted laws, and are generally not discussed in standard textbooks.
Zeroth law
The zeroth law of thermodynamics provides for the foundation of temperature as an empirical parameter in thermodynamic systems and establishes the transitive relation between the temperatures of multiple bodies in thermal equilibrium. The law may be stated in the following form: if two systems are both in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
Though this version of the law is one of the most commonly stated versions, it is only one of a diversity of statements that are labeled as "the zeroth law". Some statements go further, so as to supply the important physical fact that temperature is one-dimensional and that one can conceptually arrange bodies in a real number sequence from colder to hotter.
These concepts of temperature and of thermal equilibrium are fundamental to thermodynamics and were clearly stated in the nineteenth century. The name 'zeroth law' was invented by Ralph H. Fowler in the 1930s, long after the first, second, and third laws were widely recognized. The law allows the definition of temperature in a non-circular way without reference to entropy, its conjugate variable. Such a temperature definition is said to be 'empirical'.
First law
The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic processes. In general, the conservation law states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but can be neither created nor destroyed.
For processes that include the transfer of matter, a further statement is needed.
The First Law encompasses several principles:
Conservation of energy, which says that energy can be neither created nor destroyed, but can only change form. A particular consequence of this is that the total energy of an isolated system does not change.
The concept of internal energy and its relationship to temperature. If a system has a definite temperature, then its total energy has three distinguishable components, termed kinetic energy (energy due to the motion of the system as a whole), potential energy (energy resulting from an externally imposed force field), and internal energy. The establishment of the concept of internal energy distinguishes the first law of thermodynamics from the more general law of conservation of energy.
Work is a process of transferring energy to or from a system in ways that can be described by macroscopic mechanical forces acting between the system and its surroundings. The work done by the system can come from its overall kinetic energy, from its overall potential energy, or from its internal energy. For example, when a machine (not a part of the system) lifts a system upwards, some energy is transferred from the machine to the system. The system's energy increases as work is done on the system and in this particular case, the energy increase of the system is manifested as an increase in the system's gravitational potential energy. Work added to the system increases the potential energy of the system.
When matter is transferred into a system, the internal energy and potential energy associated with it are transferred into the new combined system: $\Delta U_{\text{matter}} = u\,\Delta M$, where $u$ denotes the internal energy per unit mass of the transferred matter, as measured while in the surroundings, and $\Delta M$ denotes the amount of transferred mass.
The flow of heat is a form of energy transfer. Heat transfer is the natural process of moving energy to or from a system, other than by work or the transfer of matter. In a diathermal system, the internal energy can only be changed by the transfer of energy as heat: $\Delta U = Q$.
Combining these principles leads to one traditional statement of the first law of thermodynamics: it is not possible to construct a machine which will perpetually output work without an equal amount of energy input to that machine. Or more briefly, a perpetual motion machine of the first kind is impossible.
Second law
The second law of thermodynamics indicates the irreversibility of natural processes, and in many cases, the tendency of natural processes to lead towards spatial homogeneity of matter and energy, especially of temperature. It can be formulated in a variety of interesting and important ways. One of the simplest is the Clausius statement, that heat does not spontaneously pass from a colder to a hotter body.
It implies the existence of a quantity called the entropy of a thermodynamic system. In terms of this quantity it implies that
The second law is applicable to a wide variety of processes, both reversible and irreversible. According to the second law, in a reversible heat transfer, an element of heat transferred, $\delta Q$, is the product of the temperature ($T$), both of the system and of the sources or destination of the heat, with the increment ($dS$) of the system's conjugate variable, its entropy ($S$):
$\delta Q = T\,dS.$
While reversible processes are a useful and convenient theoretical limiting case, all natural processes are irreversible. A prime example of this irreversibility is the transfer of heat by conduction or radiation. It was known long before the discovery of the notion of entropy that when two bodies, initially of different temperatures, come into direct thermal connection, then heat immediately and spontaneously flows from the hotter body to the colder one.
Entropy may also be viewed as a physical measure concerning the microscopic details of the motion and configuration of a system, when only the macroscopic states are known. Such details are often referred to as disorder on a microscopic or molecular scale, and less often as dispersal of energy. For two given macroscopically specified states of a system, there is a mathematically defined quantity called the 'difference of information entropy between them'. This defines how much additional microscopic physical information is needed to specify one of the macroscopically specified states, given the macroscopic specification of the other – often a conveniently chosen reference state which may be presupposed to exist rather than explicitly stated. A final condition of a natural process always contains microscopically specifiable effects which are not fully and exactly predictable from the macroscopic specification of the initial condition of the process. This is why entropy increases in natural processes – the increase tells how much extra microscopic information is needed to distinguish the initial macroscopically specified state from the final macroscopically specified state. Equivalently, in a thermodynamic process, energy spreads.
Third law
The third law of thermodynamics can be stated as: a system's entropy approaches a constant value as its temperature approaches absolute zero.
At absolute zero temperature, the system is in the state with the minimum thermal energy, the ground state. The constant value (not necessarily zero) of entropy at this point is called the residual entropy of the system. With the exception of non-crystalline solids (e.g. glass) the residual entropy of a system is typically close to zero. However, it reaches zero only when the system has a unique ground state (i.e., the state with the minimum thermal energy has only one configuration, or microstate). Microstates are used here to describe the probability of a system being in a specific state, as each microstate is assumed to have the same probability of occurring, so macroscopic states with fewer microstates are less probable. In general, entropy is related to the number of possible microstates according to the Boltzmann principle
$S = k_B \ln \Omega,$
where $S$ is the entropy of the system, $k_B$ is the Boltzmann constant, and $\Omega$ the number of microstates. At absolute zero there is only 1 microstate possible ($\Omega = 1$, as all the atoms are identical for a pure substance, and as a result all orders are identical as there is only one combination) and $S = k_B \ln 1 = 0$.
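A one-line numeric illustration of the Boltzmann principle (the microstate counts below are made up for the example):

```python
# S = k_B * ln(Omega): entropy vanishes for a unique ground state (Omega = 1).
import math

K_B = 1.380649e-23            # Boltzmann constant, J/K

def boltzmann_entropy(omega):
    return K_B * math.log(omega)

print(boltzmann_entropy(1))       # 0.0, the third-law limit
print(boltzmann_entropy(10**23))  # ~7.3e-22 J/K for a large microstate count
```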
Onsager relations
The Onsager reciprocal relations have been considered the fourth law of thermodynamics. They describe the relation between thermodynamic flows and forces in non-equilibrium thermodynamics, under the assumption that thermodynamic variables can be defined locally in a condition of local equilibrium. These relations are derived from statistical mechanics under the principle of microscopic reversibility (in the absence of external magnetic fields). Given a set of extensive parameters $X_i$ (energy, mass, entropy, number of particles and so on) and thermodynamic forces $F_i$ (related to their related intrinsic parameters, such as temperature and pressure), the Onsager theorem states that the kinetic coefficients $L_{ik}$ in the linear response relations $J_i = \sum_k L_{ik} F_k$ satisfy
$L_{ik} = L_{ki},$
where $i$ and $k$ index every parameter and its related force, and
$J_i = \dfrac{dX_i}{dt}$
are called the thermodynamic flows.
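A toy numerical illustration of reciprocity (the coefficients below are invented for the example, not measured transport data): with a symmetric matrix of kinetic coefficients, the flows respond linearly to the forces:

```python
# Onsager reciprocity: J = L @ F with L_ik = L_ki.
import numpy as np

L = np.array([[2.0, 0.3],     # direct coefficient and cross coupling
              [0.3, 1.0]])    # the cross terms are equal by reciprocity
assert np.allclose(L, L.T)    # L is symmetric

F = np.array([0.5, -0.2])     # thermodynamic forces
print(L @ F)                  # thermodynamic flows J_i = sum_k L_ik F_k
```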
See also
Chemical thermodynamics
Enthalpy
Entropy production
Ginsberg's theorem (Parody of the laws of thermodynamics)
H-theorem
Statistical mechanics
Table of thermodynamic equations
References
Further reading
Atkins, Peter (2007). Four Laws That Drive the Universe. OUP Oxford.
Goldstein, Martin & Inge F. (1993). The Refrigerator and the Universe. Harvard Univ. Press.
Guggenheim, E.A. (1985). Thermodynamics. An Advanced Treatment for Chemists and Physicists, seventh edition.
Adkins, C. J., (1968) Equilibrium Thermodynamics. McGraw-Hill
External links
Scientific laws | Laws of thermodynamics | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,554 | [
"Mathematical objects",
"Scientific laws",
"Equations",
"Thermodynamics",
"Laws of thermodynamics"
] |
779,061 | https://en.wikipedia.org/wiki/Phagemid | A phagemid or phasmid is a DNA-based cloning vector, which has both bacteriophage and plasmid properties. These vectors carry, in addition to the origin of plasmid replication, an origin of replication derived from bacteriophage. Unlike commonly used plasmids, phagemid vectors differ by having the ability to be packaged into the capsid of a bacteriophage, due to their having a genetic sequence that signals for packaging. Phagemids are used in a variety of biotechnology applications; for example, they can be used in a molecular biology technique called "phage display".
The term "phagemid" or "phagemids" was coined by a group of Soviet scientists, who discovered them, named them, and published the article in April 1984 in Gene magazine.
Properties of the cloning vector
A phagemid (plasmid + phage) is a plasmid that contains an f1 origin of replication from an f1 phage. It can be used as a type of cloning vector in combination with filamentous phage M13. A phagemid can be replicated as a plasmid, and also be packaged as single stranded DNA in viral particles. Phagemids contain an origin of replication (ori) for double stranded replication, as well as an f1 ori to enable single stranded replication and packaging into phage particles. Many commonly used plasmids contain an f1 ori and are thus phagemids.
Similarly to a plasmid, a phagemid can be used to clone DNA fragments and be introduced into a bacterial host by a range of techniques, such as transformation and electroporation. However, infection of a bacterial host containing a phagemid with a 'helper' phage, for example VCSM13 or M13K07, provides the necessary viral components to enable single stranded DNA replication and packaging of the phagemid DNA into phage particles. The 'helper' phage infects the bacterial host by first attaching to the host cell's pilus and then, after attachment, transporting the phage genome into the cytoplasm of the host cell. Inside the cell, the phage genome triggers production of single stranded phagemid DNA in the cytoplasm. This phagemid DNA is then packaged into phage particles. The phage particles containing ssDNA are released from the bacterial host cell into the extracellular environment.
Filamentous phages retard bacterial growth but, contrasting with the lambda phage and the T7 phage, are not generally lytic. Helper phages are usually engineered to package less efficiently (via a defective phage origin of replication) than the phagemid so that the resultant phage particles contain predominantly phagemid DNA. F1 Filamentous phage infection requires the presence of a pilus so only bacterial hosts containing the F-plasmid or its derivatives can be used to generate phage particles.
Prior to the development of cycle sequencing, phagemids were used to generate single stranded DNA template for sequencing purposes. Today phagemids are still useful for generating templates for site-directed mutagenesis. Detailed characterisation of the filamentous phage life cycle and structural features lead to the development of phage display technology, in which a range of peptides and proteins can be expressed as fusions to phage coat proteins and displayed on the viral surface. The displayed peptides and polypeptides are associated with the corresponding coding DNA within the phage particle and so this technique lends itself to the study of protein-protein interactions and other ligand/receptor combinations.
References
Cloning
Molecular biology | Phagemid | [
"Chemistry",
"Engineering",
"Biology"
] | 795 | [
"Cloning",
"Biochemistry",
"Genetic engineering",
"Molecular biology"
] |
779,338 | https://en.wikipedia.org/wiki/Physics%20of%20computation | The study of the physics of computation relates to understanding the fundamental physical limits of computers. This field has led to the investigation of how thermodynamics limits information processing, the understanding of chaos and dynamical systems, and a rapidly growing effort to invent new quantum computers.
See also the list of important publications in the physics of computation.
See also
Digital physics
Computation
Theory of computation
Reversible computation
Hypercomputation
Limits to computation
Bremermann's limit
Bekenstein bound
References
Lloyd, S., 2000, Ultimate physical limits of computation, Nature, 406:1047-1054.
Computational physics
Applied and interdisciplinary physics | Physics of computation | [
"Physics"
] | 126 | [
"Applied and interdisciplinary physics",
"Computational physics stubs",
"Computational physics"
] |
779,587 | https://en.wikipedia.org/wiki/Physical%20simulation | Dynamical simulation, in computational physics, is the simulation of systems of objects that are free to move, usually in three dimensions according to Newton's laws of dynamics, or approximations thereof. Dynamical simulation is used in computer animation to assist animators to produce realistic motion, in industrial design (for example to simulate crashes as an early step in crash testing), and in video games. Body movement is calculated using time integration methods.
Physics engines
In computer science, a program called a physics engine is used to model the behaviors of objects in space. These engines allow simulation of the way bodies of many types are affected by a variety of physical stimuli. They are also used to create dynamical simulations without having to know anything about physics. Physics engines are used throughout the video game and movie industries, but not all physics engines are alike. They are generally divided into real-time and high-precision engines, but these are not the only options. Most real-time physics engines are inaccurate and yield only the barest approximation of the real world, whereas most high-precision engines are far too slow for use in everyday applications.
To understand how these physics engines are built, a basic understanding of physics is required. Physics engines are based on the actual behaviors of the world as described by classical mechanics. Engines do not typically account for modern mechanics (see theory of relativity and quantum mechanics) because most visualization deals with large bodies moving relatively slowly, but the most complicated engines perform calculations for modern mechanics as well as classical mechanics. The models used in dynamical simulations determine how accurate these simulations are.
Particle model
The first model which may be used in physics engines governs the motion of infinitesimal objects with finite mass called “particles.” This equation, called Newton's second law (see Newton's laws) or the definition of force, is the fundamental behavior governing all motion:
$\vec{F} = m\,\vec{a} = m\,\dfrac{d\vec{v}}{dt}.$
This equation allows us to fully model the behavior of particles, but it is not sufficient for most simulations because it does not account for the rotational motion of rigid bodies. This is the simplest model that can be used in a physics engine and was extensively used in early video games.
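A particle-model engine amounts to integrating this equation in time. The following minimal Python sketch (a generic illustration, not any particular engine's API) advances a particle under gravity with semi-implicit Euler stepping, a common choice in real-time engines because of its stability:

```python
# Integrate F = m*a for a point mass with semi-implicit (symplectic) Euler.
import numpy as np

def step(pos, vel, mass, force, dt):
    acc = force / mass            # Newton's second law: a = F / m
    vel = vel + acc * dt          # update velocity first ...
    pos = pos + vel * dt          # ... then position with the new velocity
    return pos, vel

pos = np.array([0.0, 0.0, 10.0])  # 10 m above the ground
vel = np.array([3.0, 0.0, 0.0])   # thrown sideways at 3 m/s
mass, dt = 2.0, 1.0 / 60.0        # a 60 Hz simulation tick
gravity = mass * np.array([0.0, 0.0, -9.81])

for _ in range(60):               # one simulated second
    pos, vel = step(pos, vel, mass, gravity, dt)
print(pos)                        # ballistic arc: x ~ 3 m, z ~ 5 m
```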
Inertial model
Bodies in the real world deform as forces are applied to them, so we call them “soft,” but often the deformation is negligibly small compared to the motion, and it is very complicated to model, so most physics engines ignore deformation. A body that is assumed to be non-deformable is called a rigid body. Rigid body dynamics deals with the motion of objects that cannot change shape, size, or mass but can change orientation and position.
To account for rotational energy and momentum, we must describe how force is applied to the object using a moment, and account for the mass distribution of the object using an inertia tensor. We describe these complex interactions with an equation somewhat similar to the definition of force above:
$\sum_j \vec{M}_j = \mathbf{I}\,\dot{\vec{\omega}} + \vec{\omega} \times \left( \mathbf{I}\,\vec{\omega} \right),$
where $\mathbf{I}$ is the central inertia tensor, $\vec{\omega}$ is the angular velocity vector, and $\vec{M}_j$ is the moment of the jth external force about the mass center.
The inertia tensor describes the location of each particle of mass in a given object in relation to the object's center of mass. This allows us to determine how an object will rotate dependent on the forces applied to it. This angular motion is quantified by the angular velocity vector.
As long as we stay below relativistic speeds (see Relativistic dynamics), this model will accurately simulate all relevant behavior. This method requires the physics engine to solve six ordinary differential equations at every instant we want to render, which is a simple task for modern computers.
Euler model
The inertial model is much more complex than we typically need, but it is the simplest to use. In this model, we do not need to change our forces or constrain our system. However, if we make a few intelligent changes to our system, simulation will become much easier, and our calculation time will decrease. The first constraint will be to put each torque in terms of the principal axes. This makes each torque much more difficult to program, but it simplifies our equations significantly. When we apply this constraint, we diagonalize the moment of inertia tensor, which simplifies our three equations into a special set of equations called Euler's equations. These equations describe all rotational momentum in terms of the principal axes:
$N_1 = I_1\,\dot{\omega}_1 + \left(I_3 - I_2\right)\omega_2\,\omega_3$
$N_2 = I_2\,\dot{\omega}_2 + \left(I_1 - I_3\right)\omega_3\,\omega_1$
$N_3 = I_3\,\dot{\omega}_3 + \left(I_2 - I_1\right)\omega_1\,\omega_2$
The $N_i$ terms are applied torques about the principal axes
The $I_i$ terms are the principal moments of inertia
The $\omega_i$ terms are angular velocities about the principal axes
The drawback to this model is that all the computation is on the front end, so it is still slower than we would like. The real usefulness is not apparent because it still relies on a system of non-linear differential equations. To alleviate this problem, we have to find a method that can remove the second term from the equation. This will allow us to integrate much more easily. The easiest way to do this is to assume a certain amount of symmetry.
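Numerically, Euler's equations are three coupled ordinary differential equations and integrate cleanly with any standard scheme. A hedged sketch (the moments of inertia, initial spin and step size are illustrative) integrates the torque-free case with classical Runge-Kutta; the rotational kinetic energy should stay nearly constant:

```python
# Torque-free Euler equations about the principal axes, integrated with RK4.
import numpy as np

I = np.array([1.0, 2.0, 3.0])          # principal moments of inertia

def omega_dot(w, N=np.zeros(3)):
    return np.array([
        (N[0] - (I[2] - I[1]) * w[1] * w[2]) / I[0],
        (N[1] - (I[0] - I[2]) * w[2] * w[0]) / I[1],
        (N[2] - (I[1] - I[0]) * w[0] * w[1]) / I[2],
    ])

def rk4_step(w, dt):
    k1 = omega_dot(w)
    k2 = omega_dot(w + 0.5 * dt * k1)
    k3 = omega_dot(w + 0.5 * dt * k2)
    k4 = omega_dot(w + dt * k3)
    return w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

w = np.array([0.1, 2.0, 0.1])          # spin mostly about the unstable middle axis
for _ in range(2000):
    w = rk4_step(w, 0.005)
print(w, 0.5 * np.sum(I * w**2))       # tumbling omega; energy ~ conserved
```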
Symmetric/torque model
The two types of symmetric objects that will simplify Euler's equations are “symmetric tops” and “symmetric spheres.” The first assumes one degree of symmetry, which makes two of the $I$ terms equal. These objects, like cylinders and tops, can be expressed with one very simple equation and two slightly simpler equations. This does not do us much good, because with one more symmetry we can get a large jump in speed with almost no change in appearance. The symmetric sphere makes all of the $I$ terms equal (the moment of inertia scalar), which makes all of these equations simple:
$N_1 = I\,\dot{\omega}_1$
$N_2 = I\,\dot{\omega}_2$
$N_3 = I\,\dot{\omega}_3$
The $N_i$ terms are applied torques about the principal axes
The $\omega_i$ terms are angular velocities about the principal axes
The $I$ term is the scalar moment of inertia:
$I = \iiint_V \rho(x, y, z)\,r^2\,dv = \int_V r^2\,dm,$
where
$V$ is the volume region of the object,
$r$ is the distance from the axis of rotation,
$m$ is mass,
$v$ is volume,
$\rho$ is the pointwise density function of the object,
$x$, $y$, $z$ are the Cartesian coordinates.
These equations allow us to simulate the behavior of an object that can spin in a way very close to the method used to simulate motion without spin. This is a simple model, but it is accurate enough to produce realistic output in real-time dynamical simulations. It also allows a physics engine to focus on the changing forces and torques rather than on varying inertia.
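When no closed form is convenient, the moment-of-inertia integral above can be evaluated numerically. A small sketch (the body, density and grid resolution are illustrative) does this for a uniform solid sphere about the z-axis and compares with the known result $I = \tfrac{2}{5} m R^2$:

```python
# Numerical moment of inertia I = sum(rho * r^2 * dv) on a Cartesian grid.
import numpy as np

R, rho, n = 1.0, 1.0, 100                  # radius, density, grid points per axis
ax = np.linspace(-R, R, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
dv = (ax[1] - ax[0])**3                    # volume element
inside = x**2 + y**2 + z**2 <= R**2        # region V occupied by the sphere

r2 = x**2 + y**2                           # squared distance from the z-axis
I_num = np.sum(rho * r2 * inside) * dv
m = np.sum(rho * inside) * dv
print(I_num, 0.4 * m * R**2)               # both ~1.67 at this resolution
```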
See also
Bounding volume
Collision detection
Euler's equations (rigid body dynamics)
Moment of inertia
Physics Abstraction Layer
Physics engine
Rigid body dynamics
References
Computational physics
Computer physics engines | Physical simulation | [
"Physics"
] | 1,336 | [
"Physical phenomena",
"Classical mechanics",
"Computational physics",
"Motion (physics)",
"Dynamics (mechanics)"
] |
780,087 | https://en.wikipedia.org/wiki/Smart%20fluid | A smart fluid is a fluid whose properties (e.g. viscosity) can be changed by applying an electric field or a magnetic field.
The most developed smart fluids today are fluids whose viscosity increases when a magnetic field is applied. Small magnetic dipoles are suspended in a non-magnetic fluid, and the applied magnetic field causes these small magnets to line up and form strings that increase the viscosity. These magnetorheological or MR fluids have been used in the suspension of the 2002 model of the Cadillac Seville STS automobile and more recently, in the suspension of the second-generation Audi TT. Depending on road conditions, the fluid's damping viscosity can be adjusted. This is more expensive than traditional systems, but it provides better (faster) control. Similar systems are being explored to reduce vibration in washing machines, air conditioning compressors, rockets and satellites, and one has even been installed in Japan's National Museum of Emerging Science and Innovation in Tokyo as an earthquake shock absorber.
Some haptic devices whose resistance to touch can be controlled are also based on these MR fluids.
Another major type of smart fluid are electrorheological or ER fluids, whose resistance to flow can be quickly and dramatically altered by an applied electric field (note, the yield stress point is altered rather than the viscosity). Besides fast acting clutches, brakes, shock absorbers and hydraulic valves, other, more esoteric, applications such as bulletproof vests have been proposed for these fluids.
Other smart fluids change their surface tension in the presence of an electric field. This has been used to produce very small controllable lenses: a drop of this fluid, captured in a small cylinder and surrounded by oil, serves as a lens whose shape can be changed by applying an electric field.
Background
The properties of smart fluids have been known for around sixty years, but were subject to only sporadic investigations up until the 1990s, when they were suddenly the subject of renewed interest, notably culminating with the use of an MR fluid on the suspension of the 2002 model of the Cadillac Seville STS automobile and more recently, on the suspension of the second-generation Audi TT. Other applications include brakes and seismic dampers, which are used in buildings in seismically-active zones to damp the oscillations occurring in an earthquake. Since then it appears that interest has waned a little, possibly due to the existence of various limitations of smart fluids which have yet to be overcome.
See also
Continuum mechanics
Electrorheological fluid
Ferrofluid
Fluid mechanics
Magnetorheological fluid
Rheology
Smart glass
Smart metal
References
Smart materials
Fluid dynamics | Smart fluid | [
"Chemistry",
"Materials_science",
"Engineering"
] | 535 | [
"Chemical engineering",
"Materials science",
"Piping",
"Smart materials",
"Fluid dynamics"
] |
782,069 | https://en.wikipedia.org/wiki/James%20Bay%20Project | The James Bay Project (French: projet de la Baie-James) refers to the construction of a series of hydroelectric power stations on the La Grande River in northwestern Quebec, Canada by state-owned utility Hydro-Québec, and the diversion of neighbouring rivers into the La Grande watershed. It is located between James Bay to the west and Labrador to the east, and its waters flow from the Laurentian Plateau of the Canadian Shield. The project is one of the largest hydroelectric systems in the world. It has cost upwards of US$20 billion to build and has an installed generating capacity of 15.244 GW, at the cost of 7,000 square miles of Cree hunting lands. It has been built since 1974 by James Bay Energy (Société d'énergie de la Baie James) for Hydro-Québec.
Construction costs of the project's first phase amounted to $13.7 billion (1987 Canadian dollars). The eight power stations of the La Grande Complex generate an average of 9.5 GW, enough to meet the total demand of a small industrialized economy such as Belgium. The James Bay power stations represent almost half of Hydro-Québec's total output and capacity.
The development of the James Bay Project was controversial. It led to an acrimonious conflict with the 5,000 Crees and 4,000 Inuit of Northern Quebec over land rights, lifestyle and environmental issues. A ruling against the Quebec government in 1973 forced the Robert Bourassa government to negotiate a far-reaching agreement, the James Bay and Northern Quebec Agreement, involving the Cree, the Inuit, the Quebec and Canadian governments, Hydro-Québec, the SEBJ, and later the Naskapi First Nations. In the 1990s, forceful opposition by the Crees and their environmental allies caused the cancellation of the Great Whale Project, a proposed 3,000 MW complex north of La Grande River.
In February 2002, the Bernard Landry government and the Grand Council of the Crees signed the Peace of the Braves (French: Paix des Braves) and the Boumhounan Agreement, establishing a new relationship between Quebec and the Crees and agreeing on environmental rules for the construction of three new power stations built between 2003 and 2011 — the Eastmain-1, Eastmain-1-A and Sarcelle generating stations — and the diversion of the Rupert River.
Geography
The James Bay region, also known as Jamésie, is a territory bordered by the 49th and 55th parallels, by James Bay on the western side and by the drainage divide with the Saint Lawrence River basin on the eastern side. The topography of the area consists of generally low-relief terrain and includes three parts: a coastal plain, a rolling plateau with a maximum elevation of , and the Otish Mountains to the east of the territory, with peaks reaching .
The area is part of the Canadian Shield and is largely made up of Precambrian igneous and metamorphic rocks. Relief has been eroded by successive glaciations in the Pleistocene era, as recently as 6,000 years ago, leaving deposits of loose materials (moraines, clay, silt and sand) and reshaping the hydrography of the territory.
The region's climate is subarctic. Winters are long and last, on average, from October 22 to May 4. Summers are short and mild, with temperatures averaging in July, while dropping to in January. Annual precipitation averages , a third of it falling as snow. The highest monthly rainfall is registered in the summer and snow depths vary from in the winter. Precipitation is significantly lower than the annual average of recorded in Montreal. The area lies in the zone of discontinuous permafrost, whose depth is significantly reduced by the deep snow cover.
The natural seismicity of the area is low. An earthquake of magnitude 5 on the Richter magnitude scale occurred in 1941, its epicenter located approximately 150 km from the La Grande-3 generating station. However, episodes of induced seismicity occurred during the initial fill of reservoirs. In 1983, a magnitude 4 tremor was recorded upstream of LG-3's main dam.
History
Exploration
Between 1950 and 1959, a team led by H. M. Finlayson conducted water surveys of the Nottaway, Broadback and Rupert Rivers—collectively known by the abbreviation NBR—on behalf of the Shawinigan Water & Power Company, a large investor-owned utility based in Shawinigan, Quebec. Among options studied by Shawinigan's engineers was the possible diversion of these rivers to the Saint-Maurice River watershed in order to increase output at the company's 8 power stations.
With the nationalization of privately owned utilities in 1963, Hydro-Québec inherited the preliminary studies conducted by Finlayson and his team on the hydroelectric potential of James Bay rivers. However, other projects, such as the Manicouagan-Outardes project on the North Shore and the possibility of building a large power station at Churchill Falls in Labrador proved easier and less expensive and the Crown corporation devoted only minimal resources to the vast potential of northern rivers. In 1965, Hydro-Québec survey program included exploration of the territory and hydrographic surveys of areas between the 52nd and 55th parallel.
In 1967, the company stepped up the work on the La Grande and Eastmain rivers. Dozens, then hundreds of people were sent by helicopter and seaplanes in inaccessible areas of the taiga to perform surveys and geological studies to identify potential sites for hydropower development. Faced with budget concerns, Hydro-Québec did cut back exploration budgets between 1968 and 1970, but the company maintained planning and analysis work, since early data showed a large potential for development.
Early steps
On December 16, 1969, Liberal backbencher Member of the National Assembly Robert Bourassa met with the president of Hydro-Québec, Roland Giroux, over lunch at the parliamentary dining room in Quebec City. After the meeting, Bourassa, who was about to launch a leadership bid for the position left vacant by the resignation of former Premier Jean Lesage, became convinced of the feasibility and suitability of the project and made the development of James Bay hydroelectricity a major plank of his leadership campaign. Elected as the party leader in January, Bourassa went on to win the general election on April 29, 1970, and his tenure as Premier of Quebec became closely linked to hydroelectric development in general and with the James Bay project in particular.
For Bourassa, the development of the James Bay project addressed two of his priorities. In Energy in the North, an essay published in 1985, Bourassa, an economist by profession, argued that "Quebec's economic development relies on the development of its natural resources". Moreover, Bourassa argued his 1969 estimates showed demand for electricity would outstrip supply by 11,000 MW by 1983, concurring with forecasts made at the time by Hydro-Québec.
Six months after his election, Bourassa began working on the details of the scheme with his adviser, financier Paul Desrochers. The two men met secretly with Roland Giroux and Robert A. Boyd for an update in September 1970 and the next month he travelled to New York City in the midst of the October Crisis to negotiate financing for the project, estimated at the time to cost between $5 billion and $6 billion.
Bourassa introduced his plan to the provincial cabinet in March 1971 and recommended hiring the US engineering firm Bechtel to oversee the construction. Liberal strategists then chose to make the announcement before a partisan crowd assembled at Quebec's Little Coliseum as part of the Liberal party gathering celebrating the first year of Bourassa's term, on April 30, 1971. According to journalists witnessing the scene, Bourassa's speech concluded on a scene of indescribable enthusiasm.
Nuclear lobby
The announcement quickly generated a public debate on the wisdom to engage the province on such a large-scale project. For several years, a lobby spearheaded by the Canadian government and its nuclear venture, Atomic Energy of Canada Limited, promoted the adoption of nuclear energy in Quebec, as a way to "share the benefits of Canada with our fellow francophone citizens", as Canadian Prime Minister Lester B. Pearson said. The lobby had its supporters within the ranks at Hydro-Québec, and has been vocal when the provincial government made the decision to invest in the Churchill Falls venture with Brinco. Several Parti Québécois spokesmen, including energy critic Guy Joron and economic adviser Jacques Parizeau voiced their opposition to the Bourassa scheme. In an interview with Montreal's Le Devoir, the former economist and public servant who later became premier of Quebec commented: "We don't have to dam every single river just because they're French Canadian and Catholic."
However, Bourassa himself and Hydro-Québec senior management — including President Roland Giroux and commissioners Yvon DeGuise and Robert Boyd — were firmly behind the large hydroelectric development to be built in northern Quebec. At the time Giroux, a financier, argued that large international investors "are still wary about nuclear energy. If we bring them a good hydroelectric project, and James Bay is a good one, they'll soon show where their preferences lie". As an engineer, Boyd expressed concerns at this early date about the uncertainty of nuclear energy. He recommended maintaining a certain expertise in the field but advocated delaying nuclear expansion as late as possible.
The Quebec premier received an unexpected backing when the Chairman of the Council of Ministers of the USSR, Alexei Kosygin visited Montreal in October 1971. Kosygin supported Bourassa's project and expressed concerns regarding his country's own nuclear power, explaining his country had to develop the technology because the USSR lacked suitable rivers to expand its own hydroelectric network of dams and power stations.
Options
Two options were considered when Bourassa unveiled his plan for the construction of several large hydroelectric power stations on the rivers flowing into James Bay, either on the Nottaway, Broadback, Rupert and Harricana Rivers in the south (NBR Project), or on the La Grande and Eastmain Rivers to the north. The northerly rivers were selected in May 1972, various studies conducted by engineering firms having concluded the La Grande option would be more cost effective, while having a lesser impact on forestry and would require less flooding, thus minimizing impacts on First Nations fishing and hunting. Another area of concern was the silty nature of the terrain in the NBR area, which would have complicated the damming.
The project, as described at the time, would involve the construction of four generating stations on the La Grande River and the diversion of the Eastmain and Caniapiscau rivers into the La Grande watershed. Responsibility for the project would be overseen by the Société d'énergie de la Baie-James, a newly created mixed corporation (public/private) controlled by Hydro-Québec, headed by Robert A. Boyd.
As environmental assessments were not then required under Quebec law, construction of the James Bay Road to the La Grande River was begun in 1971 and completed by October 1974 at a cost of about $400 million. In 1973 and 1974, a temporary winter ice road was used to bring in the heavy equipment required for the construction of the roadbed and some 13 major bridges spanning the many rivers of the region.
Construction had boomed in Montreal for Expo 67, leading to an inflated workforce. In the following years, the decreased demand for labor meant that times were tough for the construction industry in Montreal. As Bourassa had promised in the 1970 election that his government would create 100,000 jobs in the construction industry, there was much violent competition between various construction unions to have their workers engaged in the James Bay Project. Canadian historian Desmond Morton noted that there were 540 different incidents between the two main construction unions in Quebec on sites associated with the James Bay Project between 1970 and 1974, many of them "very bloody". In the 1973 election, after the Fédération des travailleurs et travailleuses du Québec (FTQ) union had donated generously to the Parti libéral du Québec, Bourassa announced that only companies employing workers from the FTQ-affiliated Conseil des métiers de la construction headed by André "Dédé" Desjardins would work on the James Bay project. In March 1974, when one sub-contractor refused to fire two workers belonging to the rival CSN union, the FTQ workers destroyed the LG-2 site, causing $35 million in damage. On 21 March 1974, the workers on the LG-2 site rioted and used their bulldozers to destroy the site that they were working on while other workers set buildings afire.
In response to the riot at the LG-2 site, Bourassa created a royal commission headed by Judge Robert Cliche, the union official Guy Chevrette and a prominent Montreal labor lawyer Brian Mulroney to examine the question of freedom of expression within Quebec construction unions. The Cliche commission as it became known found widespread corruption within the construction unions as the columnist Peggy Curran wrote that the Cliche commission uncovered "...tales of nepotism, bribery, sabotage, blackmail and intimidation; charges of union organizers with criminal records who gave lessons in how to break legs; thugs-for-hire who would happily beat up a rival union organizer’s teenager or strangle their dog." Desjardins was called before the Cliche commission several times starting in November 1974, where it was established that he was closely associated with the Montreal Mafia, and engaged in thuggish practices as president of the Conseil des métiers de la construction union.
Although the Aboriginal Crees had traditional hunting and trapping areas in the region, no seasonal or permanent roads existed at the time. Opposition to the project, however, was strong among the 5,000 Crees of James Bay, the 3,500 Inuit to the north and several environmental groups. They believed the government of Quebec was acting in violation of treaties and committing unlawful expropriation and destruction of traditional hunting and trapping lands. Furthermore, the Cree and Inuit had not been informed of the hydroelectric project until after construction of the access road had begun. The federal Indian affairs minister Jean Chrétien intervened on the side of the Cree and the Inuit, hiring lawyers to argue their case in the courts. Both Bourassa and the Prime Minister, Pierre Trudeau were Liberals and federalists, but relations between the two were very strained at best as the French-Canadian nationalist Bourassa was a "soft federalist" who favored devolving the powers of the federal government down to the provinces while the Canadian nationalist Trudeau was a "hard federalist" who favored concentrating power in the hands of the federal government. Relations between Quebec City and Ottawa were brought to the breaking point in 1971 when Bourassa vetoed the Victoria charter for patriating the British North America Act to give Canada its own constitution on the grounds that if the British North American Act was going to be changed, then the federal government should cede more powers to the provinces. The willingness of the Trudeau government to intervene on the side of the Cree and Inuit against the Quebec government was at least in part caused by the feud between Bourassa and Trudeau.
In a speech championing the Cree, Chrétien said Bourassa "could go to hell", charging that he did not have the right to build on or flood the land claimed by the Cree. In 1973, the federal government's lawyers won a court injunction ordering the James Bay project stopped until a treaty could be signed with the Cree and Inuit, but an appeals court overturned the ruling days later. However, Bourassa agreed to negotiate with the First Nations as the federal government announced it was willing to take the matter to the Supreme Court. In later years, the Cree and Inuit were given a settlement of $150 million, negotiated by Cree chief Billy Diamond.
In November 1975, the governments of Canada and Quebec signed the James Bay and Northern Quebec Agreement with the Cree of the James Bay region and the Inuit of northern Quebec, affirming exclusive hunting and fishing rights to about 170,000 km2 of territory and about $250 million in financial compensation in return for the right to develop the hydroelectric resources of Northern Quebec. The planned La Grande-1 power station would be built about 50 km further away from the Cree village of Chisasibi than originally planned. The Agreement also provided for an extensive environmental follow-up of all aspects of the hydroelectric development on the La Grande and Eastmain rivers and the establishment of a joint environmental assessment process for any future hydroelectric project involving other rivers of Northern Quebec.
The project
Phase I
The period of construction of the first phase of the project covered about 14 years. By 1986, the largest power stations and reservoirs on the La Grande River were mostly completed, including the Robert-Bourassa (originally named La Grande-2), La Grande-3 and La Grande-4 generating stations, with an installed capacity of 10,800 MW, and five reservoirs covering an area of 11,300 km2. The Eastmain and Caniapiscau river diversions each added about 800 m3/s of water to the La Grande River. The power plants of the first phase of the James Bay Project produce about 65 TWh of power each year, operating at about 60% of their maximum rated generating capacity.
During this first phase of construction, over of fill, 138,000 tons of steel, 550,000 tons of cement, and nearly 70,000 tons of explosives were used. Concurrent employment by the project reached 18,000. Of the 215 dikes and dams, many surpassed the height of skyscrapers, with one reaching 56 stories. The terraced diversion channel at Robert-Bourassa generating station was carved 30 m (one hundred feet) deep into the side of a mountain. Water tumbles from the reservoir to the river below at a height greater than that of Niagara Falls. A network of transmission lines was necessary to bring generated power to consumers in southern Quebec. The network contains several 735-kilovolt lines and one 450-kilovolt DC line directly linked to the U.S. power grid.
Phase II
During the late 1980s and early 1990s, construction of the second phase of the James Bay project centred on the construction of five secondary power plants on the La Grande River and its tributaries (La Grande-1, La Grande-2A, Laforge-1, Laforge-2 and Brisay), adding a further 5,200 MW of generating capacity by the end of 1996. Premier Bourassa estimated that this phase would create 40,000 construction job-years (equivalent to 4,000 jobs lasting 10 years). Three new reservoirs covering an area of 1,600 km2 were created, including the Laforge-1 Reservoir covering 1,288 km2. The generating plants of this second phase of the project produce about 18.9 TWh of power per year, operating at between 60% and 70% of their maximum rated generating capacity.
On March 13, 1989, a massive solar storm caused a failure of the La Grande complex, plunging most of Quebec into darkness for nine hours.
Great Whale River project
During the construction of the second phase of the James Bay Project, Hydro-Québec proposed an additional project on the Great Whale River (French: Grande rivière de la Baleine), just to the north of the La Grande River watershed. Opposition among the Cree was even more vocal this time than in the early 1970s. In 1990, Grand Chief Matthew Coon Come organized a canoe trip from Hudson Bay to the Hudson River, in Albany, New York, and this very effective public relations stunt brought international pressure to bear on the government of Quebec. The Cree had experienced considerable culture shock with the introduction of permanent transportation routes to the south and very few Cree were employed on the construction site. Poverty and social problems remained prevalent in the isolated Cree and Inuit villages of Northern Quebec, even in areas where there were no hydroelectric or mining activities.
By the 1980s, the natural ebb and flow of the La Grande, Eastmain and Caniapiscau rivers had been severely modified, notably delaying the formation of a solid ice cover near the Cree village of Chisasibi, and about 4% of the traditional hunting and trapping territories of the Cree had been lost to the rising waters of the reservoirs, including about 10% of the territories of the Cree village of Chisasibi. At the same time, new roads, snowmobiles and bush airlines facilitated access to distant hunting territories of the interior. While highly motivated, the Cree's opposition to the Great Whale River Project was mainly ineffective until 1992 when the State of New York withdrew from a multibillion-dollar power purchasing agreement due to public outcry and a decrease in energy requirements. In 1994, the Government of Quebec and Hydro-Québec suspended the project indefinitely.
Rupert River diversion
In 2002, the Quebec government and the Grand Council of the Crees signed a landmark agreement, "La Paix des Braves" (literally "The Peace of the Braves"), ensuring the completion of the last phase of the original James Bay Project: construction of the Eastmain-1 generating station, with a capacity of 480 MW, and the Eastmain Reservoir with a surface area of about .
A subsequent agreement in April 2004 put an end to all litigation between the two parties and opened the way to a joint environmental assessment of the projected diversion of the Rupert River, to the south of the Eastmain River. The project entails the diversion of about 50% of the total water flow of the Rupert River (and 70% of the flow at the diversion point) towards the Eastmain Reservoir and into the La Grande Complex, and the construction of two additional generating stations: Eastmain-1A and Sarcelle, with a combined capacity of 888 MW. The Rupert diversion would generate a total of 8.5 TWh of electricity at the new and existing power stations.
Former Grand Chief of the Crees (Eeyou Istchee) Matthew Mukash (elected in late 2005 and served until 2009) opposed the Rupert River diversion and favoured the construction of wind turbines.
Hydro-electric installations
The hydro-electric stations in the La Grande watershed are:
La Grande-1 generating station
Robert-Bourassa generating station (formerly La Grande-2)
La Grande-2-A generating station
La Grande-3 generating station
La Grande-4 generating station
Laforge-1 generating station
Laforge-2 generating station
Brisay generating station
Eastmain-1 generating station
Eastmain-1-A generating station
Sarcelle generating station
Environmental impacts
Although there was no environmental impact assessment legislation before the James Bay Project's initial construction phase in the 1970s, a major environmental research program was conducted before Phase I began.
The environmental impacts of the James Bay Project largely stem from the creation of a complex chain of reservoirs through the integration of all the watersheds of the eastern shore of Hudson Bay, from the southern tip of James Bay to Ungava Bay in the north. This has had the consequence of diverting the flow of water from four major rivers into a large body of water, ultimately changing the dynamics of the land, an environmental political phenomenon labelled by some critics as a "first build, then paint green" policy.
Mercury pollution
Two of these main diverted rivers are the Caniapiscau River and the Eastmain River into which the James Bay Project submerged about 11,000 km2 of boreal forest (taiga). Consequently, the flooded vegetation's stored mercury (Hg) was released into the aquatic ecosystem, and due to the diversion of the water flow to contained reservoirs, the sudden abundance of mercury in the James Bay area in 1979 was unable to be dispersed and diluted as would have been the case in natural waters. Because the James Bay Cree (East Cree) live a mostly traditional lifestyle including a diet rich in fish and sea mammals, there is a possibility that the damming project has contributed to northern Quebec's Cree having the highest measured methyl-mercury concentration of all Canadian First Nations. Because of the simultaneous mercury contamination in James Bay from other activities in the area, including paper milling, the direct effect of the project on mercury levels has been difficult to ascertain. From 1981 to 1982, a few years after the flooding of La Grande River, mercury levels in lake whitefish (Coregonus clupeaformis) increased up to fourfold their pre-flooding levels, while those in northern pike (Esox lucius) rose up to sevenfold during the same period. In natural lakes, these concentrations are five to six times less than in the James Bay area. This rapid spike of mercury levels in two of the fish species used extensively by the area's Cree is attributed to the processes of bioaccumulation and biomagnification. Bioaccumulation is the initial consequence of mercury pollution, as the toxin is first incorporated into the given ecosystem's producers. In the James Bay area ecosystem, mercury being released from the decaying flooded trees would be incorporated in trace amounts in zooplankton. Benthic organisms (benthos), the whitefish's primary prey, consume a great deal of zooplankton, causing the mercury concentration in a single organism to magnify due to accumulation of mercury and its inability to be excreted. In turn, whitefish, due to their greater size, consume large numbers of benthic invertebrates, thus incorporating the individual mercury accumulations of each organism and creating their own store of mercury. The effect is further exacerbated by humans consuming this built-up store of mercury. The James Bay Mercury Agreement, signed in 1986 between the Grand Council of the Crees (of Québec), the Cree Regional Authority, the Cree Bands, the Government of Québec, Hydro-Québec and the Société d’énergie de la Baie James (James Bay Energy), aims "to restore and strengthen Cree fisheries [...] but [...] also adequately take into account the health risks associated with human exposure to mercury."
Local climate changes
The establishment of reservoirs containing large amounts of standing water has the ability to produce local climate changes. Alteration of annual precipitation patterns, increased abundance of low stratus clouds and fog, and warmer autumns and cooler springs, leading to a delay in the beginning and end of the growing season, have all been observed in the vicinity of the project's major reservoirs. The doubling of the freshwater input into James Bay during the winter decreases the salinity of the seawater, thereby increasing the freezing point of the bay. The resultant increased ice content at the northern section of the project in the winter has cooled warm air currents more than usual, bringing harsher Arctic weather, including strong winds and less precipitation, to south-central Quebec. The tree line at the southern edge of the development has shifted noticeably southward since the project's construction.
Water flow modifications
Following construction of the project, the area's water flow was substantially modified. In the James Bay area in general, the average monthly surface runoff rate in the winter increased by 52%, doubling the total freshwater input, while that of the summer months decreased by 6%. The James Bay area's water flow is most affected by the hydroelectric project from January to April because rivers have their lowest runoff rates in the winter months when freezing occurs. Additionally, runoff rates in the damming system can be altered to meet power needs, which are highest in the winter and lowest in the summer, thereby more completely reversing the natural water flow cycle. As evidenced by the 500% increase in its winter runoff, the La Grande River is the pillar of the James Bay project's hydroelectric capacity, with the runoff increasing from an average yearly amount of 1,700 m3/s to 3,400 m3/s, and from 500 m3/s to 5,000 m3/s in the winter. This immense harnessing of the area's energy at La Grande was made possible by reducing the Eastmain River's water flow at its mouth by 90% and by reducing that of the Caniapiscau River by 45%, and then by diverting these rivers into La Grande. Not only does this alter the runoff amount of the Eastmain and the Caniapiscau Rivers, but also their drainage location, since prior to having been directly merged with La Grande, these rivers’ drainage locations were separate from the La Grande River. The summer runoff rate of La Grande increased by 40%, making the average annual runoff rate 91% greater than its natural rate.
Because of the change in the runoff rates of James Bay, massively increasing in the winter months, and increasing considerably in the summer as well, there has been more extreme fluctuation in the water levels. This has killed many trees along the shoreline, which are not equipped with deep enough root systems and tolerance of prolonged exposure to seawater to withstand these fluctuations. As well, the increased riverbank erosion downstream of the dams has washed the flora’s habitat down the river. The result has been considerable decay (decomposition) of dead trees along the shoreline, consequently releasing stored mercury into the area's terrestrial ecosystem through bioaccumulation in decomposers and detritivores and eventual biomagnification up the food web. This has left the area's Cree susceptible to mercury poisoning from both land and sea. Any shoreline plants that could potentially provide vegetation growth to replace any of the lost wetland habitats in these zones of periodic fluctuations are destroyed.
Changes in migration routes
Other changes in the delicate balance of the James Bay ecosystem can be illustrated through the animal migration patterns, salmon spawning, and destruction of wildlife habitats. The significant loss of wetlands and the blocking of passageways to those wetlands that remain has inhibited salmon spawning and migration in the James Bay area. Additionally, diverting rivers towards the James Bay could cause changes in the geographical pattern of river water discharge into the sea.[36]
Caribou populations, which have been expanding since the 1950s, have adopted migration routes throughout much of the Quebec-Labrador Peninsula and have thus been increasingly abundant in the James Bay area, the valley of the Caniapiscau, and around George River (Quebec).[37] Variations in the water flow of the Caniapiscau River from 1981 to 1984, during the period when the Caniapiscau Reservoir was being filled, may have contributed to the death by drowning of 10,000 migratory woodland caribou in September 1984, representing about 1.5% of the herd at that time. On the other hand, the reduced flow of the Caniapiscau River and the Koksoak River has permanently reduced the risk of natural floods on the lower Caniapiscau during the period of caribou migrations, giving hunters greater access to caribou than ever before. About 30,000 caribou are killed each year by Inuit, Cree and American and European hunters.
Seasonal reversal in the flow of rivers can potentially rob the rich nutrients that thrive in various mudflats and coastal marshes, affecting millions of migratory birds such as waterfowl, Canada geese, and various inland birds that use the coastlines of both the James and Hudson Bays during their spring and fall migrations.[38]
Social impact
The James Bay and Northern Quebec Agreement provided considerable financial and administrative resources for the Cree and Inuit communities to deal with the environmental and social consequences of the project and provide for future economic development, such as the creation of the local airline Air Creebec. The James Bay Project also was an impetus for the forging of a collective identity among the Cree of Quebec and for the establishment of the Grand Council of the Crees (Eeyou Istchee). The Agreement notably provided for major institutional structures for local government, economic development, schools and health services, mostly under the control of the Grand Council of the Crees and the Kativik Regional Government, in Nunavik.
Yet, the social consequences of the hydroelectric project itself pale in comparison to the social impact of the Cree coming into direct contact with the society and economic forces of francophone Quebec. The greatest impact stems from the construction in the early 1970s of the James Bay Road (Route de la Baie James) from Matagami to the new town of Radisson, near the Robert-Bourassa generating station (La Grande-2), and on to the nearby Cree village of Chisasibi. During the main construction period of the late 1970s, Radisson housed a population several times greater than the Cree population of Chisasibi, although it currently has a population of about 500.
Nevertheless, the Cree communities have themselves continued the push to build additional roads from the James Bay Road westward to the Cree coastal villages of Wemindji, Eastmain and Waskaganish. These roads, opened between 1995 and 2001, have further facilitated access to hunting areas of the interior and encouraged commercial and social exchanges between the Cree villages and with southern Quebec. A separate road (Route du Nord) also links the James Bay Road to Chibougamau, via the Cree village of Nemaska. The building of these newer roads was largely the work of Cree construction companies.
The James Bay Road also opened the region to further mineral exploration and clear-cut logging in the southern James Bay area and substantially reduced the cost of transport. These activities have put further strains on the traditional hunting and trapping activities of the Cree in the southern James Bay region, notably the villages of Waskaganish and Nemaska. Such activities, however, only accounted for about half the economic activity of the Cree communities in 1970 and less than 20% by the late 1990s. Hunting and fishing in the Cree villages mostly involves young adults and older Cree with few professional qualifications. Such activities are furthermore sustained by an income replacement program financed by the government of Quebec that offers the equivalent of a modest annual salary for hunters and their families who live in the bush for at least several weeks of the year.
See also
Baie-James
Great Recycling and Northern Development Canal
James Bay Cree hydroelectric conflict
List of generating stations in Quebec
Quebec – New England Transmission
Robert A. Boyd
Site C dam
Kinzua dam
Alta controversy
Environmental racism
References
External links
The La Grande Complex and commission dates (Hydro-Québec)
Hydro-Québec Transmission lines
Human Environment of the James Bay region (detailed map of the James Bay region)
James Bay Municipality (English, French)
Grand Council of the Crees (of Quebec) (English, French, Cree)
Société d'énergie de la Baie-James (English, French)
Public Information Office on the Rupert River Diversion Project (English, French, Cree)
Hydro-Québec
First Nations history in Quebec
1974 establishments in Quebec | James Bay Project | [
"Engineering"
] | 7,106 | [
"James Bay Project",
"Macro-engineering"
] |
782,146 | https://en.wikipedia.org/wiki/Automorphic%20form | In harmonic analysis and number theory, an automorphic form is a well-behaved function from a topological group G to the complex numbers (or complex vector space) which is invariant under the action of a discrete subgroup of the topological group. Automorphic forms are a generalization of the idea of periodic functions in Euclidean space to general topological groups.
Modular forms are holomorphic automorphic forms defined over the groups SL(2, R) or PSL(2, R) with the discrete subgroup being the modular group, or one of its congruence subgroups; in this sense the theory of automorphic forms is an extension of the theory of modular forms. More generally, one can use the adelic approach as a way of dealing with the whole family of congruence subgroups at once. From this point of view, an automorphic form over the group G(AF), for an algebraic group G and an algebraic number field F, is a complex-valued function on G(AF) that is left invariant under G(F) and satisfies certain smoothness and growth conditions.
Henri Poincaré first discovered automorphic forms as generalizations of trigonometric and elliptic functions. Through the Langlands conjectures, automorphic forms play an important role in modern number theory.
Definition
In mathematics, the notion of factor of automorphy arises for a group acting on a complex-analytic manifold. Suppose a group $\Gamma$ acts on a complex-analytic manifold $X$. Then, $\Gamma$ also acts on the space of holomorphic functions from $X$ to the complex numbers. A function $f$ is termed an automorphic form if the following holds:
$f(\gamma\cdot x) = j_\gamma(x)\, f(x),$
where $j_\gamma(x)$ is an everywhere nonzero holomorphic function. Equivalently, an automorphic form is a function whose divisor is invariant under the action of $\Gamma$.
The factor of automorphy for the automorphic form $f$ is the function $j$. An automorphic function is an automorphic form for which $j$ is the identity.
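A classical illustration, stated here for concreteness rather than quoted from the text above: for a subgroup $\Gamma \subseteq \mathrm{SL}(2,\mathbb{R})$ acting on the upper half-plane $\mathbb{H}$ by Möbius transformations, a modular form of weight $k$ satisfies
$f\!\left(\frac{az+b}{cz+d}\right) = (cz+d)^k f(z), \qquad \gamma=\begin{pmatrix} a & b\\ c & d\end{pmatrix}\in\Gamma,$
so its factor of automorphy is $j_\gamma(z) = (cz+d)^k$, which is holomorphic and nonvanishing on $\mathbb{H}$.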
An automorphic form is a function F on G (with values in some fixed finite-dimensional vector space V, in the vector-valued case), subject to three kinds of conditions:
to transform under translation by elements $\gamma \in \Gamma$ according to the given factor of automorphy j;
to be an eigenfunction of certain Casimir operators on G; and
to satisfy a "moderate growth" asymptotic condition with respect to a height function.
It is the first of these that makes F automorphic, that is, satisfy an interesting functional equation relating F(g) with F($\gamma$g) for $\gamma \in \Gamma$. In the vector-valued case the specification can involve a finite-dimensional group representation ρ acting on the components to 'twist' them. The Casimir operator condition says that some Laplacians have F as eigenfunction; this ensures that F has excellent analytic properties, but whether it is actually a complex-analytic function depends on the particular case. The third condition is to handle the case where G/Γ is not compact but has cusps.
The formulation requires the general notion of factor of automorphy j for Γ, which is a type of 1-cocycle in the language of group cohomology. The values of j may be complex numbers, or in fact complex square matrices, corresponding to the possibility of vector-valued automorphic forms. The cocycle condition imposed on the factor of automorphy is something that can be routinely checked, when j is derived from a Jacobian matrix, by means of the chain rule.
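For reference, the cocycle condition mentioned above can be written out explicitly (standard convention, following the definitions in the previous section rather than a formula printed in this article):
$j_{\gamma_1\gamma_2}(x) = j_{\gamma_1}(\gamma_2\cdot x)\, j_{\gamma_2}(x) \qquad \text{for all } \gamma_1,\gamma_2\in\Gamma,\ x\in X.$
In the modular case $j_\gamma(z) = (cz+d)^k$, this identity follows from the chain rule applied to the derivative of a composition of Möbius transformations.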
A more abstract but technically advanced formulation, using class field theory, constructs automorphic forms and their corresponding functions as embeddings of Galois groups into their underlying global field extensions. In this formulation, automorphic forms are certain finite invariants mapping from the idele class group under the Artin reciprocity law. Here, the analytic structure of the associated L-function allows for generalizations with various algebro-geometric properties, leading to the Langlands program. To oversimplify, automorphic forms in this general perspective are analytic functionals quantifying the invariance of number fields in an abstract sense, indicating the 'primitivity' of their fundamental structure and providing a powerful mathematical tool for analyzing the invariant constructs of virtually any numerical structure.
Examples of automorphic forms in an explicit unabstracted state are difficult to obtain, though some have directly analytical properties:
- The Eisenstein series (which is a prototypical modular form) over certain field extensions as Abelian groups.
- Specific generalizations of Dirichlet L-functions as class field-theoretic objects.
- Generally any harmonic analytic object as a functor over Galois groups which is invariant on its ideal class group (or idele).
As a general principle, automorphic forms can be thought of as analytic functions on abstract structures that are invariant with respect to a generalized analogue of their prime ideal (or an abstracted irreducible fundamental representation). As mentioned, automorphic functions can be seen as generalizations of modular forms (and therefore of elliptic curves), constructed by some zeta-function analogue on an automorphic structure. In the simplest sense, automorphic forms are modular forms defined on general Lie groups, because of their symmetry properties; in simpler terms still, they are general functions which analyze the invariance of a structure with respect to its prime 'morphology'.
History
Before this very general setting was proposed (around 1960), there had already been substantial developments of automorphic forms other than modular forms. The case of Γ a Fuchsian group had already received attention before 1900 (see below). The Hilbert modular forms (also called Hilbert-Blumenthal forms) were proposed not long after that, though a full theory was long in coming. The Siegel modular forms, for which G is a symplectic group, arose naturally from considering moduli spaces and theta functions. The post-war interest in several complex variables made it natural to pursue the idea of automorphic form in the cases where the forms are indeed complex-analytic. Much work was done, in particular by Ilya Piatetski-Shapiro, in the years around 1960, in creating such a theory. The theory of the Selberg trace formula, as applied by others, showed the considerable depth of the theory. Robert Langlands showed how (in generality, many particular cases being known) the Riemann–Roch theorem could be applied to the calculation of dimensions of automorphic forms; this is a kind of post hoc check on the validity of the notion. He also produced the general theory of Eisenstein series, which corresponds to what in spectral theory terms would be the 'continuous spectrum' for this problem, leaving the cusp form or discrete part to investigate. From the point of view of number theory, the cusp forms had been recognised, since Srinivasa Ramanujan, as the heart of the matter.
Automorphic representations
The subsequent notion of an "automorphic representation" has proved of great technical value when dealing with G an algebraic group, treated as an adelic algebraic group. It does not completely include the automorphic form idea introduced above, in that the adelic approach is a way of dealing with the whole family of congruence subgroups at once. Inside an L2 space for a quotient of the adelic form of G, an automorphic representation is a representation that is an infinite tensor product of representations of p-adic groups, with specific enveloping algebra representations for the infinite prime(s). One way to express the shift in emphasis is that the Hecke operators are here in effect put on the same level as the Casimir operators; which is natural from the point of view of functional analysis, though not so obviously for the number theory. It is this concept that is basic to the formulation of the Langlands philosophy.
Poincaré on discovery and his work on automorphic functions
One of Poincaré's first discoveries in mathematics, dating to the 1880s, was automorphic forms. He named them Fuchsian functions, after the mathematician Lazarus Fuchs, because Fuchs was known for being a good teacher and had done research on differential equations and the theory of functions. Poincaré actually developed the concept of these functions as part of his doctoral thesis. Under Poincaré's definition, an automorphic function is one which is analytic in its domain and is invariant under a discrete infinite group of linear fractional transformations. Automorphic functions then generalize both trigonometric and elliptic functions.
Poincaré explains how he discovered Fuchsian functions:
See also
Automorphic factor
Automorphic function
Maass cusp form
Automorphic Forms on GL(2), a book by H. Jacquet and Robert Langlands
Jacobi form
Notes
References
Henryk Iwaniec, Spectral Methods of Automorphic Forms, Second Edition, (2002) (Volume 53 in Graduate Studies in Mathematics), American Mathematical Society, Providence, RI
Daniel Bump, "Automorphic Forms and Representations", 1998, Cambridge University Press
Stephen Gelbart (1975), "Automorphic forms on Adele groups",
External links
Lie groups | Automorphic form | [
"Mathematics"
] | 1,864 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures"
] |
782,162 | https://en.wikipedia.org/wiki/Cone%20%28topology%29 | In topology, especially algebraic topology, the cone of a topological space is intuitively obtained by stretching X into a cylinder and then collapsing one of its end faces to a point. The cone of X is denoted by or by .
Definitions
Formally, the cone of X is defined as:
$CX = (X\times [0,1])\cup_p v,$
where $v$ is a point (called the vertex of the cone) and $p$ is the projection to that point. In other words, it is the result of attaching the cylinder $X \times [0,1]$ by its face $X\times\{0\}$ to a point $v$ along the projection $p\colon X\times\{0\}\to v$.
If $X$ is a non-empty compact subspace of Euclidean space, the cone on $X$ is homeomorphic to the union of segments from $X$ to any fixed point $v$ such that these segments intersect only in $v$ itself. That is, the topological cone agrees with the geometric cone for compact spaces when the latter is defined. However, the topological cone construction is more general.
The cone is a special case of a join: $CX \simeq X\star \{v\}$, the join of $X$ with a single point $v$.
Examples
Here we often use a geometric cone ($CX$ where $X$ is a non-empty compact subspace of Euclidean space). The considered spaces are compact, so we get the same result up to homeomorphism.
The cone over a point $p$ of the real line is a line segment in $\mathbb{R}^2$, $\{p\}\times [0,1]$.
The cone over two points {0, 1} is a "V" shape with endpoints at {0} and {1}.
The cone over a closed interval I of the real line is a filled-in triangle (with one of the edges being I), otherwise known as a 2-simplex (see the final example).
The cone over a polygon P is a pyramid with base P.
The cone over a disk is the solid cone of classical geometry (hence the concept's name).
The cone over a circle given by
$\{(x,y,z)\in\mathbb{R}^3 \mid x^2+y^2=1,\ z=0\}$
is the curved surface of the solid cone:
$\{(x,y,z)\in\mathbb{R}^3 \mid x^2+y^2=(1-z)^2,\ 0\le z\le 1\}.$
This in turn is homeomorphic to the closed disc.
More general examples:
The cone over an n-sphere is homeomorphic to the closed (n + 1)-ball.
The cone over an n-ball is also homeomorphic to the closed (n + 1)-ball.
The cone over an n-simplex is an (n + 1)-simplex.
Properties
All cones are path-connected since every point can be connected to the vertex point. Furthermore, every cone is contractible to the vertex point by the homotopy
$h_t(x,s) = (x,(1-t)s).$
The cone is used in algebraic topology precisely because it embeds a space as a subspace of a contractible space.
When X is compact and Hausdorff (essentially, when X can be embedded in Euclidean space), then the cone $CX$ can be visualized as the collection of lines joining every point of X to a single point. However, this picture fails when X is not compact or not Hausdorff, as generally the quotient topology on $CX$ will be finer than the topology of the set of lines joining X to a point.
Cone functor
The map $X\mapsto CX$ induces a functor $C\colon \mathbf{Top}\to \mathbf{Top}$ on the category of topological spaces Top. If $f\colon X\to Y$ is a continuous map, then $Cf\colon CX\to CY$ is defined by
$(Cf)([x,t]) = [f(x),t],$
where square brackets denote equivalence classes.
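A brief check of why this is well defined, spelled out here for completeness (a routine verification rather than part of the original text): the map $f\times\mathrm{id}\colon X\times[0,1]\to Y\times[0,1]$ sends $X\times\{0\}$ into $Y\times\{0\}$, so it descends to the quotients and gives a continuous map $Cf\colon CX\to CY$. Moreover
$C(\mathrm{id}_X)=\mathrm{id}_{CX}, \qquad C(g\circ f) = Cg\circ Cf,$
which is exactly the statement that $C$ is a functor.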
Reduced cone
If $(X,x_0)$ is a pointed space, there is a related construction, the reduced cone, given by
$(X\times [0,1])\,/\,\bigl(X\times\{0\}\,\cup\,\{x_0\}\times[0,1]\bigr),$
where we take the basepoint of the reduced cone to be the equivalence class of $(x_0,0)$. With this definition, the natural inclusion $x\mapsto (x,1)$ becomes a based map. This construction also gives a functor, from the category of pointed spaces to itself.
See also
Cone (disambiguation)
Suspension (topology)
Desuspension
Mapping cone (topology)
Join (topology)
References
Allen Hatcher, Algebraic topology. Cambridge University Press, Cambridge, 2002. xii+544 pp. and
Topology
Algebraic topology | Cone (topology) | [
"Physics",
"Mathematics"
] | 741 | [
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
23,635,894 | https://en.wikipedia.org/wiki/Non-autonomous%20mechanics | Non-autonomous mechanics describe non-relativistic mechanical systems subject to time-dependent transformations. In particular, this is the case of mechanical systems whose Lagrangians and Hamiltonians depend on the time. The configuration space of non-autonomous mechanics is a fiber bundle over the time axis coordinated by .
This bundle is trivial, but its different trivializations correspond to the choice of different non-relativistic reference frames. Such a reference frame also is represented by a connection
on which takes a form with respect to this trivialization. The corresponding covariant differential
determines the relative velocity with respect to a reference frame .
As a consequence, non-autonomous mechanics (in particular, non-autonomous Hamiltonian mechanics) can be formulated as a covariant classical field theory (in particular covariant Hamiltonian field theory) on $Q\to\mathbb{R}$. Accordingly, the velocity phase space of non-autonomous mechanics is the jet manifold $J^1Q$ of $Q\to\mathbb{R}$ provided with the coordinates $(t,q^i,q^i_t)$. Its momentum phase space is the vertical cotangent bundle $V^*Q$ of $Q\to\mathbb{R}$ coordinated by $(t,q^i,p_i)$ and endowed with the canonical Poisson structure. The dynamics of Hamiltonian non-autonomous mechanics is defined by a Hamiltonian form $H = p_i\,dq^i - \mathcal{H}\,dt$.
One can associate to any Hamiltonian non-autonomous system an equivalent Hamiltonian autonomous system on the cotangent bundle $T^*Q$ of $Q$ coordinated by $(t,q^i,p_0,p_i)$ and provided with the canonical symplectic form; its Hamiltonian is $\mathcal{H}^* = p_0 + \mathcal{H}$.
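A minimal worked example of this extended, autonomous description (the oscillator and the coordinate names are chosen here for illustration and are not taken from the text above): let $Q=\mathbb{R}\times\mathbb{R}$ with coordinates $(t,q)$ and take the time-dependent Hamiltonian
$\mathcal{H}(t,q,p) = \frac{p^2}{2m} + \frac{1}{2}\,k(t)\,q^2$
on $V^*Q$. On $T^*Q$ with coordinates $(t,q,p_0,p)$, Hamilton's equations for the autonomous Hamiltonian $\mathcal{H}^* = p_0 + \mathcal{H}$ with respect to the canonical symplectic structure read
$\dot t = 1,\qquad \dot q = \frac{p}{m},\qquad \dot p = -k(t)\,q,\qquad \dot p_0 = -\frac{\partial\mathcal H}{\partial t},$
reproducing the original time-dependent dynamics with $t$ promoted to a dynamical variable conjugate to $p_0$.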
See also
Analytical mechanics
Non-autonomous system (mathematics)
Hamiltonian mechanics
Symplectic manifold
Covariant Hamiltonian field theory
Free motion equation
Relativistic system (mathematics)
References
De Leon, M., Rodrigues, P., Methods of Differential Geometry in Analytical Mechanics (North Holland, 1989).
Echeverria Enriquez, A., Munoz Lecanda, M., Roman Roy, N., Geometrical setting of time-dependent regular systems. Alternative models, Rev. Math. Phys. 3 (1991) 301.
Carinena, J., Fernandez-Nunez, J., Geometric theory of time-dependent singular Lagrangians, Fortschr. Phys., 41 (1993) 517.
Mangiarotti, L., Sardanashvily, G., Gauge Mechanics (World Scientific, 1998) .
Giachetta, G., Mangiarotti, L., Sardanashvily, G., Geometric Formulation of Classical and Quantum Mechanics (World Scientific, 2010) ().
Classical mechanics
Hamiltonian mechanics
Symplectic geometry | Non-autonomous mechanics | [
"Physics",
"Mathematics"
] | 506 | [
"Theoretical physics",
"Classical mechanics stubs",
"Classical mechanics",
"Hamiltonian mechanics",
"Mechanics",
"Theoretical physics stubs",
"Dynamical systems"
] |
15,676,423 | https://en.wikipedia.org/wiki/Keating%20model | In physics, The Keating Model is a model that theoretical physicist Patrick N. Keating introduced in 1966 to describe forces induced on neighboring atoms when one atom moves in a solid.
The term most often applies to the forces on first- and second-nearest neighboring atoms that arise when an atom is moved in tetrahedrally-bonded solids, such as diamond, silicon, germanium, and a number of other covalent crystals with the diamond or zinc blende structures.
Crystalline solids generally consist of an ordered array of interconnected atoms, generated by repetition of a unit cell in three dimensions, and are of two extreme types—ionic crystals, and covalent crystals. Others are intermediate: partly ionic and partly covalent. Ionic crystals are made up of quite different ions, such as Na+ and Cl− in common salt, for example, while covalent crystals such as diamond are made up of atoms that share electrons in a covalent bond.
In either case, attractive and repulsive forces resist moving an atom/ion or a set of them from their equilibrium positions, thus giving solids their rigidity against compressive, tensile, and shear stresses. The nature and strength of these forces is important for the scientific understanding of solids since they determine the way the solid responds to these stresses (elastic constants), the velocity of sound waves in it, its infra-red absorption, and many other properties.
Description
The Keating model is the result of a general method proposed to ensure that the elastic strain energy satisfies the requirement that it is invariant under a simple rotation of the crystal, without deformation. It is a formalism for the way adjacent and close-by atoms respond when one or more atoms move in covalently bonded crystals. It is also a specific parameterization of this response for diamond, silicon, and germanium. (see the article listed under "Further Reading").
The general method is applicable for small atomic displacements to all crystal structures. It has been extended by P. N. Keating to include anharmonic effects (and calculate third-order elastic constants), and many other researchers have extended it to include forces between the covalent bonds, and augment it in other ways.
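The article does not print Keating's expression, but for orientation the strain energy of the model for a diamond-structure crystal is commonly quoted in the following two-parameter form (the prefactors 3/16 and 3/8 follow the convention widespread in the later literature and should be read as an assumption here rather than as a quotation of the original paper):
$U = \frac{3}{16}\,\frac{\alpha}{d^2}\sum_{\text{bonds}}\bigl(\mathbf{r}_{ij}\cdot\mathbf{r}_{ij} - d^2\bigr)^2 \;+\; \frac{3}{8}\,\frac{\beta}{d^2}\sum_{\text{bond pairs}}\Bigl(\mathbf{r}_{ij}\cdot\mathbf{r}_{ik} + \tfrac{d^2}{3}\Bigr)^2,$
where the $\mathbf{r}_{ij}$ are nearest-neighbour bond vectors, $d$ is the equilibrium bond length, $\alpha$ is the bond-stretching force constant and $\beta$ the bond-bending force constant; the term $+d^2/3$ encodes the tetrahedral bond angle, for which $\cos\theta = -1/3$.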
The key paper that introduced the model was one of the 50 highest-impact papers over a century of Physical Review publications. The model has been, and is, used by many research scientists for calculating elastic constants, lattice dynamics, band structure, dislocation strains, atomic configurations at surfaces and interfaces, and other purposes for a wide range of solids, including amorphous (i.e., non-crystalline) materials.
References
Further reading
Chemical bonding | Keating model | [
"Physics",
"Chemistry",
"Materials_science"
] | 542 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
15,678,964 | https://en.wikipedia.org/wiki/Carbonyl%20metallurgy | Carbonyl metallurgy is used to manufacture products of iron, nickel, steel, and other metals. Coatings are produced by vapor plating using metal carbonyl vapors. These are metal-ligand complexes where carbon monoxide is bonded to individual atoms of metals .
Iron carbonyl is stable as iron pentacarbonyl, in which five carbon monoxide molecules are pendantly bonded to the iron atom, while nickel carbonyl is stable as nickel tetracarbonyl, which has four carbon monoxide molecules pendantly bonded to the nickel atom. Both can be formed by the exposure of the powdered metal to carbon monoxide gas at temperatures of around 75 degrees Celsius. Both metal carbonyls decompose near 175 °C, resulting in a vapor-plated metallic coating. The thickness of the vapor-plated deposit can be increased to the desired thickness by controlling the amount of metal carbonyl used and the duration of the plating process.
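For orientation, the chemistry summarized above corresponds to the well-known reversible carbonylation reactions (standard stoichiometry; the temperatures simply restate the figures already given here):
$\mathrm{Fe} + 5\,\mathrm{CO} \rightleftharpoons \mathrm{Fe(CO)_5}$
$\mathrm{Ni} + 4\,\mathrm{CO} \rightleftharpoons \mathrm{Ni(CO)_4}$
The forward (carbonyl-forming) direction dominates at roughly 75 °C, while heating the vapor to about 175 °C drives the reverse reaction, depositing the metal and releasing carbon monoxide for reuse.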
Vale Inco produces over 100 million pounds (ca. 45000 tonnes) of nickel metal annually by the carbonyl process. The carbonyl process has been used to produce molds in custom shapes for industry. Such molds have been used in plastic molding and other manufacturing techniques.
William Jenkin developed many of the techniques and procedures used in carbonyl metallurgy.
Carbonyl metallurgy is useful as a low-temperature metal coating technique that may find many applications in the future.
See also
Chemical vapor deposition
Further reading
Iron recovery and steel manufacture using carbonyl chemistry - http://www.space-mining.com/IRONRECOVERY.htm (archived: https://web.archive.org/web/20120210151936/http://www.space-mining.com/IRONRECOVERY.htm)
Beneficiation of asteroidal iron by carbonyl metallurgy - http://www.space-mining.com/beneficiation.html (archived: https://web.archive.org/web/20120210152024/http://www.space-mining.com/beneficiation.html)
William Jenkin inventor of numerous carbonyl processes - https://web.archive.org/web/20080511182043/http://www.wcjenkin.com/
Preparation of Iron Carbonyl -
Preparation of metallic shaped bodies - http://www.patentstorm.us/patents/5802437-claims.html
Chemical processes
Coatings
Metallurgy | Carbonyl metallurgy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 529 | [
"Metallurgy",
"Coatings",
"Materials science",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
15,680,284 | https://en.wikipedia.org/wiki/Commissioning%20%28construction%29 | In construction, commissioning or commissioning process (often abbreviated Cx) is an integrated, systematic process to ensure, that all building systems perform interactively according to the "Design Intent", through documented verification. The commissioning process establishes and documents the "Owner's Project Requirements (OPR)" criteria for system function, performance expectations, maintainability; verify and document compliance with these criteria throughout all phases of the project (design, manufacturing, installation, construction, startup, testing, and operations). Commissioning procedures require a collaborative team effort and 'should' begin during the pre-design or planning phase of the project, through the design and construction phases, initial occupancy phase, training of operations and maintenance (O&M) staff, and into occupancy (for warranty and future re-commissioning).
Historically, "commissioning", as referenced in building design and construction, referred to the process by which the heating, ventilation, and air conditioning (HVAC) systems of a building were tested and balanced according to established standards prior to the owner's acceptance. HVAC commissioning historically did not include other interactive, supporting, or supplemental building systems that did not directly affect the performance of the HVAC systems.
In 2005, the U.S. General Services Administration (GSA) published The Building Commissioning Guide. The guide provides a process for including building commissioning in the planning, design, construction and post-construction phases of a project.
As demands for energy and water conservation, occupant comfort, life safety, and system criticality grew, and as building-system technology improved, owners' performance expectations and technical requirements expanded. The need to improve, integrate, and commission other (and more) systems in turn expanded the scope of building commissioning. In modern facilities, many systems are integrated (directly or indirectly) in operation, and each affects, or depends on, the proper operation, function, control, and sequencing of the others. This can become very complex and creates many points of sub-optimal operation, or failure, when so many systems require, or affect, the interaction of each other.
For example, power sources (utility, generation, battery/cell) control and monitoring, air movement control, smoke control, fire suppression, fire alarm, security door egress/evacuation control, elevator control, space containment/infiltration, staging and sequencing of every interacting system, its sub-system, equipment, and components each operating and interacting correctly in every operating Mode (normal, startup, shutdown, maintenance, economy, emergency, etc.).
This list can go well beyond this example, even in the most basic, typical, facility today. As more building systems are integrated, a deficiency in one component can result in sub-optimal operation and performance among other components and systems. Through system testing and "integrated systems testing" (IST) verification of all interrelationships, effects, modes of operation, and performance can be verified and documented to comply with the 'Owner's Project Requirements' and Architect/Engineers documented 'Design Intent' performance.
Thus, 'Whole Building Commissioning' (or 'Total Building Commissioning') has become the accepted standard, certainly for government and critical-facility owners, and also where conservation and efficiency goals call for a fully verified, operational facility. Partial building commissioning (commissioning only specific equipment, functions, or systems) is still used, but the interrelations of automated systems as designed today branch throughout many other systems within even basic buildings. The Owner's Project Requirements and the architect/engineer's design should clearly identify the scope and expectations of commissioning.
Definition
The following descriptions of the different types of commissioning come from the California Commissioning Collaborative (CCC), Guide for New Buildings (2006).
"The term commissioning comes from shipbuilding. A commissioned ship is one deemed ready for service. Before being awarded this title, however, a ship must pass several milestones. Equipment is installed and tested, problems are identified and corrected, and the prospective crew is extensively trained. A commissioned ship is one whose materials, systems, and staff have successfully completed a thorough quality assurance process.
Building commissioning takes the same approach to new buildings. When a building is initially commissioned it undergoes an intensive quality assurance process that begins during design and continues through construction, occupancy, and operations. Commissioning ensures that the new building operates initially as the owner intended and that building staff are prepared to operate and maintain its systems and equipment.
Retro-commissioning is the application of the commissioning process to existing buildings. Retro-commissioning is a process that seeks to improve how building equipment and systems function together. Depending on the age of the building, retro-commissioning can often resolve problems that occurred during design or construction, or address problems that have developed throughout the building's life. In all, retro-commissioning improves a building's operations and maintenance (O&M) procedures to enhance overall building performance.
Re-Commissioning is another type of commissioning that occurs when a building that has already been commissioned undergoes another commissioning process. The decision to recommission may be triggered by a change in building use or ownership, the onset of operational problems, or some other need. Ideally, a plan for recommissioning is established as part of a new building's original commissioning process or an existing building's retro-commissioning process."
In practice
While the practice of Building Commissioning is still fairly new (40 years) in the building construction industry, it has become more (not completely) industry standard practice as building owners require higher expectations of performance and return of investment. The Commissioning process inherently, and through design, improves the quality of the project from initial planning/design through construction and occupancy.
Building Commissioning is typically led by a recognized professional experienced in commissioning who oversees, leads, and serves as the ultimately responsible individual for the management of the process/program and deliverable in the representation of, or contracted to the Owner/Developer. Typically identified as the "Commissioning Provider" (CxP). The "Commissioning Provider" may be a member of the Owner, Engineer, Construction/Project Manager, Contractor, or independent Third Party. The industry standard most recognized and recommended arrangement is for the CxP to be an Independent Third Party Commissioning Provider (also acronym "CxP") contracted directly to the Owner. This facilitates a more unbiased performance in the representation of the Owner. The "Commissioning Provider" may have subordinates/peers who participate directly in his/her oversight and commissioning execution/documentation team. This team is typically identified as the Commissioning Provider's Team. Of course, there are many variations to this as well. In many cases and ideally, there is an ongoing building enhancing and commissioning program and team for the life of the building. Building commissioning is a quality-focused process necessary for both non-complex and complex modern construction projects.
The breadth of the industry, its services, benefits, requirements, and documentation is vast. Although the basic Cx process, phases, and steps are fairly standard, variations in project scope, expectations, and special designs; the individual intricacies of each design, manufacturer, and interaction; and the range of assemblies, components, power and control systems, control methods, and equipment available to designers and manufacturers make every project's systems and commissioning program complex and original.
Service method
While the service method can vary from owner to owner and project to project, the basic formula for a successful building commissioning process involves a synergy team from pre-design to develop the owner's project requirements (OPR), commissioning scope, and plan including benchmarks for success, review of design documents and checklists for achieving the OPR, development of checklists and verifying a sample of construction checklists and submittals, developing training needs and evaluating training delivered by the contractors, witnessing and verifying construction phase tests, and periodic site observations during the construction phase, and performing commissioning functional testing as the project nears completion.
Commissioning Provider (CxP)
The commissioning provider (CxP) is generally (and preferably) contracted directly to the building owner to ensure unbiased performance of the CxP. The CxP may be a subcontractor (or employee) of the building owner, architect, or design engineer.
It is recommended that the CxP be contracted early in the project planning stages included in pre-design and design charrettes, and maintained throughout the design, construction, and final acceptance of the project at a minimum. Having the CxP on the team early provides opportunity to identify possible operation, installation, testing, and performance issues long before they become a construction issue. The CxP works closely with the owner's representative, building/facility operating engineer, architect, design engineer, general contractor, and all trade subcontractors. The CxP typically is responsible for leading and managing the project commissioning process (design and/or construction) and works closely with the design, construction, and operation teams in a co-operative work environment that focuses on teamwork throughout the building's design, construction, and post construction.
A CxP's ability to add value to a project is rooted in their ability to create positive working relationships with all parties involved and not pointing fingers when issues arise. It is important that the CxP clearly identifies the communication processes/streams, the project goals and expectations (from the OPR), and the team member responsibilities. A CxP has to be able to give open constructive criticism while also being able to listen attentively. The CxP's primary goal is to provide a completed and properly operating product to the building owner and occupant/user.
The CxP's work and performance of service is equally or primarily in the background performing design, submittal, O&M Manual reviews and development of testing and commissioning processes for the project, as well as documenting the commissioning efforts. The CxP attends design and construction meetings, performs site construction observations, observes factory equipment testing, directs and observes functional performance testing of systems and equipment. The CxP typically does not actually perform the hands-on testing, as these are actually performed by the manufacturer, vendor, or trade contractors, and directed and observed by the CxP utilizing testing procedures and expected performance outcome previously identified by the CxP during the commissioning document development process.
The CxP typically prepares a commissioning specification and commissioning plan during the project design phase. The design engineer also may develop the commissioning specification (and rarely the commissioning plan) in situations where the CxP has not been so contracted, or brought into the design team during the design process. The commissioning plan is a live document that outlines the commissioning processes and expectation based on the Owner's OPR, the design engineer's basis of design (BOD) and the project construction document (drawings and specifications). The commissioning plan is modified as the commissioning process progresses throughout the design, construction, and final acceptance of the facility. The functional performance test procedures are typically developed by the CxP with assistance of the trade contractors, vendors, and manufacturers based on the design engineer's contract documents. These same parties and the design engineer, and owner's representative (typically the facility operating engineer) review the functional performance test procedures and expected outcomes prior to testing. The systems, equipment, items, processes, modes, and sequences of operations to be tested by the CxP (contractors or others) should be detailed and identified in the design engineer's construction documents (drawings and specifications), the construction request for proposal (RFP), the contractors' bid submission, the commissioning specifications, the commissioning plan, and the contractor's submittals. Of utmost importance, often neglected by contractors, are the equipment / systems "installation and operations manuals" (IOM or IO&M) "specific to the project" (not generic). The IOM's along with complete, and very detailed, sequence of operations (SOO) and control drawings/documents submittal "specific to the project" (not generic) are of utmost importance to the CxP to perform the review and develop proper testing procedures. Timely delivery of these documents to the CxP is important to facilitate the CxP ample time to review, develop test, obtain reviews, and implement changes prior to scheduling of any testing.
To provide any benefit, the facility, systems, and equipment must be thoroughly designed, submitted to, and approved by a responsible, thorough, professional architectural and engineering design team to function correctly. The design team incorporates the documented owner's program of requirements (OPR) which identifies the owner's systems, equipment, materials, control, and performance expectations. The design team identifies and documents the project basis of design (BOD) which specifically identifies the OPR items, how each was implemented in the design (or modified), and the final design basis for systems, equipment, materials, control, and performance expectations.
The fast-track nature of the design and construction process (as experienced in 2011) often leads to missed planning, design, and even construction items. Items missed during the design and construction process can often be identified by the CxP during development of the functional and performance test procedures or during functional and performance tests.
The commissioning team, led by the CxP, has a primary objective of verifying proper installation, operation, and performance based on the project design (BOD) and the OPR. The commissioning of the facility, systems, and / or equipment provides verification, identifies issues and discrepancies, and if designed and constructed properly, ultimately enhances the facility total quality, control, performance, and efficiency which in turn provides increased sustainability.
Building Automation Systems (BAS)
Building management systems (BMS) or building automation systems (BAS) provide control of the building systems. These typically include heating, ventilating, air-conditioning and refrigeration (HVAC/R), electrical power, lighting, fire suppression and alarm, and security systems, etc. Building controls also have the ability to monitor and control systems to improve performance, conserve energy, conserve water, and control lighting. The greater control provides the ability to improve a buildings performance, environmental impact, and the user / occupant's environment. Direct digital controls (DDC) with real time monitoring and history provide the ability to acquire system data real time or with trend-logging, or trending, (over a predetermined period of time) to observe performance, issues / troubles, and identify possible improvements to operations and maintenance.
Building systems and equipment (HVAC, electrical, etc.) operate via the control systems (BAS, BMS, and similar) based on a designed sequence of operations (SOO) typically developed by the design engineer (specification) and modified during the submittal process by the trade contractors (and reviewed and approved by the design engineer). This SOO is also reviewed by the CxP who utilizes the SOO to develop the functional performance test procedures. The functional performance test procedures are typically developed by the CxP with assistance of the trade contractors, vendors, and manufacturers, reviewed by same, and the design engineer. The systems, equipment, items, processes, modes, and sequences of operations to be tested by the CxP (contractors or others) should be detailed and identified in the design engineer's construction documents (drawings and specifications), the construction request for proposal (RFP), the contractors' bid submission, and the commissioning specifications and commissioning plan. The commissioning specification and commissioning plan are typically developed by the CxP during the design phase of the project.
The CxP works closely with the controls contractor to verify the control programming and identifies corrective issues during reviews and the functional performance testing. By performing the functional performance testing it is often, if not always, found where there are deficiencies in the systems or control and identifies items for improvement. Each and every point and sequence is typically not required to be tested by the CxP. The contractors typically hold the responsibility of testing and verifying each and every point and sequence, and the CxP performs a test of a sample of the items after the contractors have tested, repaired and verified. Re-testing of the same, or another sampling, by the CxP is often required to re-verify deficiencies identified during the initial testing.
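As a purely illustrative sketch of how BAS trend-log data might be screened during functional testing or retro-commissioning, written here in Python (the column names, CSV layout, file name, and threshold are assumptions made for this example, not part of any standard or of this article):

import csv

def simultaneous_heating_cooling(path, threshold=5.0):
    """Flag trend-log rows where heating and cooling valves are open at the same time."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            heat = float(row["heating_valve_pct"])   # assumed column name
            cool = float(row["cooling_valve_pct"])   # assumed column name
            if heat > threshold and cool > threshold:
                flagged.append((row["timestamp"], heat, cool))
    return flagged

for ts, heat, cool in simultaneous_heating_cooling("ahu1_trend.csv"):
    print(f"{ts}: heating {heat:.0f}% and cooling {cool:.0f}% open simultaneously")

Simultaneous heating and cooling is one of the common deficiencies such a screen can surface so that it may be investigated and corrected during the commissioning process.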
System degradation
It is estimated by Texas A&M researchers that as much as 20% of the energy used in an average commercial building is waste associated with poorly operated systems.
Buildings systems under-perform for several reasons:
They were never properly configured
The design did not account for all sources of building efficiency
The building is not properly maintained
The use of the building has changed over time
Types of building commissioning
New Construction Building Commissioning
Existing Building Commissioning:
Retro-Commissioning
Re-Commissioning: Recommissioning is the methodical process of testing and adjusting the aforementioned systems in existing buildings.
Ongoing (or Continuous) Building Commissioning
Refer to "Commissioning Process Overview" and "Retro-Commissioning Process Overview" for a diagram of very basic activities included in each phase of the Commissioning processes.
Common commissioned building systems
It is common to include HVAC and BMS systems in the commissioning process. Often the installations for fire safety, lighting controls, plumbing, electrical distribution, and, in more recent years, the building enclosure may also be included within the scope of the commissioning process.
Commissioning costs and savings
The payback time for the commissioning process is based on many factors including saved/minimized energy usage, better design and fewer errors.
"Building Commissioning Costs and Savings Across Three Decades and 1,500 North American Buildings" states that the simple payback time for commissioning on new construction projects is 4.2 years.
A Danish master's thesis comparing two identical construction projects (shopping centres), commissioning being used on only one of them, found that the commissioned project used 42% less electrical energy in the operation phase.
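The simple payback figure cited above is just the ratio of first cost to annual savings; a small illustration in Python with assumed numbers (chosen only to land near the cited 4.2-year figure, not taken from the study):

commissioning_cost = 50_000          # one-time commissioning cost in dollars (assumed)
annual_energy_savings = 12_000       # energy cost avoided per year in dollars (assumed)
simple_payback_years = commissioning_cost / annual_energy_savings
print(f"Simple payback: {simple_payback_years:.1f} years")   # about 4.2 years with these inputs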
See also
References
Sources
https://web.archive.org/web/20100125125628/http://www.bCxP.org/resources/pubs/index.htm
Building Commissioning Association (BCxA) - New Construction Building Commissioning Best Practicies, 2016
National Institute of Building Sciences - NIBS Guideline 3-2012, Building Enclosure Commissioning Process BECx
Natural Resources Canada (NRCan), Commissioning Guide for New Buildings (2010)
ASHRAE Guideline 0-2019
ASHRAE Standard 202-2018
A Practical Guide to the Commissioning Process, Thomas T. Jarløv, 2021
International Code Council, ICC G4-2018 Guideline for Commissioning
External links
Building Commissioning Association
AABC Commissioning Group (ACG)
Eastern Canadian Chapter
National Environmental Balancing Bureau
Building engineering
Building automation | Commissioning (construction) | [
"Engineering"
] | 3,842 | [
"Building engineering",
"Automation",
"Civil engineering",
"Building automation",
"Architecture"
] |
12,830,014 | https://en.wikipedia.org/wiki/Eight-point%20algorithm | The eight-point algorithm is an algorithm used in computer vision to estimate the essential matrix or the fundamental matrix related to a stereo camera pair from a set of corresponding image points. It was introduced by Christopher Longuet-Higgins in 1981 for the case of the essential matrix. In theory, this algorithm can be used also for the fundamental matrix, but in practice the normalized eight-point algorithm, described by Richard Hartley in 1997, is better suited for this case.
The algorithm's name derives from the fact that it estimates the essential matrix or the fundamental matrix from a set of eight (or more) corresponding image points. However, variations of the algorithm can be used for fewer than eight points.
Coplanarity constraint
One may express the epipolar geometry of two cameras and a point in space with an algebraic equation. Observe that, no matter where the point is in space, the vectors , and belong to the same plane. Call the coordinates of point in the left eye's reference frame and call the coordinates of in the right eye's reference frame and call the rotation and translation between the two reference frames s.t. is the relationship between the coordinates of in the two reference frames. The following equation always holds because the vector generated from is orthogonal to both and :
Because , we get
.
Replacing with , we get
Observe that may be thought of as a matrix; Longuet-Higgins used the symbol to denote it. The product is often called the essential matrix and denoted with .
The vectors are parallel to the vectors and therefore the coplanarity constraint holds if we substitute these vectors. If we call the coordinates of the projections of onto the left and right image planes, then the coplanarity constraint may be written as
Basic algorithm
The basic eight-point algorithm is here described for the case of estimating the essential matrix . It consists of three steps. First, it formulates a homogeneous linear equation, where the solution is directly related to , and then solves the equation, taking into account that it may not have an exact solution. Finally, the internal constraints of the resulting matrix are managed. The first step is described in Longuet-Higgins' paper, the second and third steps are standard approaches in estimation theory.
The constraint defined by the essential matrix is
for corresponding image points represented in normalized image coordinates . The problem which the algorithm solves is to determine for a set of matching image points. In practice, the image coordinates of the image points are affected by noise and the solution may also be over-determined which means that it may not be possible to find which satisfies the above constraint exactly for all points. This issue is addressed in the second step of the algorithm.
Step 1: Formulating a homogeneous linear equation
With
and and
the constraint can also be rewritten as
or
where
and
that is, represents the essential matrix in the form of a 9-dimensional vector and this vector must be orthogonal to the vector which can be seen as a vector representation of the matrix .
Each pair of corresponding image points produces a vector . Given a set of 3D points this corresponds to a set of vectors and all of them must satisfy
for the vector . Given sufficiently many (at least eight) linearly independent vectors it is possible to determine in a straightforward way. Collect all vectors as the columns of a matrix and it must then be the case that
This means that is the solution to a homogeneous linear equation.
Step 2: Solving the equation
A standard approach to solving this equation implies that is a right singular vector of corresponding to a singular value that equals zero. Provided that at least eight linearly independent vectors are used to construct it follows that this singular vector is unique (disregarding scalar multiplication) and, consequently, and then can be determined.
In the case that more than eight corresponding points are used to construct it is possible that it does not have any singular value equal to zero. This case occurs in practice when the image coordinates are affected by various types of noise. A common approach to deal with this situation is to describe it as a total least squares problem; find which minimizes
when . The solution is to choose as the left singular vector corresponding to the smallest singular value of . A reordering of this back into a matrix gives the result of this step, here referred to as .
Step 3: Enforcing the internal constraint
Another consequence of dealing with noisy image coordinates is that the resulting matrix may not satisfy the internal constraint of the essential matrix, that is, two of its singular values are equal and nonzero and the other is zero. Depending on the application, smaller or larger deviations from the internal constraint may or may not be a problem. If it is critical that the estimated matrix satisfies the internal constraints, this can be accomplished by finding the matrix of rank 2 which minimizes
where is the resulting matrix from Step 2 and the Frobenius matrix norm is used. The solution to the problem is given by first computing a singular value decomposition of :
where are orthogonal matrices and is a diagonal matrix which contains the singular values of . In the ideal case, one of the diagonal elements of should be zero, or at least small compared to the other two which should be equal. In any case, set
where are the largest and second largest singular values in respectively. Finally, is given by
The matrix is the resulting estimate of the essential matrix provided by the algorithm.
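The three steps above translate almost directly into a few lines of linear algebra. The following Python/NumPy sketch is one possible implementation; the variable names, the convention that the epipolar constraint is written y'ᵀEy = 0, and the row-wise stacking are assumptions, since the corresponding symbols were stripped from the text above. Because the constraint vectors are stacked here as rows of the design matrix, the solution is its right singular vector; a column-stacked formulation, as described above, would instead use the left singular vector.

```python
import numpy as np

def eight_point_essential(y, y_prime):
    """Basic eight-point estimate of the essential matrix E.

    y, y_prime : (N, 2) arrays of corresponding points in normalized image
    coordinates (camera intrinsics already removed), with N >= 8, assuming
    the epipolar constraint is written as y'^T E y = 0.
    """
    n = y.shape[0]
    a = np.empty((n, 9))
    for i in range(n):
        x1, y1 = y[i]          # point in the first image
        x2, y2 = y_prime[i]    # corresponding point in the second image
        # Step 1: one row per correspondence, coefficients of E stacked row-wise.
        a[i] = [x2 * x1, x2 * y1, x2,
                y2 * x1, y2 * y1, y2,
                x1, y1, 1.0]

    # Step 2: total least squares -- the right singular vector for the smallest
    # singular value minimizes ||a e|| subject to ||e|| = 1.
    _, _, vt = np.linalg.svd(a)
    e_est = vt[-1].reshape(3, 3)

    # Step 3: enforce the internal constraint of an essential matrix
    # (two equal singular values, third equal to zero).
    u, s, vt2 = np.linalg.svd(e_est)
    sigma = (s[0] + s[1]) / 2.0
    return u @ np.diag([sigma, sigma, 0.0]) @ vt2
```

For the fundamental matrix the same code applies, except that the rank-2 projection in Step 3 keeps the two largest singular values unchanged and only zeroes the smallest one.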
Normalized algorithm
The basic eight-point algorithm can in principle be used also for estimating the fundamental matrix . The defining constraint for is
where are the homogeneous representations of corresponding image coordinates (not necessary normalized). This means that it is possible to form a matrix in a similar way as for the essential matrix and solve the equation
for which is a reshaped version of . By following the procedure outlined above, it is then possible to determine from a set of eight matching points. In practice, however, the resulting fundamental matrix may not be useful for determining epipolar constraints.
Difficulty
The problem is that the resulting often is ill-conditioned. In theory, should have one singular value equal to zero and the rest are non-zero. In practice, however, some of the non-zero singular values can become small relative to the larger ones. If more than eight corresponding points are used to construct , where the coordinates are only approximately correct, there may not be a well-defined singular value which can be identified as approximately zero. Consequently, the solution of the homogeneous linear system of equations may not be sufficiently accurate to be useful.
Cause
Hartley addressed this estimation problem in his 1997 article. His analysis of the problem shows that the problem is caused by the poor distribution of the homogeneous image coordinates in their space, . A typical homogeneous representation of the 2D image coordinate is
where both lie in the range 0 to 1000–2000 for a modern digital camera. This means that the first two coordinates in vary over a much larger range than the third coordinate. Furthermore, if the image points which are used to construct lie in a relatively small region of the image, for example at , again the vector points in more or less the same direction for all points. As a consequence, will have one large singular value and the remaining are small.
Solution
As a solution to this problem, Hartley proposed that the coordinate system of each of the two images should be transformed, independently, into a new coordinate system according to the following principle.
The origin of the new coordinate system should be centered (have its origin) at the centroid (center of gravity) of the image points. This is accomplished by a translation of the original origin to the new one.
After the translation the coordinates are uniformly scaled so that the mean of distances from the origin to the points equals .
This principle results, normally, in a distinct coordinate transformation for each of the two images. As a result, new homogeneous image coordinates are given by
where are the transformations (translation and scaling) from the old to the new normalized image coordinates. This normalization is only dependent on the image points which are used in a single image and is, in general, distinct from normalized image coordinates produced by a normalized camera.
The epipolar constraint based on the fundamental matrix can now be rewritten as
where . This means that it is possible to use the normalized homogeneous image coordinates to estimate the transformed fundamental matrix using the basic eight-point algorithm described above.
The purpose of the normalization transformations is that the matrix , constructed from the normalized image coordinates, in general, has a better condition number than has. This means that the solution is more well-defined as a solution of the homogeneous equation than is relative to . Once has been determined and reshaped into the latter can be de-normalized to give according to
In general, this estimate of the fundamental matrix is a better one than would have been obtained by estimating from the un-normalized coordinates.
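A sketch of the normalization and de-normalization steps described above, continuing the NumPy example from the basic algorithm, is given below. The target mean distance of √2 is the constant Hartley proposed (the value itself was stripped from the text), and the function names are assumptions.

```python
import numpy as np

def hartley_normalize(pts):
    """Translate the centroid to the origin and scale so that the mean
    distance of the points from the origin equals sqrt(2) (Hartley's choice).

    pts : (N, 2) array of pixel coordinates.
    Returns the normalized (N, 2) points and the 3x3 similarity transform T
    acting on homogeneous coordinates.
    """
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2.0) / mean_dist
    t = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (pts_h @ t.T)[:, :2], t

def normalized_eight_point(p, p_prime, solve_eight_point):
    """Normalized eight-point algorithm for the fundamental matrix F.

    solve_eight_point is any routine that solves the homogeneous system and
    enforces a rank-2 constraint (e.g., a variant of the earlier sketch that
    only zeroes the smallest singular value).
    """
    n1, t1 = hartley_normalize(p)
    n2, t2 = hartley_normalize(p_prime)
    f_bar = solve_eight_point(n1, n2)
    # De-normalize so that p'^T F p = 0 holds in the original pixel coordinates.
    f = t2.T @ f_bar @ t1
    return f / f[2, 2]
```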
Using fewer than eight points
Each point pair contributes one constraining equation on the elements of . Since has five degrees of freedom, five point pairs should therefore be sufficient to determine . David Nistér proposed an efficient solution to estimate the essential matrix from a set of five paired points, known as the five-point algorithm. Hartley et al. later proposed a modified and more stable five-point algorithm based on Nistér's algorithm.
See also
Essential matrix
Fundamental matrix
Trifocal tensor
References
Further reading
Geometry in computer vision | Eight-point algorithm | [
"Mathematics"
] | 1,896 | [
"Geometry in computer vision",
"Geometry"
] |
12,833,402 | https://en.wikipedia.org/wiki/Formate%20dehydrogenase | Formate dehydrogenases are a set of enzymes that catalyse the oxidation of formate to carbon dioxide, donating the electrons to a second substrate, such as NAD+ in formate:NAD+ oxidoreductase () or to a cytochrome in formate:ferricytochrome-b1 oxidoreductase (). This family of enzymes has attracted attention as inspiration or guidance on methods for the carbon dioxide fixation, relevant to global warming.
Function
NAD-dependent formate dehydrogenases are important in methylotrophic yeast and bacteria, being vital in the catabolism of C1 compounds such as methanol. The cytochrome-dependent enzymes are more important in anaerobic metabolism in prokaryotes. For example, in E. coli, the formate:ferricytochrome-b1 oxidoreductase is an intrinsic membrane protein with two subunits and is involved in anaerobic nitrate respiration.
NAD-dependent reaction
Formate + NAD+ ⇌ CO2 + NADH + H+
Cytochrome-dependent reaction
Formate + 2 ferricytochrome b1 ⇌ CO2 + 2 ferrocytochrome b1 + 2 H+
Molybdopterin, molybdenum and selenium dependence
The metal-dependent Fdh's feature Mo or W at their active sites. These active sites resemble the motif seen in DMSO reductase, with two molybdopterin cofactors bound to Mo/W in a bidentate fashion. The fifth and sixth ligands are sulfide and either cysteinate or selenocysteinate.
The mechanism of action appears to involve 2e redox of the metal centers, induced by hydride transfer from formate and release of carbon dioxide:
In this scheme, represents the four thiolate-like ligands provided by the two dithiolene cofactors, the molybdopterins. The dithiolene and cysteinyl/selenocysteinyl ligands are redox-innocent. In terms of the molecular details, the mechanism remains uncertain, despite numerous investigations. Most mechanisms assume that formate does not coordinate to Mo/W, in contrast to typical Mo/W oxo-transferases (e.g., DMSO reductase). A popular mechanistic proposal entails transfer of H− from formate to the Mo/W(VI)=S group.
Transmembrane domain
Formate dehydrogenase consists of two transmembrane domains; three α-helices of the β-subunit and four transmembrane helices from the gamma-subunit.
The β-subunit of formate dehydrogenase is present in the periplasm, with a single transmembrane α-helix spanning the membrane and anchoring the β-subunit to the inner-membrane surface. The β-subunit has two subdomains, each containing two [4Fe-4S] ferredoxin clusters. The [4Fe-4S] clusters are judiciously aligned in a chain through the subunit with low separation distances, which allows rapid electron flow through [4Fe-4S]-1, [4Fe-4S]-4, [4Fe-4S]-2, and [4Fe-4S]-3 to the periplasmic heme b in the γ-subunit. The electron flow is then directed across the membrane to a cytoplasmic heme b in the γ-subunit.
The γ-subunit of formate dehydrogenase is a membrane-bound cytochrome b consisting of four transmembrane helices and two heme b groups which produce a four-helix bundle which aids in heme binding. The heme b cofactors bound to the gamma subunit allow for the hopping of electrons through the subunit. The transmembrane helices maintain both heme b groups, while only three provide the heme ligands thereby anchoring Fe-heme. The periplasmic heme b group accepts electrons from [4Fe-4S]-3 clusters of the β-subunit’s periplasmic domain. The cytoplasmic heme b group accepts electrons from the periplasmic heme b group, where electron flow is then directed towards the menaquinone (vitamin K) reduction site, present in the transmembrane domain of the gamma subunit. The menaquinone reduction site in the γ-subunit, accepts electrons through the binding of a histidine ligand of the cytoplasmic heme b.
See also
Formate dehydrogenase (cytochrome)
Formate dehydrogenase (cytochrome-c-553)
Formate dehydrogenase (NADP+)
Microbial metabolism
Additional reading
References
External links
ENZYME link for EC 1.2.2.1
ENZYME link for EC 1.2.1.2
Cellular respiration
Metabolism
EC 1.2.2
EC 1.2.1 | Formate dehydrogenase | [
"Chemistry",
"Biology"
] | 1,062 | [
"Biochemistry",
"Cellular respiration",
"Metabolism",
"Cellular processes"
] |
12,835,538 | https://en.wikipedia.org/wiki/Arieh%20Warshel | Arieh Warshel (; born November 20, 1940) is an Israeli-American biochemist and biophysicist. He is a pioneer in computational studies on functional properties of biological molecules, Distinguished Professor of Chemistry and Biochemistry, and holds the Dana and David Dornsife Chair in Chemistry at the University of Southern California. He received the 2013 Nobel Prize in Chemistry, together with Michael Levitt and Martin Karplus for "the development of multiscale models for complex chemical systems".
Biography
Warshel was born to a Jewish family in 1940 in kibbutz Sde Nahum, Mandatory Palestine. Warshel served in the Israeli Armored Corps. After serving in the Israeli Army (final rank Captain), Warshel attended the Technion, Haifa, where he received his BSc degree in chemistry, summa cum laude, in 1966. Subsequently, he earned both MSc and PhD degrees in Chemical Physics (in 1967 and 1969, respectively), with Shneior Lifson at the Weizmann Institute of Science, Israel. After his PhD, he did postdoctoral work at Harvard University until 1972, and from 1972 to 1976 he returned to the Weizmann Institute and worked at the Laboratory of Molecular Biology, Cambridge, England. After being denied tenure by the Weizmann Institute in 1976, he joined the faculty of the department of chemistry at USC. He was awarded the 2013 Nobel Prize in Chemistry.
As a soldier, he fought in both the 1967 Six-Day War and the 1973 Yom Kippur War, attaining the rank of captain in the IDF.
As part of Shenzhen's 13th Five-Year Plan funding research in emerging technologies and opening "Nobel laureate research labs", in April 2017 he opened the Warshel Institute for Computational Biology at the Chinese University of Hong Kong, Shenzhen campus.
Honors
Warshel is known for his work on computational biochemistry and biophysics, in particular for pioneering computer simulations of the functions of biological systems, and for developing what is known today as Computational Enzymology.
He is a member of many scientific organisations, most importantly:
Elected member of the United States National Academy of Sciences (2009)
Fellow of the Royal Society of Chemistry (2008)
Fellow of the Biophysical Society (2000)
Fellow of the American Association for the Advancement of Science (2012)
Honorary fellow of the Royal Society of Chemistry (2014)
Honorary doctorate at Bar-Ilan University (2014)
Honorary doctorate of the Faculty of Science and Technology at Uppsala University (2015)
Awards
Annual Award of the International Society of Quantum Biology and Pharmacology (1993)
Tolman Medal (2003)
President's award for computational biology from the ISQBP (2006)
RSC Soft Matter and Biophysical Chemistry Award (2012)
Nobel Prize in Chemistry (2013) together with Martin Karplus and Michael Levitt for "the development of multiscale models for complex chemical systems".
Golden Plate Award of the American Academy of Achievement (2014)
The Founders Award of the Biophysical Society (2014)
The 2013 Israel Chemical Society Gold Medal (2014)
Major research achievements
Arieh Warshel made major contributions in introducing computational methods for structure–function correlation of biological molecules, pioneering and co-pioneering programs, methods and key concepts for detailed computational studies of functional properties of biological molecules using Cartesian-based force field programs, the combined Quantum Chemistry/Molecular mechanics (i.e., QM/MM) method for simulating enzymatic reactions, the first molecular dynamics simulation of a biological process, microscopic electrostatic models for proteins, free energy perturbation in proteins and other key advances. It was for the development of these methods that Warshel shared the 2013 Nobel Prize in Chemistry.
Books
Arieh Warshel. From Kibbutz Fishponds to The Nobel Prize: Taking Molecular Functions into Cyberspace, World Scientific Publishing, 2021.
See also
List of Israeli Nobel laureates
List of Jewish Nobel laureates
Coarse-grained modeling
Empirical valence bond
References
External links
Faculty profile, USC Dornsife
Warshel research group at the University of Southern California
1940 births
Living people
Nobel laureates in Chemistry
American Nobel laureates
Israeli Nobel laureates
American biochemists
American biophysicists
Foreign members of the Russian Academy of Sciences
Israeli biochemists
Israeli biophysicists
Israeli emigrants to the United States
Israeli Jews
Israeli officers
Jewish chemists
Jewish American physicists
Jewish military personnel
Members of the United States National Academy of Sciences
Theoretical chemists
University of Southern California faculty
Academic staff of the Chinese University of Hong Kong, Shenzhen
Weizmann Institute of Science alumni
Computational chemists | Arieh Warshel | [
"Chemistry"
] | 922 | [
"Quantum chemistry",
"Physical chemists",
"Computational chemists",
"Theoretical chemistry",
"Computational chemistry",
"Theoretical chemists"
] |
12,836,631 | https://en.wikipedia.org/wiki/Java%20Evolutionary%20Computation%20Toolkit | ECJ is a freeware evolutionary computation research system written in Java. It is a framework that supports a variety of evolutionary computation techniques, such as genetic algorithms, genetic programming, evolution strategies, coevolution, particle swarm optimization, and differential evolution. The framework models iterative evolutionary processes using a series of pipelines arranged to connect one or more subpopulations of individuals with selection, breeding (such as crossover, and mutation operators that produce new individuals. The framework is open source and is distributed under the Academic Free License. ECJ was created by Sean Luke, a computer science professor at George Mason University, and is maintained by Sean Luke and a variety of contributors.
Features (listed from ECJ's project page):
General Features:
GUI with charting
Platform-independent checkpointing and logging
Hierarchical parameter files
Multithreading
Mersenne Twister Random Number Generators
Abstractions for implementing a variety of EC forms.
EC Features:
Asynchronous island models over TCP/IP
Master/Slave evaluation over multiple processors
Genetic Algorithms/Programming style Steady State and Generational evolution, with or without Elitism
Evolutionary-Strategies style (mu, lambda) and (mu+lambda) evolution
Very flexible breeding architecture
Many selection operators
Multiple subpopulations and species
Inter-subpopulation exchanges
Reading populations from files
Single- and Multi-population coevolution
SPEA2 multiobjective optimization
Particle Swarm Optimization
Differential Evolution
Spatially embedded evolutionary algorithms
Hooks for other multiobjective optimization methods
Packages for parsimony pressure
GP Tree Representations:
Set-based Strongly Typed Genetic Programming
Ephemeral Random Constants
Automatically Defined Functions and Automatically Defined Macros
Multiple tree forests
Six tree-creation algorithms
Extensive set of GP breeding operators
Seven pre-done GP application problem domains (ant, regression, multiplexer, lawnmower, parity, two-box, edge)
Vector (GA/ES) Representations:
Fixed-Length and Variable-Length Genomes
Arbitrary representations
Five pre-done vector application problem domains (sum, rosenbrock, sphere, step, noisy-quartic)
Other Representations:
NEAT
Multiset-based genomes in the rule package, for evolving Pitt-approach rulesets or other set-based representations.
See also
Paradiseo, a metaheuristics framework
MOEA Framework, an open source Java framework for multiobjective evolutionary algorithms
References
ECJ project page
Wilson, G. C.; McIntyre, A.; Heywood, M. I. (2004), "Resource Review: Three Open Source Systems for Evolving Programs – Lilgp, ECJ and Grammatical Evolution", Genetic Programming and Evolvable Machines, 5 (19): 103–105, Kluwer Academic Publishers. ISSN 1389-2576
Evolutionary computation
Agent-based software
Free software programmed in Java (programming language) | Java Evolutionary Computation Toolkit | [
"Biology"
] | 572 | [
"Bioinformatics",
"Evolutionary computation"
] |
12,837,561 | https://en.wikipedia.org/wiki/DXP%20reductoisomerase | DXP reductoisomerase (1-deoxy-D-xylulose 5-phosphate reductoisomerase or DXR) is an enzyme that interconverts 1-deoxy-D-xylulose 5-phosphate (DXP) and 2-C-methyl-D-erythritol 4-phosphate (MEP).
It is classified under EC 1.1.1.267. It is normally abbreviated DXR, but it is sometimes named IspC, as the product of the ispC gene.
DXR is part of the MEP pathway (nonmevalonate pathway) of isoprenoid precursor biosynthesis. DXR is inhibited by fosmidomycin.
This enzyme is required for terpenoid biosynthesis in some organisms, since it is a key enzyme on the MEP pathway for the production of the isoprenoid precursors IPP and DMAPP. In Arabidopsis thaliana 1-deoxy-D-xylulose 5-phosphate reductoisomerase is the first committed enzyme of the MEP pathway for isoprenoid precursor biosynthesis. The enzyme requires Mn2+, Co2+ or Mg2+ for activity, with Mn2+ being most effective.
References
External links
Protein families
EC 1.1.1 | DXP reductoisomerase | [
"Biology"
] | 290 | [
"Protein families",
"Protein classification"
] |
12,837,662 | https://en.wikipedia.org/wiki/2-C-Methylerythritol%204-phosphate | 2-C-Methyl--erythritol 4-phosphate (MEP) is an intermediate on the MEP pathway (non-mevalonate pathway) of isoprenoid precursor biosynthesis. It is the first committed metabolite on that pathway on the route to IPP and DMAPP.
See also
DXP reductoisomerase
MEP pathway (formerly known as the non-mevalonate pathway)
Fosmidomycin
References
External links
Organophosphates
Monosaccharide derivatives | 2-C-Methylerythritol 4-phosphate | [
"Chemistry",
"Biology"
] | 116 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
22,278,053 | https://en.wikipedia.org/wiki/Discontinuities%20of%20monotone%20functions | In the mathematical field of analysis, a well-known theorem describes the set of discontinuities of a monotone real-valued function of a real variable; all discontinuities of such a (monotone) function are necessarily jump discontinuities and there are at most countably many of them.
Usually, this theorem appears in the literature without a name. It is called Froda's theorem in some recent works; in his 1929 dissertation, Alexandru Froda stated that the result was previously well known and provided his own elementary proof for the sake of convenience. Prior work on discontinuities had already been discussed in the 1875 memoir of the French mathematician Jean Gaston Darboux.
Definitions
Denote the limit from the left by
and denote the limit from the right by
If and exist and are finite then the difference is called the jump of at
Consider a real-valued function of real variable defined in a neighborhood of a point If is discontinuous at the point then the discontinuity will be a removable discontinuity, or an essential discontinuity, or a jump discontinuity (also called a discontinuity of the first kind).
If the function is continuous at then the jump at is zero. Moreover, if is not continuous at the jump can be zero at if
Precise statement
Let be a real-valued monotone function defined on an interval Then the set of discontinuities of the first kind is at most countable.
One can prove that all points of discontinuity of a monotone real-valued function defined on an interval are jump discontinuities and hence, by our definition, of the first kind. With this remark the theorem takes the stronger form:
Let be a monotone function defined on an interval Then the set of discontinuities is at most countable.
Proofs
This proof starts by proving the special case where the function's domain is a closed and bounded interval The proof of the general case follows from this special case.
Proof when the domain is closed and bounded
Two proofs of this special case are given.
Proof 1
Let be an interval and let be a non-decreasing function (such as an increasing function).
Then for any
Let and let be points inside at which the jump of is greater or equal to :
For any so that
Consequently,
and hence
Since we have that the number of points at which the jump is greater than is finite (possibly even zero).
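Several formulas in the argument above were lost in extraction; the key chain of inequalities can be summarized as follows (a sketch, with the hypotheses written out explicitly: f non-decreasing on [a, b], jump threshold α > 0, and jump points a < x₁ < ⋯ < xₙ < b; these symbol names are assumptions).

```latex
f(b) - f(a)
  \;\ge\; \sum_{i=1}^{n} \bigl( f(x_i^{+}) - f(x_i^{-}) \bigr)
  \;\ge\; n\,\alpha ,
\qquad\text{hence}\qquad
n \;\le\; \frac{f(b) - f(a)}{\alpha} .
```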
Define the following sets:
Each set is finite or the empty set. The union
contains all points at which the jump is positive and hence contains all points of discontinuity. Since every is at most countable, their union is also at most countable.
If is non-increasing (or decreasing) then the proof is similar. This completes the proof of the special case where the function's domain is a closed and bounded interval.
Proof 2
For a monotone function , let mean that is monotonically non-decreasing and let mean that is monotonically non-increasing. Let be a monotone function and let denote the set of all points in the domain of at which is discontinuous (which is necessarily a jump discontinuity).
Because has a jump discontinuity at so there exists some rational number that lies strictly in between (specifically, if then pick so that while if then pick so that holds).
It will now be shown that if are distinct, say with then
If then implies so that
If on the other hand then implies so that
Either way,
Thus every is associated with a unique rational number (said differently, the map defined by is injective).
Since is countable, the same must be true of
Proof of general case
Suppose that the domain of (a monotone real-valued function) is equal to a union of countably many closed and bounded intervals; say its domain is (no requirements are placed on these closed and bounded intervals).
It follows from the special case proved above that for every index the restriction of to the interval has at most countably many discontinuities; denote this (countable) set of discontinuities by
If has a discontinuity at a point in its domain then either is equal to an endpoint of one of these intervals (that is, ) or else there exists some index such that in which case must be a point of discontinuity for (that is, ).
Thus the set of all points of at which is discontinuous is a subset of which is a countable set (because it is a union of countably many countable sets) so that its subset must also be countable (because every subset of a countable set is countable).
In particular, because every interval (including open intervals and half open/closed intervals) of real numbers can be written as a countable union of closed and bounded intervals, it follows that any monotone real-valued function defined on an interval has at most countable many discontinuities.
To make this argument more concrete, suppose that the domain of is an interval that is not closed and bounded (and hence by Heine–Borel theorem not compact).
Then the interval can be written as a countable union of closed and bounded intervals with the property that any two consecutive intervals have an endpoint in common:
If then where is a strictly decreasing sequence such that In a similar way if or if
In any interval there are at most countable many points of discontinuity, and since a countable union of at most countable sets is at most countable, it follows that the set of all discontinuities is at most countable.
Jump functions
Examples. Let 1 < 2 < 3 < ⋅⋅⋅ be a countable subset of the compact interval [,] and let μ1, μ2, μ3, ... be a positive sequence with finite sum. Set
where χA denotes the characteristic function of a compact interval . Then is a non-decreasing function on [,], which is continuous except for jump discontinuities at for ≥ 1. In the case of finitely many jump discontinuities, is a step function. The examples above are generalised step functions; they are very special cases of what are called jump functions or saltus-functions.
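The defining formula of the example above was stripped in extraction; it assigns to each x the sum of the weights μₙ of the jump points xₙ lying at or below x. The following numerical sketch illustrates that construction; the half-open convention χ_{[xₙ, b]} (i.e., the jump is approached from the left) is an assumption about the stripped formula.

```python
def jump_function(x, jump_points, weights):
    """Evaluate f(x) = sum of weights mu_n over all jump points x_n <= x.

    jump_points, weights : equal-length sequences; sum(weights) finite.
    The resulting f is non-decreasing, has a jump of weights[n] at
    jump_points[n], and is continuous elsewhere.
    """
    return sum(mu for x_n, mu in zip(jump_points, weights) if x_n <= x)

# A step function with three jumps on [0, 1]:
pts, mus = [0.2, 0.5, 0.9], [0.1, 0.3, 0.05]
print([round(jump_function(x, pts, mus), 3) for x in (0.0, 0.3, 0.7, 1.0)])
# -> [0, 0.1, 0.4, 0.45]
```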
More generally, the analysis of monotone functions has been studied by many mathematicians, starting from Abel, Jordan and Darboux. Following , replacing a function by its negative if necessary, only the case of non-negative non-decreasing functions has to be considered. The domain [,] can be finite or have ∞ or −∞ as endpoints.
The main task is to construct monotone functions — generalising step functions — with discontinuities at a given denumerable set of points and with prescribed left and right discontinuities at each of these points.
Let ( ≥ 1) lie in (, ) and take λ1, λ2, λ3, ... and μ1, μ2, μ3, ... non-negative with finite sum and with λ + μ > 0 for each . Define
for for
Then the jump function, or saltus-function, defined by
is non-decreasing on [, ] and is continuous except for jump discontinuities at for ≥ 1.
To prove this, note that sup || = λ + μ, so that Σ converges uniformly to . Passing to the limit, it follows that
and
if is not one of the 's.
Conversely, by a differentiation theorem of Lebesgue, the jump function is uniquely determined by the properties: (1) being non-decreasing and non-positive; (2) having given jump data at its points of discontinuity ; (3) satisfying the boundary condition () = 0; and (4) having zero derivative almost everywhere.
Property (4) can be checked following , and . Without loss of generality, it can be assumed that is a non-negative jump function defined on the compact [,], with discontinuities only in (,).
Note that an open set of (,) is canonically the disjoint union of at most countably many open intervals ; that allows the total length to be computed ℓ()= Σ ℓ(). Recall that a null set is a subset such that, for any arbitrarily small ε' > 0, there is an open containing with ℓ() < ε'. A crucial property of length is that, if and are open in (,), then ℓ() + ℓ() = ℓ( ∪ ) + ℓ( ∩ ). It implies immediately that the union of two null sets is null; and that a finite or countable set is null.
Proposition 1. For > 0 and a normalised non-negative jump function , let () be the set of points such that
for some , with < < . Then
() is open and has total length ℓ(()) ≤ 4 −1 (() – ()).
Note that () consists of the points where the slope of is greater than near . By definition () is an open subset of (, ), so can be written as a disjoint union of at most countably many open intervals = (, ). Let be an interval with closure in and ℓ() = ℓ()/2. By compactness, there are finitely many open intervals of the form (,) covering the closure of . On the other hand, it is elementary that, if three fixed bounded open intervals have a common point of intersection, then their union contains one of the three intervals: indeed just take the supremum and infimum points to identify the endpoints. As a result, the finite cover can be taken as adjacent open intervals (,), (,), ... only intersecting at consecutive intervals. Hence
Finally sum both sides over .
Proposition 2. If is a jump function, then '() = 0 almost everywhere.
To prove this, define
a variant of the Dini derivative of . It will suffice to prove that for any fixed > 0, the Dini derivative satisfies () ≤ almost everywhere, i.e. on a null set.
Choose ε > 0, arbitrarily small. Starting from the definition of the jump function = Σ , write = + with = Σ≤ and = Σ> where ≥ 1. Thus is a step function having only finitely many discontinuities at for ≤ and is a non-negative jump function. It follows that = ' + = except at the points of discontinuity of . Choosing sufficiently large so that Σ> λ + μ < ε, it follows that is a jump function such that () − () < ε and ≤ off an open set with length less than 4ε/.
By construction ≤ off an open set with length less than 4ε/. Now set ε' = 4ε/ — then ε' and are arbitrarily small and ≤ off an open set of length less than ε'. Thus ≤ almost everywhere. Since could be taken arbitrarily small, and hence also ' must vanish almost everywhere.
As explained in , every non-decreasing non-negative function can be decomposed uniquely as a sum of a jump function and a continuous monotone function : the jump function is constructed by using the jump data of the original monotone function and it is easy to check that = − is continuous and monotone.
See also
Monotone function
Notes
References
Bibliography
(subscription required)
; reprinted by Dover, 2003
Reprint of the 1955 original.
Articles containing proofs
Theory of continuous functions
Theorems in real analysis | Discontinuities of monotone functions | [
"Mathematics"
] | 2,413 | [
"Theorems in mathematical analysis",
"Theory of continuous functions",
"Theorems in real analysis",
"Topology",
"Articles containing proofs"
] |
22,279,266 | https://en.wikipedia.org/wiki/Diffiety | In mathematics, a diffiety () is a geometrical object which plays the same role in the modern theory of partial differential equations that algebraic varieties play for algebraic equations, that is, to encode the space of solutions in a more conceptual way. The term was coined in 1984 by Alexandre Mikhailovich Vinogradov as portmanteau from differential variety.
Intuitive definition
In algebraic geometry the main objects of study (varieties) model the space of solutions of a system of algebraic equations (i.e. the zero locus of a set of polynomials), together with all their "algebraic consequences". This means that, applying algebraic operations to this set (e.g. adding those polynomials to each other or multiplying them with any other polynomials) will give rise to the same zero locus. In other words, one can actually consider the zero locus of the algebraic ideal generated by the initial set of polynomials.
When dealing with differential equations, apart from applying algebraic operations as above, one has also the option to differentiate the starting equations, obtaining new differential constraints. Therefore, the differential analogue of a variety should be the space of solutions of a system of differential equations, together with all their "differential consequences". Instead of considering the zero locus of an algebraic ideal, one needs therefore to work with a differential ideal.
An elementary diffiety will consist therefore of the infinite prolongation of a differential equation , together with an extra structure provided by a special distribution. Elementary diffieties play the same role in the theory of differential equations as affine algebraic varieties do in the theory of algebraic equations. Accordingly, just like varieties or schemes are composed of irreducible affine varieties or affine schemes, one defines a (non-elementary) diffiety as an object that locally looks like an elementary diffiety.
Formal definition
The formal definition of a diffiety, which relies on the geometric approach to differential equations and their solutions, requires the notions of jets of submanifolds, prolongations, and Cartan distribution, which are recalled below.
Jet spaces of submanifolds
For instance, for one recovers just points in and for one recovers the Grassmannian of -dimensional subspaces of . More generally, all the projections are fibre bundles.
As a particular case, when has a structure of fibred manifold over an -dimensional manifold , one can consider submanifolds of given by the graphs of local sections of . Then the notion of jet of submanifolds boils down to the standard notion of jet of sections, and the jet bundle turns out to be an open and dense subset of .
Prolongations of submanifolds
The -jet prolongation of a submanifold is
The map is a smooth embedding and its image , called the prolongation of the submanifold , is a submanifold of diffeomorphic to .
Cartan distribution on jet spaces
A space of the form , where is any submanifold of whose prolongation contains the point , is called an -plane (or jet plane, or Cartan plane) at . The Cartan distribution on the jet space is the distribution defined bywhere is the span of all -planes at .
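Since several symbols in this section were lost in extraction, it may help to recall the standard local-coordinate description of the Cartan distribution on a jet space; the adapted jet coordinates (xⁱ, u^α_σ) and the multi-index notation below are assumptions introduced here and do not come from the text.

```latex
% On J^k with adapted coordinates (x^i, u^\alpha_\sigma), |\sigma| \le k,
% the Cartan distribution is the common kernel of the contact (Cartan) forms
\omega^{\alpha}_{\sigma}
  \;=\; du^{\alpha}_{\sigma} \;-\; \sum_{i=1}^{n} u^{\alpha}_{\sigma i}\, dx^{i},
\qquad |\sigma| \le k - 1 .
```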
Differential equations
A differential equation of order on the manifold is a submanifold ; a solution is defined to be an -dimensional submanifold such that . When is a fibred manifold over , one recovers the notion of partial differential equations on jet bundles and their solutions, which provide a coordinate-free way to describe the analogous notions of mathematical analysis. While jet bundles are enough to deal with many equations arising in geometry, jet spaces of submanifolds provide a greater generality, used to tackle several PDEs imposed on submanifolds of a given manifold, such as Lagrangian submanifolds and minimal surfaces.
As in the jet bundle case, the Cartan distribution is important in the algebro-geometric approach to differential equations because it allows to encode solutions in purely geometric terms. Indeed, a submanifold is a solution if and only if it is an integral manifold for , i.e. for all .
One can also look at the Cartan distribution of a PDE more intrinsically, definingIn this sense, the pair encodes the information about the solutions of the differential equation .
Prolongations of PDEs
Given a differential equation of order , its -th prolongation is defined aswhere both and are viewed as embedded submanifolds of , so that their intersection is well-defined. However, such an intersection is not necessarily a manifold again, hence may not be an equation of order . One therefore usually requires to be "nice enough" such that at least its first prolongation is indeed a submanifold of .
Below we will assume that the PDE is formally integrable, i.e. all prolongations are smooth manifolds and all projections are smooth surjective submersions. Note that a suitable version of Cartan–Kuranishi prolongation theorem guarantees that, under minor regularity assumptions, checking the smoothness of a finite number of prolongations is enough. Then the inverse limit of the sequence extends the definition of prolongation to the case when goes to infinity, and the space has the structure of a profinite-dimensional manifold.
Definition of a diffiety
An elementary diffiety is a pair where is a -th order differential equation on some manifold, its infinite prolongation and its Cartan distribution. Note that, unlike in the finite case, one can show that the Cartan distribution is -dimensional and involutive. However, due to the infinite-dimensionality of the ambient manifold, the Frobenius theorem does not hold, therefore is not integrable
A diffiety is a triple , consisting of
a (generally infinite-dimensional) manifold
the algebra of its smooth functions
a finite-dimensional distribution ,
such that is locally of the form , where is an elementary diffiety and denotes the algebra of smooth functions on . Here locally means a suitable localisation with respect to the Zariski topology corresponding to the algebra .
The dimension of is called dimension of the diffiety and its denoted by , with a capital D (to distinguish it from the dimension of as a manifold).
Morphisms of diffieties
A morphism between two diffieties and consists of a smooth map whose pushforward preserves the Cartan distribution, i.e. such that, for every point , one has .
Diffieties together with their morphisms define the category of differential equations.
Applications
Vinogradov sequence
The Vinogradov -spectral sequence (or, for short, Vinogradov sequence) is a spectral sequence associated to a diffiety, which can be used to investigate certain properties of the formal solution space of a differential equation by exploiting its Cartan distribution .
Given a diffiety , consider the algebra of differential forms over
and the corresponding de Rham complex:
Its cohomology groups contain some structural information about the PDE; however, due to the Poincaré Lemma, they all vanish locally. In order to extract much more and even local information, one thus needs to take the Cartan distribution into account and introduce a more sophisticated sequence. To this end, let
be the submodule of differential forms over whose restriction to the distribution vanishes, i.e.
Note that is actually a differential ideal since it is stable w.r.t. to the de Rham differential, i.e. .
Now let be its -th power, i.e. the linear subspace of generated by . Then one obtains a filtration
and since all ideals are stable, this filtration completely determines the following spectral sequence:
The filtration above is finite in each degree, i.e. for every
so that the spectral sequence converges to the de Rham cohomology of the diffiety. One can therefore analyse the terms of the spectral sequence order by order to recover information on the original PDE. For instance:
corresponds to action functionals constrained by the PDE . In particular, for , the corresponding Euler-Lagrange equation is .
corresponds to conservation laws for solutions of .
is interpreted as characteristic classes of bordisms of solutions of .
Many higher-order terms do not have an interpretation yet.
Variational bicomplex
As a particular case, starting with a fibred manifold and its jet bundle instead of the jet space , instead of the -spectral sequence one obtains the slightly less general variational bicomplex.
More precisely, any bicomplex determines two spectral sequences: one of the two spectral sequences determined by the variational bicomplex is exactly the Vinogradov -spectral sequence. However, the variational bicomplex was developed independently from the Vinogradov sequence.
Similarly to the terms of the spectral sequence, many terms of the variational bicomplex can be given a physical interpretation in classical field theory: for example, one obtains cohomology classes corresponding to action functionals, conserved currents, gauge charges, etc.
Secondary calculus
Vinogradov developed a theory, known as secondary calculus, to formalise in cohomological terms the idea of a differential calculus on the space of solutions of a given system of PDEs (i.e. the space of integral manifolds of a given diffiety).
In other words, secondary calculus provides substitutes for functions, vector fields, differential forms, differential operators, etc., on a (generically) very singular space where these objects cannot be defined in the usual (smooth) way on the space of solution. Furthermore, the space of these new objects are naturally endowed with the same algebraic structures of the space of the original objects.
More precisely, consider the horizontal De Rham complex of a diffiety, which can be seen as the leafwise de Rham complex of the involutive distribution or, equivalently, the Lie algebroid complex of the Lie algebroid . Then the complex becomes naturally a commutative DG algebra together with a suitable differential . Then, possibly tensoring with the normal bundle , its cohomology is used to define the following "secondary objects":
secondary functions are elements of the cohomology , which is naturally a commutative DG algebra (it is actually the first page of the -spectral sequence discussed above);
secondary vector fields are elements of the cohomology , which is naturally a Lie algebra; moreover, it forms a graded Lie-Rinehart algebra together with ;
secondary differential -forms are elements of the cohomology , which is naturally a commutative DG algebra.
Secondary calculus can also be related to the covariant Phase Space, i.e. the solution space of the Euler-Lagrange equations associated to a Lagrangian field theory.
See also
Secondary calculus and cohomological physics
Partial differential equations on Jet bundles
Differential ideal
Differential calculus over commutative algebras
Another way of generalizing ideas from algebraic geometry is differential algebraic geometry.
References
External links
The Diffiety Institute (frozen since 2010)
The Levi-Civita Institute (successor of above site)
Geometry of Differential Equations
Differential Geometry and PDEs
Homological algebra
Partial differential equations
Differential geometry
Manifolds | Diffiety | [
"Mathematics"
] | 2,328 | [
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Manifolds",
"Homological algebra"
] |
22,282,528 | https://en.wikipedia.org/wiki/BMW%20Central%20Building | The BMW Central Building Located in Leipzig, Germany was the winning design submitted for competition by Pritzker Prize winning architect, Zaha Hadid. The central building is the nerve center for BMW's new $1.55 billion complex built to manufacture the BMW 3 Series.
Concept
The BMW factory prior to the construction of the central building existed as three disconnected buildings, each playing an integral part in the production of BMW 3 Series vehicles. These three production buildings were designed in-house by BMW's real estate and facility management group, housing separately the fabrication of raw auto bodies, the paint shop, and the final assembly hall. A competition was held for the design of a central building to function as the physical connection of the three units. It also needed to house administrative and employee spaces. Hadid's design took this idea of connectivity and used it to inform every aspect of the new building. It serves as a connection for the assembly process steps and the employees. Designed as a series of overlapping and interconnecting levels and spaces, it blurs the separation between parts of the complex and creates a level ground for both blue- and white-collar employees, visitors, and the cars.
The building
From a pool of 25 international architects, the BMW jury chose the very innovative design of Zaha Hadid as the final piece of the BMW plant in Leipzig, Germany. With no real precedent for her design, Zaha Hadid's Central Building can only be related to the revolutionary and monumental industrial designs of the past, including the Fiat Lingotto Factory by G. Matté-Trucco and the AEG Turbine Factory in Berlin by Peter Behrens. The BMW Central Building is a facility that makes up only of the campus. Serving 5,500 employees, the building functions as the most important piece of the factory, connecting the three production sheds. Each day, 650 BMW 3 Series sedans pass through the Central Building on an elevated conveyor as they move from one of the three production sheds to the next. Dim blue LED lights highlight the vehicles after each stage, as they exit one of the sheds. These conveyors not only take the vehicles from one production shed to another, but do so directly through all of the functional spaces of the Central Building. The offices, meeting rooms, and public relations facilities are all built around these elevated conveyors, creating an interesting relationship between the employees, the cars, and the public. Not only is the Central Building an office building and public relations center for the factory, it is also a very important piece of the production process at the factory. All of the load-bearing walls, floors, and office levels are made of cast-in-place concrete, while the roof structure is composed of structural steel beams and space frame construction. The facade is clad in simple materials such as corrugated metal, channel glass, and glass curtain walls.
The building has received numerous architectural awards, including a 2006 RIBA European Award, and was placed on the shortlist for the Stirling Prize.
Diagrammatic plan of BMW Central Building production sheds
Quick facts
Building Name: BMW Central Building
Location: Leipzig, Germany
Client: BMW AG, Munich, Germany
Architect: Zaha Hadid
Building Footprint:
Total Area:
Building Cost: $60 million
Groundbreaking: March 2003
Completion: May 2005
Employees in Factory: 5,500
Program: Control Functions, Offices/Admin., Meeting rooms, Cafeteria, Public Relations
Parking: 4,100 Spaces
Total Complex Cost: $1.55 Billion
Timeline
November 2001: Competition Phase #1
November 2002: Competition Phase #2
March 2002: Jury Decision
August 2002: Design Development Completed
2002/2003: Construction Documents Completed/Bidding and Negotiations
March 2003: Groundbreaking/Construction Commenced
January 2004: Steel Construction Completed
March 2004: Building Enclosed
June 2004: Car Park Completed
September 2004: Central Building Completed
May 2005: Landscaping Completed/Central Building Opened
References
Hadid, Zaha (2006). BMW Central Building. New York, NY: Princeton Architectural Press.
Architectural Record | Project Portfolio | BMW Plant Leipzig, Central Building. Retrieved April 4, 2009, from Architecture Design for Architects | Architectural Record Web site: http://archrecord.construction.com/projects/portfolio/archives/0508BMW.asp
BMW Central Building, Plant Leipzig by Zaha Hadid Architects | Project Portfolio | Architecture-Page. Retrieved April 4, 2009, from Architecture-Page | International resource for architecture and design Web site: https://web.archive.org/web/20090127024530/http://www.architecture-page.com/go/projects/bmw-central-building-plant-leipzig
Barreneche, Raul (2005, August). Zaha Hadid provides the connective tissue for a BMW complex by designing central building that brings People and cars together. Architectural Record, 193(8), 82-91.
(2005, August). Zaha Hadid: BMW Plant Leipzig, Central Building, Leipzig, Germany, 2005.. A + U: architecture and urbanism, 419(8), 66-85.
External links
(n.d.). Zaha Hadid Architects. Retrieved April 5, 2009, from : http://www.zaha-hadid.com
(n.d.). Zaha Hadid: BMW Central Building, Leipzig, Germany…Amazon.co.uk: Todd Gannon: Books. Retrieved April 5, 2009, from Amazon.co.uk: low prices in Electronics, Books, Music, DVDs & more: https://www.amazon.co.uk/Zaha-Hadid-Central-Building-Architecture/dp/1568985363
(2007, April 4). BMW Central Building. Retrieved April 4, 2009, from Architectook: https://web.archive.org/web/20090107070248/http://architectook.net/bmw-central-building/
(n.d.). The Tamed Snake - BMW's Central Building in Leipzig. Retrieved April 5, 2009, from DETAIL.de - Architekturportal und Architek: https://web.archive.org/web/20110314215211/http://www.detail.de/rw_5_Archive_En_HoleArtikel_5536_Artikel.htm
(n.d.). Zaha Hadid Architects - Central Building . Retrieved April 5, 2009, from Architecture Online: https://web.archive.org/web/20090329095101/http://www.arcspace.com/architects/hadid/bmw_central/bmw_central.html
Zaha Hadid buildings
Buildings and structures in Leipzig
Postmodern architecture
Modernist architecture in Germany
BMW
Buildings and structures completed in 2005 | BMW Central Building | [
"Engineering"
] | 1,415 | [
"Postmodern architecture",
"Architecture"
] |
22,283,794 | https://en.wikipedia.org/wiki/Hitchin%E2%80%93Thorpe%20inequality | In differential geometry the Hitchin–Thorpe inequality is a relation which restricts the topology of 4-manifolds that carry an Einstein metric.
Statement of the Hitchin–Thorpe inequality
Let M be a closed, oriented, four-dimensional smooth manifold. If there exists a Riemannian metric on M which is an Einstein metric, then
χ(M) ≥ (3/2)|τ(M)|,
where χ(M) is the Euler characteristic of M and τ(M) is the signature of M.
This inequality was first stated by John Thorpe in a footnote to a 1969 paper focusing on manifolds of higher dimension. Nigel Hitchin then rediscovered the inequality, and gave a complete characterization of the equality case in 1974; he found that if is an Einstein manifold for which equality in the Hitchin-Thorpe inequality is obtained, then the Ricci curvature of is zero; if the sectional curvature is not identically equal to zero, then is a Calabi–Yau manifold whose universal cover is a K3 surface.
Already in 1961, Marcel Berger showed that the Euler characteristic is always non-negative.
Proof
Let be a four-dimensional smooth Riemannian manifold which is Einstein. Given any point of , there exists a -orthonormal basis of the tangent space such that the curvature operator , which is a symmetric linear map of into itself, has matrix
relative to the basis . One has that is zero and that is one-fourth of the scalar curvature of at . Furthermore, under the conditions and , each of these six functions is uniquely determined and defines a continuous real-valued function on .
According to Chern-Weil theory, if is oriented then the Euler characteristic and signature of can be computed by
Equipped with these tools, the Hitchin-Thorpe inequality amounts to the elementary observation
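The formulas referred to above were stripped in extraction, and the text's eigenvalue notation for the curvature operator cannot be reliably reconstructed. In the more common notation using the Weyl curvature W = W⁺ ⊕ W⁻, the traceless Ricci tensor r̊, and the scalar curvature s (notation assumed, not taken from the text), the Chern–Weil formulas and the resulting elementary observation can be sketched as follows.

```latex
% Gauss--Bonnet and signature formulas in dimension four:
\chi(M) = \frac{1}{8\pi^{2}}\int_{M}\Bigl(\frac{s^{2}}{24} + |W|^{2}
          - \frac{|\mathring{r}|^{2}}{2}\Bigr)\,d\mu ,
\qquad
\tau(M) = \frac{1}{12\pi^{2}}\int_{M}\bigl(|W^{+}|^{2}-|W^{-}|^{2}\bigr)\,d\mu .

% Combining them:
2\chi(M) \pm 3\tau(M)
   = \frac{1}{4\pi^{2}}\int_{M}\Bigl(\frac{s^{2}}{24} + 2|W^{\pm}|^{2}
     - \frac{|\mathring{r}|^{2}}{2}\Bigr)\,d\mu
   \;\ge\; 0 \quad\text{for an Einstein metric } (\mathring{r}=0),

% which is exactly the Hitchin--Thorpe inequality
% \chi(M) \ge \tfrac{3}{2}\,|\tau(M)| .
```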
Failure of the converse
A natural question to ask is whether the Hitchin–Thorpe inequality provides a sufficient condition for the existence of Einstein metrics. In 1995, Claude LeBrun and Andrea Sambusetti independently showed that the answer is no: there exist infinitely many non-homeomorphic compact, smooth, oriented 4-manifolds that carry no Einstein metrics but nevertheless satisfy
LeBrun's examples are actually simply connected, and the relevant obstruction depends on the smooth structure of the manifold. By contrast, Sambusetti's obstruction only applies to 4-manifolds with infinite fundamental group, but the volume-entropy estimate he uses to prove non-existence only depends on the homotopy type of the manifold.
Footnotes
References
Einstein manifolds
Geometric inequalities
Einstein manifold | Hitchin–Thorpe inequality | [
"Mathematics"
] | 504 | [
"Metric spaces",
"Riemannian manifolds",
"Space (mathematics)",
"Geometric inequalities",
"Inequalities (mathematics)",
"Theorems in geometry"
] |
22,284,043 | https://en.wikipedia.org/wiki/Schreier%20domain | In abstract algebra, a Schreier domain, named after Otto Schreier, is an integrally closed domain where every nonzero element is primal; i.e., whenever x divides yz, x can be written as x = x1 x2 so that x1 divides y and x2 divides z. An integral domain is said to be pre-Schreier if every nonzero element is primal. A GCD domain is an example of a Schreier domain. The term "Schreier domain" was introduced by P. M. Cohn in 1960s. The term "pre-Schreier domain" is due to Muhammad Zafrullah.
In general, an irreducible element is primal if and only if it is a prime element. Consequently, in a pre-Schreier domain, every irreducible is prime. In particular, an atomic pre-Schreier domain is a unique factorization domain; this generalizes the fact that an atomic GCD domain is a UFD.
References
Cohn, P.M., Bezout rings and their subrings, 1968.
Zafrullah, Muhammad, On a property of pre-Schreier domains, 1987.
Ring theory | Schreier domain | [
"Mathematics"
] | 258 | [
"Fields of abstract algebra",
"Ring theory"
] |
22,285,157 | https://en.wikipedia.org/wiki/MRNA%20surveillance | mRNA surveillance mechanisms are pathways utilized by organisms to ensure fidelity and quality of messenger RNA (mRNA) molecules. There are a number of surveillance mechanisms present within cells. These mechanisms function at various steps of the mRNA biogenesis pathway to detect and degrade transcripts that have not properly been processed.
Overview
The translation of messenger RNA transcripts into proteins is a vital part of the central dogma of molecular biology. mRNA molecules are, however, prone to a host of fidelity errors which can cause errors in translation of mRNA into quality proteins. RNA surveillance mechanisms are methods cells use to assure the quality and fidelity of the mRNA molecules. This is generally achieved through marking aberrant mRNA molecule for degradation by various endogenous nucleases.
mRNA surveillance has been documented in bacteria and yeast. In eukaryotes, these mechanisms are known to function in both the nucleus and cytoplasm. Fidelity checks of mRNA molecules in the nucleus results in the degradation of improperly processed transcripts before export into the cytoplasm. Transcripts are subject to further surveillance once in the cytoplasm. Cytoplasmic surveillance mechanisms assess mRNA transcripts for the absence of or presence of premature stop codons.
Three surveillance mechanisms are currently known to function within cells: the nonsense-mediated mRNA decay pathway (NMD); the nonstop-mediated mRNA decay pathway (NSD); and the no-go-mediated mRNA decay pathway (NGD).
Nonsense-mediated mRNA decay
Overview
Nonsense-mediated decay is involved in detection and decay of mRNA transcripts which contain premature termination codons (PTCs). PTCs can arise in cells through various mechanisms: germline mutations in DNA; somatic mutations in DNA; errors in transcription; or errors in post-transcriptional mRNA processing. Failure to recognize and decay these mRNA transcripts can result in the production of truncated proteins which may be harmful to the organism. By causing decay of mRNAs encoding C-terminally truncated polypeptides, the NMD mechanism can protect cells against deleterious dominant-negative and gain-of-function effects. PTCs have been implicated in approximately 30% of all inherited diseases; as such, the NMD pathway plays a vital role in assuring overall survival and fitness of an organism.
A surveillance complex consisting of various proteins (eRF1, eRF3, Upf1, Upf2 and Upf3) is assembled and scans the mRNA for premature stop codons. The assembly of this complex is triggered by premature translation termination. If a premature stop codon is detected then the mRNA transcript is signalled for degradation – the coupling of detection with degradation occurs.
Seven smg genes (smg1-7) and three UPF genes (Upf1-3) have been identified in Saccharomyces cerevisiae and Caenorhabditis elegans as essential trans-acting factors contributing to NMD activity. All of these genes are conserved in Drosophila melanogaster and in mammals, where they also play critical roles in NMD. Throughout eukaryotes there are three components which are conserved in the process of NMD. These are the Upf1/SMG-2, Upf2/SMG-3 and Upf3/SMG-4 complexes. Upf1/SMG-2 is a phosphoprotein in multicellular organisms and is thought to contribute to NMD via its phosphorylation activity. However, the exact interactions of the proteins and their roles in NMD are currently disputed.
Mechanism in mammals
A premature stop codon must be recognized as different from a normal stop codon so that only the former triggers a NMD response. It has been observed that the ability of a nonsense codon to cause mRNA degradation depends on its relative location to the downstream sequence element and associated proteins. Studies have demonstrated that nucleotides more than 50–54 nucleotides upstream of the last exon-exon junction can target mRNA for decay. Those downstream from this region are unable to do so. Thus, nonsense codons lie more than 50-54 nucleotides upstream from the last exon boundary whereas natural stop codons are located within terminal exons. Exon junction complexes (EJCs) mark the exon-exon boundaries. EJCs are multiprotein complexes that assemble during splicing at a position about 20–24 nucleotides upstream from the splice junction. It is this EJC that provides position information needed to discriminate premature stop codons from natural stop codons. Recognition of PTCs appears to be dependent on the definitions of the exon-exon junctions. This suggests involvement of the spliceosome in mammalian NMD. Research has investigated the possibility of spliceosome involvement in mammalian NMD and has determined this is a likely possibility. Furthermore, it has been observed that NMD mechanisms are not activated by nonsense transcripts that are generated from genes that naturally do not contain introns (e.g., Histone H4, Hsp70, melanocortin-4-receptor).
When the ribosome reaches a PTC, the translation factors eRF1 and eRF3 interact with retained EJC complexes through a multiprotein bridge. The interactions of UPF1 with the terminating complex and with UPF2/UPF3 of the retained EJCs are critical. It is these interactions which target the mRNA for rapid decay by endogenous nucleases.
Mechanism in invertebrates
Studies involving organisms such as S. cerevisiae, D. melanogaster and C. elegans have shown that PTC recognition in invertebrate organisms does not involve exon-exon boundaries. These studies suggest that invertebrate NMD occurs independently of splicing. As a result, EJCs, which are responsible for marking exon-exon boundaries, are not required in invertebrate NMD. Several models have been proposed to explain how PTCs are distinguished from normal stop codons in invertebrates. One of these suggests that there may be a downstream sequence element which functions similarly to the exon junctions in mammals. A second model proposes that a widely present feature in mRNA, such as the 3' poly-A tail, might provide the positional information required for recognition. Another model, dubbed the "faux 3'UTR model", suggests that premature translation termination is distinguished from normal termination by intrinsic features that allow the cell to recognize that termination is occurring in an inappropriate environment. These mechanisms, however, have yet to be conclusively demonstrated.
Mechanism in plants
There are two mechanisms of PTC recognition in plants: according to its distance from the EJC (as in vertebrates) or from the poly-A tail. The NMD mechanism in plants induces the decay of mRNAs containing a 3'UTR longer than 300 nucleotides, which is why the proportion of mRNAs with longer 3'UTRs is much lower in plants than in vertebrates.
NMD avoidance
mRNAs with nonsense mutations are generally thought to be targeted for decay via the NMD pathway. The presence of a premature stop codon about 50-54 nucleotides 5' to the exon junction appears to be the trigger for rapid decay; however, it has been observed that some mRNA molecules with a premature stop codon are able to avoid detection and decay. In general, these mRNA molecules possess the stop codon very early in the reading frame (i.e. the PTC is AUG-proximal). This appears to contradict the currently accepted model of NMD, as this position is significantly 5' of the exon-exon junction.
This has been demonstrated in β-globin. β-globin mRNAs containing a nonsense mutation early in the first exon of the gene are more stable than NMD-sensitive mRNA molecules. The exact mechanism of detection avoidance is currently not known. It has been suggested that the poly-A binding protein (PABP) appears to play a role in this stability. It has been demonstrated in other studies that the presence of this protein near AUG-proximal PTCs appears to promote the stability of these otherwise NMD-sensitive mRNAs. It has been observed that this protective effect is not limited only to the β-globin promoter. This suggests that this NMD avoidance mechanism may be prevalent in other tissue types for a variety of genes. The current model of NMD may need to be revisited upon further studies.
Nonstop mediated mRNA decay
Overview
Nonstop mediated decay (NSD) is involved in the detection and decay of mRNA transcripts which lack a stop codon. These mRNA transcripts can arise from many different mechanisms such as premature 3' adenylation or cryptic polyadenylation signals within the coding region of a gene. This lack of a stop codon poses a significant problem for cells. Ribosomes translating the mRNA eventually translate into the 3' poly-A tail region of the transcript and stall; as a result, they cannot eject the mRNA. Ribosomes may thus become sequestered on the nonstop mRNA and would not be available to translate other mRNA molecules into proteins. Nonstop mediated decay resolves this problem by both freeing the stalled ribosomes and marking the nonstop mRNA for degradation in the cell by nucleases. Nonstop mediated decay consists of two distinct pathways which likely act in concert to decay nonstop mRNA.
Ski7 pathway
This pathway is active when the Ski7 protein is available in the cell. The Ski7 protein is thought to bind to the empty A site of the ribosome. This binding allows the ribosome to eject the stuck nonstop mRNA molecule – this event frees the ribosome and allows it to translate other transcripts. Ski7 is now associated with the nonstop mRNA, and it is this association which targets the nonstop mRNA for recognition by the cytosolic exosome. The Ski7-exosome complex rapidly deadenylates the mRNA molecule, which allows the exosome to decay the transcript in a 3' to 5' fashion.
Non-Ski7 pathway
A second type of NSD has been observed in yeast. In this mechanism, the absence of Ski7 results in the loss of poly-A tail binding PABP proteins through the action of the translating ribosome. The removal of these PABP proteins then results in the loss of the protective 5' m7G cap. The loss of the cap results in rapid degradation of the transcript by an endogenous 5'-3' exonuclease such as Xrn1.
No-Go decay
No-Go decay (NGD) is the most recently discovered surveillance mechanism. As such, it is not currently well understood. While authentic targets of NGD are poorly understood, they appear to consist largely of mRNA transcripts on which ribosomes have stalled during translation. This stall can be caused by a variety of factors including strong secondary structures, which may physically block the translational machinery from moving down the transcript. Dom34/Hbs1 likely binds near the A site of stalled ribosomes and may facilitate recycling of complexes. In some cases, the transcript is also cleaved in an endonucleolytic fashion near the stall site; however the identity of the responsible endonuclease remains contentious. The fragmented mRNA molecules are then fully degraded by the exosome in a 3' to 5' fashion and by Xrn1 in a 5' to 3' fashion.
It is not currently known how this process releases the mRNA from the ribosomes; however, Hbs1 is closely related to the Ski7 protein, which plays a clear role in ribosome release in Ski7-mediated NSD. It is postulated that Hbs1 may play a similar role in NGD.
Evolution
It is possible to determine the evolutionary history of these mechanisms by observing the conservation of key proteins implicated in each mechanism. For example: Dom34/Hbs1 are associated with NGD; Ski7 is associated with NSD; and the eRF proteins are associated with NMD. To this end, extensive BLAST searches have been performed to determine the prevalence of the proteins in various types of organisms. It has been determined that the NGD protein Hbs1 and the NMD protein eRF3 are found only in eukaryotes, whereas the NGD protein Dom34 is universal in eukaryotes and archaea. This suggests that NGD was the first mRNA surveillance mechanism to evolve. The NSD Ski7 protein appears to be restricted strictly to yeast species, which suggests that NSD is the most recently evolved surveillance mechanism. By default, this leaves NMD as the second surveillance mechanism to evolve.
References
External links
The Daily Transcript: No-go Decay
Chemistry of Life: non stop decay
Notes mRNA surveillance (Yang Xu)
Molecular genetics
RNA | MRNA surveillance | [
"Chemistry",
"Biology"
] | 2,644 | [
"Molecular genetics",
"Molecular biology"
] |
22,286,330 | https://en.wikipedia.org/wiki/Nikolay%20Enikolopov | Nikolay Sergeyevich Enikolopov (Enikolopyan) (March 13, 1924, Stepanakert – January 22, 1993, Berlin) was an Armenian-Russian scientist, Doctor of Chemistry, professor, academician of the Russian Academy of Sciences, and director of the Institute of Synthetic Polymers (currently the Nikolay Enikolopov Institute of Synthetic Polymers) of the Russian Academy of Sciences. In 1980 he was awarded the Lenin Prize.
In 1945 Enikolopov graduated from the Yerevan Polytechnic Institute. From 1946 he worked at the Moscow Institute of Chemical Physics of the Russian Academy of Sciences, where he headed the laboratory of polymers. From 1982 to 1993 he also headed a department at the Moscow Institute of Physics and Technology.
References
1924 births
1993 deaths
Soviet Armenians
Russian chemists
Soviet chemists
20th-century chemists
People from Stepanakert
Polymer scientists and engineers
National Polytechnic University of Armenia alumni
Academic staff of the Moscow Institute of Physics and Technology
Full Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Recipients of the Lenin Prize
Recipients of the Order of Lenin
Recipients of the Order of Friendship of Peoples | Nikolay Enikolopov | [
"Chemistry",
"Materials_science"
] | 228 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
22,286,701 | https://en.wikipedia.org/wiki/The%20Mysterious%20Numbers%20of%20the%20Hebrew%20Kings | The Mysterious Numbers of the Hebrew Kings (1951) is a reconstruction of the chronology of the kingdoms of Israel and Judah by Edwin R. Thiele. The book was originally his doctoral dissertation and is widely regarded as the definitive work on the chronology of Hebrew Kings. The book is considered the classic and comprehensive work in reckoning the accession of kings, calendars, and co-regencies, based on biblical and extra-biblical sources.
Biblical chronology
The chronology of the kings of Israel and Judah rests primarily on a series of reign lengths and cross references within the books of Kings and Chronicles, in which the accession of each king is dated in terms of the reign of his contemporary in either the southern Kingdom of Judah or the northern Kingdom of Israel, and fitting them into the chronology of other ancient civilizations.
However, some of the biblical cross references did not seem to match, so that a reign which is said to have lasted for 20 years results in a cross reference that would give a result of either 19 or 21 years. Thiele noticed that the cross references given during the long reign of King Asa of Judah had a cumulative error of 1 year for each succeeding reign of the kings of Israel: the first cross-reference resulted in an error of 1 year, the second gave an error of 2 years, the third of 3 years and so on. He explained this pattern as a result of two different methods of reckoning regnal years: the accession year method in one and the non-accession year method in the other. Under the accession year method, if a king died in the middle of a year, the period to the end of that year would be called the "accession year" of the new king, whose Year 1 would begin at the new year. Under the non-accession year method the period to the end of the year would be Year 1 of the new king and Year 2 would begin at the start of the new year. Israel appears to have used the non-accession method, while Judah used the accession method until Athaliah seized power in Judah, when Israel's non-accession method appears to have been adopted in Judah.
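The arithmetic behind the two reckoning systems can be illustrated with a short sketch (added here for illustration; the counting rules follow the paragraph above, while the year numbers and code are hypothetical):

def regnal_year(year, accession_year, method):
    # Regnal-year number assigned to calendar year `year` for a king
    # who came to the throne during `accession_year`.
    elapsed = year - accession_year
    if method == "non-accession":
        return elapsed + 1  # the partial first year already counts as year 1
    return elapsed          # accession-year method: the partial year is the "accession year"

# A hypothetical king acceding in calendar year 100: five full years later the
# non-accession reckoning calls it his year 6, the accession reckoning his year 5 --
# the one-year offset that accumulates reign by reign in the cross references.
assert regnal_year(105, 100, "non-accession") == 6
assert regnal_year(105, 100, "accession") == 5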
In addition, Thiele also concluded that Israel counted years starting in the spring month of Nisan, while Judah counted years starting in the autumn month of Tishri. The cumulative impact of differing new years and different methods of calculating reigns explained, to Thiele, most of the apparent inconsistencies in the cross references.
Unknown to Thiele when he first published his findings, these same conclusions that the northern kingdom used non-accession years and a spring New Year while the southern kingdom used accession years and a fall New Year had been discovered by Valerius Coucke of Belgium some years previously, a fact which Thiele acknowledges in his Mysterious Numbers.
Conclusions
Based on his conclusions, Thiele showed that the 14 years between Ahab and Jehu were really 12 years. This enabled him to date their reigns precisely, for Ahab is mentioned in the Kurkh Stele, which records the Assyrian advance into Syria/Israel at the Battle of Qarqar in 853 BC, and Jehu is mentioned on the Black Obelisk of Shalmaneser III paying tribute in 841 BC. As these two events are dated by Assyrian chronology as being 12 years apart, Ahab must have fought the Assyrians in his last year and Jehu paid tribute in his first year.
Thiele was able to reconcile the Biblical chronological data from the books of Kings and Chronicles with the exception of synchronisms between Hoshea of Israel and Hezekiah of Judah towards the end of the kingdom of Israel and reluctantly concluded that at that point the ancient authors had made a mistake. Oddly, it is at that precise point that he himself makes a mistake, by failing to realize that Hezekiah had a coregency with his father Ahaz, which explains the Hoshea/Hezekiah synchronisms. This correction has been supplied by subsequent writers who built on Thiele's work, including Thiele's colleague Siegfried Horn, T. C. Mitchell and Kenneth Kitchen, and Leslie McFall.
Thiele's method in arriving at his chronology has been contrasted with the analytical method employed by Julius Wellhausen and other scholars who follow some form of the documentary hypothesis. Wellhausen taught that the chronological data of the books of Kings and Chronicles were artificially put together at a date much later than the events they were ostensibly describing and were basically not historical. This was a necessary consequence of his a priori assumption that the biblical books as we have them today were the work of late-date editors who could not possibly have known the correct history of the times they were writing about. Theodore Robinson summarized this position as follows: "Wellhausen is surely right in believing that the synchronisms in Kings are worthless, being merely a late compilation from the actual figures given."
Wellhausen's methodology in interpreting the Scriptures and the history of Israel has therefore been classed by RK Harrison as a deductive approach; that is, one that starts with presuppositions and derives a historical reconstruction from those presuppositions. A necessary consequence of this approach has been that no general agreement has been reached on the chronology of the Hebrew kingdom period as calculated by authors who adopted this method. "The disadvantage of the deductive approach is that nothing is settled for certain; the results obtained are as diverse as the presuppositions of the scholars, since diverse presuppositions produce diverse results." In contrast, Thiele's method of determining the chronology of the Hebrew kings was based on induction, that is, making it a matter of first priority to determine the actual methods used by ancient scribes and court recorders in recording the years of kings, as described above. Thiele's inductive method, then, was based on inscriptional evidence from the ancient Near East, and not on the presuppositions followed by liberal scholarship. It is Thiele's method that has produced the determinative studies for the chronology of the kingdom period, not the presupposition-based method, so that even those interpreters who continue in late-date theories for the authorship of Scripture have recognized the credibility of Thiele's scholarship in determining the date for the division of the kingdom after the death of Solomon, as cited above. The work of Thiele and other textual scholars who have followed an inductive (evidence-based) approach is therefore significant in providing an alternative to the methods of the documentary hypothesis, and the success of that approach has been seen as theologically significant in supporting a high view of the inspiration of Scripture, particularly regarding its integrity in the abundant and complex historical data related to the kingdom period.
If the chronological data of the MT [ Masoretic text ] were not authentic—the actual dates and synchronisms for these various kings—then neither Thiele nor McFall nor anyone else could have constructed a chronology from them that in every case is faithful to the original texts and in every proven instance is consistent with Assyrian and Babylonian chronology. This mathematical demonstration should sit in judgment over the various theories of text formation: If a theory of text formation cannot explain how the chronological data of the MT has produced a chronology that in every respect seems authentic for the four centuries of the monarchic period, then that theory must be rejected as another example of a presupposition-based approach that cannot meet the rational criteria for credibility.
Chronology of the Hebrew kings according to Thiele's work
Reception
Thiele's chronological reconstruction has not been accepted by all scholars, nor has any other scholar's work in this field. Yet the work of Thiele and those who followed in his steps has achieved acceptance across a wider spectrum than that of any comparable chronology, so that Assyriologist DJ Wiseman wrote “The chronology most widely accepted today is one based on the meticulous study by Thiele,” and, more recently, Leslie McFall: “Thiele’s chronology is fast becoming the consensus view among Old Testament scholars, if it has not already reached that point.” Although criticism has been leveled at numerous specific points in his chronology, his work has won considerable praise even from those who disagree with his final conclusions. Nevertheless, even scholars sharing Thiele's religious convictions have maintained that there are weaknesses in his argument such as unfounded assumptions and assumed circular reasoning.
This citation, from a critic of Thiele's system, demonstrates the difference mentioned above between the deductive approach based on presuppositions and an inductive approach based on data, not a priori assumptions. Thiele is criticized here for basing his theories on data or evidence, not on presuppositions.
Despite these criticisms Thiele's methodological treatment remains the typical starting point of scholarly treatments of the subject, and his work is considered to have established the date of the division of the Israelite kingdom. This has found independent support in the work of J. Liver, Frank M. Cross, and others studying the chronology of the kings of Tyre. Thiele's work has found widespread recognition and use across various related scholarly disciplines. His date of 931 BCE, in conjunction with the synchronism between Rehoboam and Pharaoh Shishak in 1 Kings 14:25, is used by Egyptologists to give absolute dates to Egypt's 22nd Dynasty, and his work has also been used by scholars in other disciplines to establish Assyrian and Babylonian dates. Criticism of Thiele's reconstruction led to further research which has refined or even departed from his synthesis. Notable studies of this type include work by Tadmor and McFall.
Scholarly attitudes towards the Biblical record of the Israelite monarchies from the late nineteenth century to the mid-twentieth century were largely disparaging, treating the records as essentially fictional and dismissing the value of the regnal synchronisms. In contrast, modern scholarly attitudes to the monarchical chronology and synchronisms in 1 and 2 Kings has been far more positive subsequent to the work of Thiele and those who have developed his thesis further, a change in attitude to which recent archaeology has contributed.
References
External links
Tabular summary of Thiele's chronologies
1951 non-fiction books
20th-century history books
History books about Israel
History books about Judaism
Religious studies books
Chronology
Books about the ancient Near East | The Mysterious Numbers of the Hebrew Kings | [
"Physics"
] | 2,146 | [
"Spacetime",
"Chronology",
"Physical quantities",
"Time"
] |
22,287,094 | https://en.wikipedia.org/wiki/AC-262%2C536 | AC-262536 is a drug developed by Acadia Pharmaceuticals which acts as a selective androgen receptor modulator (SARM). Chemically it possesses endo-exo isomerism, with the endo form being the active form. It acts as a partial agonist for the androgen receptor with a Ki of 5 nM, and no significant affinity for any other receptors tested. In animal studies it produced a maximal effect of around 66% of the levator ani muscle weight increase of testosterone, but only around 27% of its maximal effect on prostate gland weight. It is an aniline SARM related to ACP-105 and vosilasarm (RAD140).
References
Abandoned drugs
Naphthalenes
Nitriles
Secondary alcohols
Selective androgen receptor modulators
Tropanes | AC-262,536 | [
"Chemistry"
] | 170 | [
"Pharmacology",
"Drug safety",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs",
"Nitriles",
"Abandoned drugs"
] |
22,287,990 | https://en.wikipedia.org/wiki/C2H3NO | The molecular formula C2H3NO (molar mass: 57.05 g/mol, exact mass: 57.0215 u) may refer to:
Glycolonitrile, also called hydroxyacetonitrile or formaldehyde cyanohydrin
Methyl isocyanate (MIC) | C2H3NO | [
"Chemistry"
] | 80 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
8,187,487 | https://en.wikipedia.org/wiki/Wick%27s%20theorem | Wick's theorem is a method of reducing high-order derivatives to a combinatorics problem. It is named after Italian physicist Gian Carlo Wick. It is used extensively in quantum field theory to reduce arbitrary products of creation and annihilation operators to sums of products of pairs of these operators. This allows for the use of Green's function methods, and consequently the use of Feynman diagrams in the field under study. A more general idea in probability theory is Isserlis' theorem.
In perturbative quantum field theory, Wick's theorem is used to quickly rewrite each time ordered summand in the Dyson series as a sum of normal ordered terms. In the limit of asymptotically free ingoing and outgoing states, these terms correspond to Feynman diagrams.
Definition of contraction
For two operators and we define their contraction to be
where denotes the normal order of an operator . Alternatively, contractions can be denoted by a line joining and , like .
We shall look in detail at four special cases where and are equal to creation and annihilation operators. For particles we'll denote the creation operators by and the annihilation operators by .
They satisfy the commutation relations for bosonic operators , or the anti-commutation relations for fermionic operators where denotes the Kronecker delta and denotes the identity operator.
We then have
where .
These relationships hold true for bosonic operators or fermionic operators because of the way normal ordering is defined.
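The formulas stripped from this section presumably expressed the standard definitions; a reconstruction in LaTeX, with notation chosen here rather than recovered from the original, is
\[
\overline{AB} := AB - \mathopen{:}AB\mathclose{:},
\]
where the overline marks the contracted pair, and, for annihilation operators a_i (bosonic) or b_i (fermionic) and the corresponding creation operators,
\[
\overline{a_i\,a_j^\dagger} = [a_i, a_j^\dagger] = \delta_{ij}, \qquad
\overline{b_i\,b_j^\dagger} = \{b_i, b_j^\dagger\} = \delta_{ij}, \qquad
\overline{a_i^\dagger a_j} = \overline{a_i\,a_j} = \overline{a_i^\dagger a_j^\dagger} = 0 .
\]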
Examples
We can use contractions and normal ordering to express any product of creation and annihilation operators as a sum of normal ordered terms. This is the basis of Wick's theorem. Before stating the theorem fully we shall look at some examples.
Suppose and are bosonic operators satisfying the commutation relations:
where , denotes the commutator, and is the Kronecker delta.
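The stripped relations are presumably the standard bosonic ones (notation assumed here, not recovered from the original):
\[
[\,\hat b_i, \hat b_j^\dagger\,] = \delta_{ij}, \qquad [\,\hat b_i, \hat b_j\,] = [\,\hat b_i^\dagger, \hat b_j^\dagger\,] = 0 .
\]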
We can use these relations, and the above definition of contraction, to express products of and in other ways.
Example 1
Note that we have not changed but merely re-expressed it in another form as
Example 2
Example 3
In the last line we have used different numbers of symbols to denote different contractions. By repeatedly applying the commutation relations it takes a lot of work to express in the form of a sum of normally ordered products. It is an even lengthier calculation for more complicated products.
Luckily Wick's theorem provides a shortcut.
Statement of the theorem
A product of creation and annihilation operators can be expressed as
In other words, a string of creation and annihilation operators can be rewritten as the normal-ordered product of the string, plus the normal-ordered product after all single contractions among operator pairs, plus all double contractions, etc., plus all full contractions.
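Schematically, and in notation assumed here rather than taken from the original, the prose statement above corresponds to
\[
\hat A \hat B \hat C \hat D \cdots
= \mathopen{:}\hat A \hat B \hat C \hat D \cdots\mathclose{:}
+ \sum_{\text{single}} \mathopen{:}\overline{\hat A \hat B}\,\hat C \hat D \cdots\mathclose{:}
+ \sum_{\text{double}} \mathopen{:}\overline{\hat A \hat B}\;\overline{\hat C \hat D} \cdots\mathclose{:}
+ \cdots ,
\]
where each sum runs over all distinct ways of contracting the indicated number of operator pairs.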
Applying the theorem to the above examples provides a much quicker method to arrive at the final expressions.
A warning: In terms on the right hand side containing multiple contractions care must be taken when the operators are fermionic. In this case an appropriate minus sign must be introduced according to the following rule: rearrange the operators (introducing minus signs whenever the order of two fermionic operators is swapped) to ensure the contracted terms are adjacent in the string. The contraction can then be applied (See "Rule C" in Wick's paper).
Example:
If we have two fermions () with creation and annihilation operators and () then
Note that the term with contractions of the two creation operators and of the two annihilation operators is not included because their contractions vanish.
Proof
We use induction to prove the theorem for bosonic creation and annihilation operators. The base case is trivial, because there is only one possible contraction:
In general, the only non-zero contractions are between an annihilation operator on the left and a creation operator on the right. Suppose that Wick's theorem is true for operators , and consider the effect of adding an Nth operator to the left of to form . By Wick's theorem applied to operators, we have:
is either a creation operator or an annihilation operator. If is a creation operator, all above products, such as , are already normal ordered and require no further manipulation. Because is to the left of all annihilation operators in , any contraction involving it will be zero. Thus, we can add all contractions involving to the sums without changing their value. Therefore, if is a creation operator, Wick's theorem holds for .
Now, suppose that is an annihilation operator. To move from the left-hand side to the right-hand side of all the
products, we repeatedly swap with the operator immediately right of it (call it ), each time applying to account for noncommutativity. Once we do this, all terms will be normal ordered. All terms added to the sums by pushing through the products correspond to additional contractions involving . Therefore, if is an annihilation operator, Wick's theorem holds for .
We have proved the base case and the induction step, so the theorem is true. By introducing the appropriate minus signs, the proof can be extended to fermionic creation and annihilation operators. The theorem applied to fields is proved in essentially the same way.
Wick's theorem applied to fields
The correlation function that appears in quantum field theory can be expressed by a contraction on the field operators:
where the operators are the parts that do not annihilate the vacuum state . Which means that . This means that is a contraction over . Note that the contraction of a time-ordered string of two field operators is a C-number.
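A plausible reconstruction of the missing formulas, in standard notation (an assumption, not the article's exact symbols): the free field is split as
\[
\phi(x) = \phi^{+}(x) + \phi^{-}(x), \qquad \phi^{+}(x)\,|0\rangle = 0 ,
\]
so only the creation part \phi^{-} fails to annihilate the vacuum, and the contraction of two time-ordered fields is the Feynman propagator,
\[
\overline{\phi(x)\phi(y)} = \langle 0|\,T\,\phi(x)\phi(y)\,|0\rangle = D_F(x-y) ,
\]
which is indeed a C-number.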
In the end, we arrive at Wick's theorem:
The T-product of a time-ordered free fields string can be expressed in the following manner:
Applying this theorem to S-matrix elements, we discover that normal-ordered terms acting on vacuum state give a null contribution to the sum. We conclude that m is even and only completely contracted terms remain.
where p is the number of interaction fields (or, equivalently, the number of interacting particles) and n is the development order (or the number of vertices of interaction). For example, if
This is analogous to the corresponding Isserlis' theorem in statistics for the moments of a Gaussian distribution.
Note that this discussion is in terms of the usual definition of normal ordering which is appropriate for the vacuum expectation values (VEVs) of fields. (Wick's theorem provides a way of expressing VEVs of n fields in terms of VEVs of two fields.) There are many other possible definitions of normal ordering, and Wick's theorem is valid irrespective of which is used. However, Wick's theorem only simplifies computations if the definition of normal ordering used is changed to match the type of expectation value wanted. That is, we always want the expectation value of the normal-ordered product to be zero. For instance, in thermal field theory a different type of expectation value, a thermal trace over the density matrix, requires a different definition of normal ordering.
See also
Isserlis' theorem
References
Further reading
(§4.3)
(Chapter 13, Sec c)
Eponymous theorems of physics
Quantum field theory
Scattering theory | Wick's theorem | [
"Physics",
"Chemistry"
] | 1,497 | [
"Quantum field theory",
"Scattering theory",
"Equations of physics",
"Quantum mechanics",
"Eponymous theorems of physics",
"Scattering",
"Physics theorems"
] |
8,199,558 | https://en.wikipedia.org/wiki/Applied%20spectroscopy | Applied spectroscopy is the application of various spectroscopic methods for the detection and identification of different elements or compounds to solve problems in fields like forensics, medicine, the oil industry, atmospheric chemistry, and pharmacology.
Spectroscopic methods
A common spectroscopic method for analysis is Fourier transform infrared spectroscopy (FTIR), where chemical bonds can be detected through their characteristic infrared absorption frequencies or wavelengths. These absorption characteristics make infrared analyzers an invaluable tool in geoscience, environmental science, and atmospheric science. For instance, atmospheric gas monitoring has been facilitated by the development of commercially available gas analyzers which can distinguish between carbon dioxide, methane, carbon monoxide, oxygen, and nitric oxide.
Ultraviolet (UV) spectroscopy is used where strong absorption of UV radiation occurs in a substance. Such groups are known as chromophores and include aromatic groups, conjugated system of bonds, carbonyl groups and so on. Nuclear magnetic resonance spectroscopy detects hydrogen atoms in specific environments, and complements both infrared (IR) spectroscopy and UV spectroscopy. The use of Raman spectroscopy is growing for more specialist applications.
There are also derivative methods such as infrared microscopy, which allows very small areas to be analyzed in an optical microscope.
One method of elemental analysis that is important in forensic analysis is energy-dispersive X-ray spectroscopy (EDX) performed in the environmental scanning electron microscope (ESEM). The method involves analysis of back-scattered X-rays from the sample as a result of interaction with the electron beam. Automated EDX is further used in a range of automated mineralogy techniques for identification and textural mapping.
Sample preparation
In all three spectroscopic methods, the sample usually needs to be present in solution, which may present problems during forensic examination because it necessarily involves sampling solid from the object to be examined.
In FTIR, three types of samples can be analyzed: solution (KBr), powder, or film. A solid film is the easiest and most straightforward sample type to test.
Analysis of polymers
Many polymer degradation mechanisms can be followed using IR spectroscopy, such as UV degradation and oxidation, among many other failure modes.
UV degradation
Many polymers are attacked by UV radiation at vulnerable points in their chain structures. Thus, polypropylene suffers severe cracking in sunlight unless anti-oxidants are added. The point of attack occurs at the tertiary carbon atom present in every repeat unit, causing oxidation and finally chain breakage. Polyethylene is also susceptible to UV degradation, especially those variants that are branched polymers such as low-density polyethylene. The branch points are tertiary carbon atoms, so polymer degradation starts there and results in chain cleavage and embrittlement. In one example, carbonyl groups were readily detected by IR spectroscopy from a cast thin film. The product was a road cone that had cracked in service, and many similar cones also failed because an anti-UV additive had not been used.
Oxidation
Polymers are susceptible to attack by atmospheric oxygen, especially at elevated temperatures encountered during processing to shape. Many process methods such as extrusion and injection moulding involve pumping molten polymer into tools, and the high temperatures needed for melting may result in oxidation unless precautions are taken. For example, a forearm crutch suddenly snapped and the user was severely injured in the resulting fall. The crutch had fractured across a polypropylene insert within the aluminium tube of the device, and IR spectroscopy of the material showed that it had oxidised, possibly as a result of poor moulding.
Oxidation is usually relatively easy to detect, owing to the strong absorption by the carbonyl group in the spectrum of polyolefins. Polypropylene has a relatively simple spectrum, with few peaks at the carbonyl position (like polyethylene). Oxidation tends to start at tertiary carbon atoms because free radicals here are more stable, so last longer and are attacked by oxygen. The carbonyl group can be further oxidised to break the chain, so weakening the material by lowering the molecular weight, and cracks start to grow in the regions affected.
Ozonolysis
The reaction occurring between double bonds and ozone is known as ozonolysis, when one molecule of the gas reacts with the double bond.
The immediate result is formation of an ozonide, which then decomposes rapidly so that the double bond is cleaved. This is the critical step in chain breakage when polymers are attacked. The strength of polymers depends on the chain molecular weight or degree of polymerization: The higher the chain length the greater the mechanical strength (such as tensile strength). By cleaving the chain, the molecular weight drops rapidly and there comes a point when it has little strength whatsoever, and a crack forms. Further attack occurs in the freshly exposed crack surfaces and the crack grows steadily until it completes a circuit and the product separates or fails. In the case of a seal or a tube, failure occurs when the wall of the device is penetrated.
The carbonyl end groups that are formed are usually aldehydes or ketones, which can oxidise further to carboxylic acids. The net result is a high concentration of elemental oxygen on the crack surfaces, which can be detected using EDX in the ESEM. For example, two EDX spectra were obtained during an investigation into ozone cracking of diaphragm seals in a semiconductor fabrication factory. The EDX spectrum of the crack surface shows the high-oxygen peak compared with a constant sulfur peak. In contrast, the EDX spectrum of the unaffected elastomer surface spectrum shows a relatively low-oxygen peak compared with the sulfur peak.
See also
References
Forensic Materials Engineering: Case Studies by Peter Rhys Lewis, Colin Gagg, Ken Reynolds, CRC Press (2004).
Peter R Lewis and Sarah Hainsworth, Fuel Line Failure from stress corrosion cracking, Engineering Failure Analysis,13 (2006) 946-962.
J. Workman and Art Springsteen (Eds.), Applied Spectroscopy: A Compact Reference for Practitioners, Academic Press (1998) .
Polymer chemistry
Spectroscopy
Analytical chemistry | Applied spectroscopy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,248 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Materials science",
"Polymer chemistry",
"nan",
"Spectroscopy"
] |
23,787,036 | https://en.wikipedia.org/wiki/Pl%C3%BCcker%27s%20conoid | In geometry, Plücker's conoid is a ruled surface named after the German mathematician Julius Plücker. It is also called a conical wedge or cylindroid; however, the latter name is ambiguous, as "cylindroid" may also refer to an elliptic cylinder.
Plücker's conoid is the surface defined by the function of two variables:
This function has an essential singularity at the origin.
By using cylindrical coordinates in space, we can write the above function into parametric equations
Thus Plücker's conoid is a right conoid, which can be obtained by rotating a horizontal line about the z-axis with an oscillatory motion (with period 2π) along a segment of the axis (Figure 4).
A generalization of Plücker's conoid is given by the parametric equations
where n denotes the number of folds in the surface. The difference is that the period of the oscillatory motion along the z-axis is 2π/n (Figure 5).
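The equations stripped from this entry are, in the standard presentation (a reconstruction, not the article's exact wording),
\[
z = \frac{2xy}{x^{2}+y^{2}},
\]
which in cylindrical coordinates becomes the parametrization
\[
x = v\cos u, \qquad y = v\sin u, \qquad z = \sin 2u ,
\]
and the n-fold generalization replaces the last equation by z = \sin(nu).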
See also
Ruled surface
Right conoid
Wallis's conical edge
References
A. Gray, E. Abbena, S. Salamon, Modern differential geometry of curves and surfaces with Mathematica, 3rd ed. Boca Raton, Florida:CRC Press, 2006. ()
Vladimir Y. Rovenskii, Geometry of curves and surfaces with MAPLE ()
External links
Surfaces
Geometric shapes
Eponyms in geometry | Plücker's conoid | [
"Mathematics"
] | 290 | [
"Eponyms in geometry",
"Geometric shapes",
"Mathematical objects",
"Geometric objects",
"Geometry"
] |
23,788,022 | https://en.wikipedia.org/wiki/Tape%20correction%20%28surveying%29 | In surveying, tape correction(s) refer(s) to correcting measurements for the effect of slope angle, expansion or contraction due to temperature, and the tape's sag, which varies with the applied tension. Not correcting for these effects gives rise to systematic errors, i.e. effects which act in a predictable manner and therefore can be corrected by mathematical methods.
Correction due to slope
Where:
L = inclined length measured;
A = inclined angle.
When distances are measured along the slope, the equivalent horizontal distance may be determined by applying a slope correction.
The vertical slope angle of the length measured must be measured. (Refer to the figure on the other side) Thus,
For gentle slopes,
For steep slopes,
For very steep slopes,
Or, more simply,
Where:
is the correction of measured slope distance due to slope;
is the angle between the measured slope line and horizontal line;
s is the measured slope distance.
d is the horizontal distance.
The correction is subtracted from to obtain the equivalent horizontal distance on the slope line:
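The stripped slope formulas presumably reduce to the exact relation below (a reconstruction; the symbol h for the difference in elevation between the tape ends is introduced here and does not appear in the surviving text):
\[
d = s\cos A, \qquad C_h = s - d = s\,(1-\cos A) \approx \frac{h^{2}}{2s}, \qquad h = s\sin A ,
\]
where the approximation is the usual gentle-slope form and steeper slopes require higher-order terms or the exact expression.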
Correction due to temperature
When measuring or laying out distances, the standard temperature of the tape and the temperature of the tape at time of measurement are usually different. A difference in temperature will cause the tape to lengthen or shorten, so the measurement taken will not be exactly correct. A correction can be applied to the measured length to obtain the correct length.
The correction of the tape length due to change in temperature is given by:
Where:
is the correction to be applied to the tape due to temperature;
C is the coefficient of thermal expansion of the metal that forms the tape;
L is the length of the tape or length of the line measured.
is the observed temperature of the tape at the time of measurement;
is the standard temperature, when the tape is at the correct length, often 20 °C;
The correction is added to to obtain the corrected distance:
For common tape measurements, the tape used is a steel tape with coefficient of thermal expansion C equal to 0.000,011,6 units per unit length per degree Celsius change. This means that the tape changes length by 1.16 mm per 10 m tape per 10 °C change from the standard temperature of the tape. For a 30 meter long tape with standard temperature of 20 °C used at 40 °C, the change in length is 7 mm over the length of the tape.
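A short computational check, assuming the stripped formula was the standard correction C × (observed temperature − standard temperature) × L (an assumption, though it reproduces the figures quoted above):

def temperature_correction(length_m, t_observed, t_standard=20.0, c=0.0000116):
    # Correction in meters to add to a length measured with a steel tape.
    return c * (t_observed - t_standard) * length_m

print(temperature_correction(30.0, 40.0))  # 0.00696 m, about 7 mm for a 30 m tape at 40 degC
print(temperature_correction(10.0, 30.0))  # 0.00116 m, i.e. 1.16 mm per 10 m per 10 degC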
Correction due to sag
A tape not supported along its length will sag and form a catenary between end supports. According to the section on tension correction, some tapes are calibrated for sag at standard tension. These tapes will require complex sag and tension corrections if used at non-standard tensions. The correction due to sag must be calculated for each unsupported stretch separately and is given by:
Where:
is the correction applied to the tape due to sag; meters;
is the weight of the tape per unit length; newtons per meter;
L is the length between the two ends of the catenary; meters;
P is the tension or pull applied to the tape; newtons.
A tape held in catenary will record a value larger than the correct measurement. Thus, the correction is subtracted from to obtain the corrected distance:
Note that the weight of the tape per unit length is equal to the weight of the tape divided by the length of the tape:
so:
Therefore, we can rewrite the formula for correction due to sag as:
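The two forms of the sag correction referred to above are presumably the standard surveying expressions (a reconstruction):
\[
C_s = \frac{w^{2} L^{3}}{24 P^{2}} = \frac{W^{2} L}{24 P^{2}}, \qquad W = wL ,
\]
with w the weight per unit length, W the total weight of the suspended span, L the unsupported length and P the applied tension.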
Derivation (sag)
The general formula for a catenary formed by a tape supported only at its ends is
Here, is the gravitational acceleration. The arc length between two support points at and is found by usual methods via integration:
For convenience set . The integrand is simplified as follows using hyperbolic function identities:
The tape length is then found by integrating:
Now the correction for tape sag is the difference between the actual span between the supports, , and the arc length of the tape's catenary, . Call this correction . The absolute value of this correction is above, the amount you would subtract from the tape measurement to get the true span distance.
A Taylor series expansion of in terms of the quantity is desired to give a good first approximation to the correction. In fact, the first nonvanishing term in the Taylor series is cubic in , and the next nonvanishing term is to the fifth power of L; thus, a series expansion for is reasonable. To this end, we need to find an expression for that contains but not . We already have an expression for in terms of , but now need to find the inverse function (for in terms of ):
Evaluating at yields zero, so there is no zero-order term in the Taylor series. The first derivative of this function with respect to L is
Evaluated at L=0, it vanishes and so does not contribute a Taylor series term. The second derivative of is
Again, when evaluated at L=0 it vanishes. When evaluated at L=0, the third derivative survives, however.
Thus, the first surviving term in the Taylor series is:
Notice that the variable here is the tension on the cable, whereas above, is the mass whose gravitational force (mass times gravitational acceleration) equals the tension on the cable. The only conversion necessary then is to take here and equate it to above. Also, this formula is the tape sag correction to be added to the measured distance, so the negative sign in front can be removed and the tape sag correction can be made instead by subtracting the absolute value as is done in the preceding section.
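A compact reconstruction of the stripped derivation, written with w for the weight of the tape per unit length and P for the tension as in the sag section above (the original symbols are not recoverable, and to leading order the distinction between the span and the tape length does not affect the result):
\[
y(x) = \frac{P}{w}\cosh\frac{wx}{P}, \qquad
S = \int_{-L/2}^{L/2}\sqrt{1+y'(x)^{2}}\,dx = \frac{2P}{w}\sinh\frac{wL}{2P} ,
\]
and expanding the hyperbolic sine in a Taylor series gives
\[
S - L = \frac{2P}{w}\left[\frac{wL}{2P} + \frac{1}{6}\left(\frac{wL}{2P}\right)^{3} + \cdots\right] - L
\approx \frac{w^{2}L^{3}}{24P^{2}} ,
\]
which is cubic in the length, with the next term of fifth order, as stated in the text, and agrees with the sag formula quoted earlier.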
Correction due to tension
Some tapes are already calibrated to account for the sag at a standard tension. In this case, errors arise when the tape is pulled at a tension which differs from the standard tension used at standardization. The tape stretches less than its standard length when a tension less than the standard tension is applied, so recorded distances come out too long.
A tape stretches in an elastic manner until it reaches its elastic limit, when it will deform permanently and ruin the tape.
The correction due to tension is given by:
Where:
is the elongation in tape length due to pull; or the correction to be applied due to applying a tension which differs from standard tension; meters;
is the tension applied to the tape during measurement; newtons;
is the standard tension, when the tape is the correct length, often 50 newtons; newtons;
L is the measured or erroneous length of the line; meters
A is the cross-sectional area of the tape; square centimeters;
E is the modulus of elasticity of the tape material; newtons per square centimeter;
The correction is added to to obtain the corrected distance:
The value for A is given by:
Where:
W is the total weight of the tape; kilograms;
is the unit weight of the tape; kilograms per cubic centimeter.
For steel tapes, the value for is .
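The stripped tension formulas are presumably the standard ones (a reconstruction; the numerical unit weight of steel is an assumed value, since the figure did not survive extraction):
\[
C_p = \frac{(P_m - P_s)\,L}{A\,E}, \qquad A = \frac{W}{L\,\gamma} ,
\]
where P_m is the applied tension, P_s the standard tension and \gamma the unit weight of the tape material, commonly taken for steel as roughly 7.85 \times 10^{-3} kg/cm^3.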
Correction due to incorrect tape length
Manufacturers of measuring tapes do not usually guarantee the exact length of tapes, and standardization is a process where a standard temperature and tension are determined at which the tape is the exact length. The nominal length of tapes can be affected by physical imperfections, stretching or wear. Constant use of tapes causes wear; tapes can become kinked and may be improperly repaired when breaks occur.
The correction due to tape length is given by:
Where:
CL is the corrected length of the line to be measured or laid out;
ML is the measured length or length to be laid out;
NL is the nominal length of the tape as specified by its mark;
KL is a known length;
Corr is the ratio of measured to actual length, determined by measuring a known length.
In the U.S., some tapes come with United States Bureau of Standards certifications establishing the correction needed per 100' of tape.
Note that incorrect tape length introduces a systematic error, so the tape must be calibrated periodically.
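A minimal sketch of the correction described in this section (added for illustration; the scaling convention and the numbers are assumptions, since the article's own formula did not survive extraction):

def corrected_length(measured, nominal_tape_length, actual_tape_length):
    # If the tape is actually longer than its nominal marking, recorded
    # distances come out short of the truth and are scaled up here, and
    # vice versa for a tape that is too short.
    return measured * (actual_tape_length / nominal_tape_length)

# A nominally 30 m tape found to be 30.008 m long against a known standard:
print(corrected_length(250.0, 30.0, 30.008))  # about 250.067 m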
See also
Local attraction
References
Mostly in pdf:
Originally published by Baguio Research and Publishing Center, Baguio, Philippines in 1981.
Surveying
Measurement
Civil engineering | Tape correction (surveying) | [
"Physics",
"Mathematics",
"Engineering"
] | 1,651 | [
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Construction",
"Surveying",
"Civil engineering"
] |
23,789,332 | https://en.wikipedia.org/wiki/2%2C3%2C7%2C8-Tetrachlorodibenzodioxin | 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) is a polychlorinated dibenzo-p-dioxin (sometimes shortened, though inaccurately, to simply 'dioxin') with the chemical formula C12H4Cl4O2. Pure TCDD is a colorless solid with no distinguishable odor at room temperature. It is usually formed as an unwanted product in burning processes of organic materials or as a side product in organic synthesis.
TCDD is the most potent compound (congener) of its series (polychlorinated dibenzodioxins, known as PCDDs or simply dioxins) and became known as a contaminant in Agent Orange, an herbicide used in the Vietnam War. TCDD was released into the environment in the Seveso disaster. It is a persistent organic pollutant.
Biological activity in humans and animals
TCDD and dioxin-like compounds act via a specific receptor present in all cells: the aryl hydrocarbon (AH) receptor. This receptor is a transcription factor which is involved in the expression of genes; it has been shown that high doses of TCDD either increase or decrease the expression of several hundred genes in rats. Genes of enzymes activating the breakdown of foreign and often toxic compounds are classic examples of such genes (enzyme induction). TCDD increases the enzymes breaking down, e.g., carcinogenic polycyclic hydrocarbons such as benzo(a)pyrene.
These polycyclic hydrocarbons also activate the AH receptor, but less than TCDD and only temporarily. Even many natural compounds present in vegetables cause some activation of the AH receptor. This phenomenon can be viewed as adaptive and beneficial, because it protects the organism from toxic and carcinogenic substances. Excessive and persistent stimulation of AH receptor, however, leads to a multitude of adverse effects.
The physiological function of the AH receptor has been the subject of continuous research. One obvious function is to increase the activity of enzymes breaking down foreign chemicals or normal chemicals of the body as needed. There seem to be many other functions, however, related to the development of various organs and the immune systems or other regulatory functions. The AH receptor is phylogenetically highly conserved, with a history of at least 600 million years, and is found in all vertebrates. Its ancient analogs are important regulatory proteins even in more primitive species. In fact, knock-out animals with no AH receptor are prone to illness and developmental problems. Taken together, this implies the necessity of a basal degree of AH receptor activation to achieve normal physiological function.
Toxicity in humans
In 2000, the Expert Group of the World Health Organization considered developmental toxicity as the most pertinent risk of dioxins to human beings. Because people are usually exposed simultaneously to several dioxin-like chemicals, a more detailed account is given at dioxins and dioxin-like compounds.
Developmental effects
In Vietnam and the United States, teratogenic effects (birth defects) were observed in children of people who were exposed to Agent Orange or 2,4,5-T that contained TCDD as an impurity from the production process. However, there has been some uncertainty about the causal link between Agent Orange/dioxin exposure and these defects. In 2006, a meta-analysis indicated a large amount of heterogeneity between studies and emphasized a lack of consensus on the issue. Stillbirths, cleft palate, and neural tube defects, with spina bifida, were the most statistically significant defects. Later, some tooth defects and borderline neurodevelopmental effects were reported. After the Seveso accident, tooth development defects, a changed sex ratio and decreased sperm quality have been noted. Various developmental effects have been clearly shown after high mixed exposures to dioxins and dioxin-like compounds, most dramatically in the Yusho and Yu-cheng catastrophes, in Japan and Taiwan, respectively.
Cancer
It is largely agreed that TCDD is not directly mutagenic or genotoxic. Its main action is cancer promotion; it promotes the carcinogenicity initiated by other compounds. Very high doses may, in addition, cause cancer indirectly; one of the proposed mechanisms is oxidative stress and the subsequent oxygen damage to DNA. There are other explanations such as endocrine disruption or altered signal transduction. The endocrine disrupting activities seem to be dependent on life stage, being anti-estrogenic when estrogen is present (or in high concentration) in the body, and estrogenic in the absence of estrogen.
TCDD was classified by the International Agency for Research on Cancer (IARC) as a carcinogen for humans (group 1). In the occupational cohort studies available for the classification, the risk was weak and borderline detectable, even at very high exposures. Therefore, the classification was, in essence, based on animal experiments and mechanistic considerations. This was criticized as a deviation from IARC's 1997 classification rules. The main problem with IARC classification is that it only assesses qualitative hazard, i.e. carcinogenicity at any dose, and not the quantitative risk at different doses. According to a 2006 Molecular Nutrition & Food Research article, there were debates on whether TCDD was carcinogenic only at high doses which also cause toxic damage of tissues. A 2011 review concluded that, after 1997, further studies did not support an association between TCDD exposure and cancer risk. One of the problems is that in all occupational studies the subjects have been exposed to a large number of chemicals, not only TCDD. By 2011, it was reported that studies, including the update of the Vietnam veteran studies from Operation Ranch Hand, had concluded that after 30 years the results did not provide evidence of disease. On the other hand, the latest studies on the Seveso population support TCDD carcinogenicity at high doses.
In 2004, an article in the International Journal of Cancer provided some direct epidemiological evidence that TCDD or other dioxins are not causing soft-tissue sarcoma at low doses, although this cancer has been considered typical for dioxins. There was in fact a trend of cancer to decrease. This is called a J-shape dose-response, low doses decrease the risk, and only higher doses increase the risk, according to a 2005 article in the journal Dose-Response.
Safety recommendations
The Joint FAO/WHO Expert Committee on Food Additives (JECFA) derived in 2001 a provisional tolerable monthly intake (PTMI) of 70 pg TEQ/kg body weight. The United States Environmental Protection Agency (EPA) established an oral reference dose (RfD) of 0.7 pg/kg b.w. per day for TCDD (see discussion on the differences).
According to the Aspen Institute, in 2011: The general environmental limit in most countries is 1,000 ppt TEq in soils and 100 ppt in sediment. Most industrialized countries have dioxin concentrations in soils of less than 12 ppt. The U.S. Agency for Toxic Substance and Disease Registry has determined that levels higher than 1,000 ppt TEq in soil require intervention, including research, surveillance, health studies, community and physician education, and exposure investigation. The EPA is considering reducing these limits to 72 ppt TEq. This change would significantly increase the potential volume of contaminated soil requiring treatment.
Animal toxicology
By far most information on toxicity of dioxin-like chemicals is based on animal studies utilizing TCDD. Almost all organs are affected by high doses of TCDD. In short-term toxicity studies in animals, the typical effects are anorexia and wasting, and even after a huge dose animals die only 1 to 6 weeks after the TCDD administration. Seemingly similar species have varying sensitivities to acute effects: lethal dose for a guinea pig is about 1 μg/kg, but to a hamster it is more than 1,000 μg/kg. A similar difference can be seen even between two different rat strains. Various hyperplastic (overgrowth) or atrophic (wasting away) responses are seen in different organs, thymus atrophy is very typical in several animal species. TCDD also affects the balance of several hormones. In some species, but not in all, severe liver toxicity is seen. Taking into account the low doses of dioxins in the present human population, only two types of toxic effects have been considered to cause a relevant risk to humans: developmental effects and cancer.
Developmental effects
Developmental effects occur at very low doses in animals. They include frank teratogenicity such as cleft palate and hydronephrosis. Development of some organs may be even more sensitive: very low doses perturb the development of sexual organs in rodents and the development of teeth in rats. The latter is important in that tooth deformities were also seen after the Seveso accident, and possibly after prolonged breast-feeding of babies in the 1970s and 1980s, when dioxin concentrations in Europe were about ten times higher than at present.
Cancer
Cancers can be induced in animals at many sites. At sufficiently high doses, TCDD has caused cancer in all animals tested. The most sensitive endpoint is liver cancer in female rats, and this has long been a basis for risk assessment. The dose-response of TCDD in causing cancer does not seem to be linear, and there appears to be a threshold below which it causes no cancer. TCDD is not mutagenic or genotoxic; in other words, it is not able to initiate cancer. Its cancer risk is instead based on the promotion of cancers initiated by other compounds, or on indirect effects such as disturbing the body's defence mechanisms, e.g. by preventing apoptosis, or programmed death, of altered cells. Carcinogenicity is associated with tissue damage, and it is now often viewed as secondary to tissue damage.
TCDD may in some conditions potentiate the carcinogenic effects of other compounds. An example is benzo(a)pyrene, which is metabolized in two steps: oxidation and conjugation. Oxidation produces epoxide carcinogens that are rapidly detoxified by conjugation, but some molecules may escape to the nucleus of the cell and bind to DNA, causing a mutation that results in cancer initiation. When TCDD increases the activity of oxidative enzymes more than that of conjugation enzymes, the epoxide intermediates may increase, increasing the possibility of cancer initiation. Thus, a beneficial activation of detoxifying enzymes may lead to deleterious side effects.
Sources
TCDD has never been produced commercially except as a pure chemical for scientific research. It is, however, formed as a synthesis side product when producing certain chlorophenols or chlorophenoxy acid herbicides. It may also be formed along with other polychlorinated dibenzodioxins and dibenzofurans in any burning of hydrocarbons where chlorine is present, especially if certain metal catalysts such as copper are also present. Usually a mixture of dioxin-like compounds is produced; a more thorough treatment is therefore given under dioxins and dioxin-like compounds.
The greatest production occurs from waste incineration, metal production, and fossil-fuel and wood combustion. Dioxin production can usually be reduced by increasing the combustion temperature. Total U.S. emissions of PCDD/Fs were reduced from ca. 14 kg TEq in 1987 to 1.4 kg TEq in 2000.
History
TCDD was first synthesized in the laboratory in 1957 by Wilhelm Sandermann, and he also discovered the effects of the compound.
Cases of exposure
There have been numerous incidents where people have been exposed to high doses of TCDD.
In 1953, an accident occurred at BASF during the chlorination of diphenyl oxides, as a result of which several workers developed severe chloracne. Similar cases had occurred 6 years earlier in the USA and in 1952, 1954 and 1956 at the Boehringer Ingelheim company.
In 1976, thousands of inhabitants of Seveso, Italy were exposed to TCDD after an accidental release of several kilograms of TCDD from a pressure tank. Many animals died, and high concentrations of TCDD, up to 56,000 pg/g of fat, were noted especially in children playing outside and eating local food. The acute effects were limited to about 200 cases of chloracne. Long-term effects seem to include a slight excess of multiple myeloma and myeloid leukaemia, as well as some developmental effects such as disturbed development of teeth and an excess of girls born to fathers who were exposed as children. Several other long-term effects have been suspected, but the evidence is not very strong.
In Times Beach, Missouri, several hundred people were poisoned by extremely high concentrations of TCDD after Russell Martin Bliss sprayed TCDD-contaminated waste oil on dusty roads to keep dust clouds down. Bliss had obtained the waste oil from NEPACCO, a company that produced Agent Orange. No one was ever charged in relation to the incident, and the city of Times Beach was abandoned and disincorporated following an investigation by the CDC and EPA. This is regarded as the single largest contamination of a civilian area by TCDD in United States history.
In Vienna, two women were poisoned at their workplace in 1997, and the concentration measured in one of them was the highest ever recorded in a human being, 144,000 pg/g of fat. This is about 100,000 times the concentration in most people today and about 10,000 times the sum of all dioxin-like compounds in young people today. They survived but suffered from severe chloracne for several years. The poisoning likely happened in October 1997 but was not discovered until April 1998. At the institute where the women worked as secretaries, high concentrations of TCDD were found in one of the labs, suggesting that the compound had been produced there. The police investigation failed to find clear evidence of a crime, and no one was ever prosecuted. Aside from malaise and amenorrhea, there were few other symptoms or abnormal laboratory findings.
In 2004, presidential candidate Viktor Yushchenko of Ukraine was poisoned with a large dose of TCDD. His blood TCDD concentration was measured at 108,000 pg/g of fat, the second highest ever recorded. This concentration implies a dose exceeding 2 mg, or 25 μg/kg of body weight. He suffered from chloracne for many years, but after the initial malaise, other symptoms and abnormal laboratory findings were few.
An area of polluted land in Italy, known as the Triangle of Death, is contaminated with TCDD from years of illegal waste disposal by organized crime.
See also
Dioxins and dioxin-like compounds
Toxic Equivalency
References
External links
U.S. National Library of Medicine: Hazardous Substances Databank – 2,3,7,8-Tetrachlorodibenzodioxin
Dioxin synopsis
Dioxins
CDC – NIOSH Pocket Guide to Chemical Hazards
Chloroarenes
Dibenzodioxins
IARC Group 1 carcinogens
Blood agents | 2,3,7,8-Tetrachlorodibenzodioxin | [
"Chemistry"
] | 3,133 | [
"Blood agents",
"Chemical weapons"
] |
23,795,049 | https://en.wikipedia.org/wiki/Biological%20indicator%20evaluation%20resistometer | A Biological Indicator Evaluation Resistometer (BIER) vessel is a piece of equipment used to determine the time taken to reduce the survival of a given organism by 90% (also known as a 1-log reduction). The name derives from how the equipment is used.
A BIER vessel evaluates the resistance of biological indicators to moist heat (steam) sterilization. For example, if a 90% reduction is found to take 5 minutes for the microorganism being evaluated, then a D-value of 5 is assigned. D-values are specific to the starting bioload, the substrate (the material the spores are on), and the microbe species.
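As a rough illustration of the D-value concept itself (not of how a BIER vessel operates), the sketch below computes a D-value from hypothetical survivor counts using the relation D = exposure time divided by the log10 reduction achieved; the counts and exposure time are invented for the example.

import math

def d_value(time_min, n0, nt):
    """D-value: exposure time divided by the log10 reduction in viable count."""
    return time_min / (math.log10(n0) - math.log10(nt))

# Hypothetical example: 10^6 spores reduced to 10^4 after 10 minutes of exposure,
# i.e. a 2-log reduction, giving a D-value of 5 minutes.
print(d_value(10.0, 1e6, 1e4))  # 5.0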
BIER vessels typically cost in excess of $100,000, and thus tend to be located where biological indicators are manufactured.
References
Microbiology
Antiseptics | Biological indicator evaluation resistometer | [
"Chemistry",
"Biology"
] | 164 | [
"Microbiology",
"Microscopy"
] |
23,795,093 | https://en.wikipedia.org/wiki/Croconic%20acid | Croconic acid (also known as 4,5-dihydroxycyclopentenetrione, crocic acid or pentagonic acid) is a chemical compound with formula C5H2O5. It has a cyclopentene backbone with two hydroxyl groups adjacent to the double bond and three ketone groups on the remaining carbon atoms. It is sensitive to light, soluble in water and ethanol and forms yellow crystals that decompose at 212 °C.
The compound is acidic and loses the protons from the hydroxyl groups in two steps (pKa1 and pKa2, measured at 25 °C). The resulting anions, hydrogencroconate and croconate, are also quite stable. The croconate ion, in particular, is aromatic and symmetric, as the double bond and the negative charges become delocalized over the five CO units (with two electrons, Hückel's rule means this is an aromatic configuration). The lithium, sodium and potassium croconates crystallize from water as dihydrates, but the orange potassium salt can be dehydrated to form a monohydrate. The croconates of ammonium, rubidium and caesium crystallize in the anhydrous form. Salts of barium, lead, silver, and others are also known.
Croconic acid also forms ethers such as dimethyl croconate where the hydrogen atom of the hydroxyl group is substituted with an alkyl group.
History
Croconic acid and potassium croconate dihydrate were discovered by Leopold Gmelin in 1825, who named the compounds from the Greek κρόκος (krókos), meaning "crocus" or "egg yolk". The structure of ammonium croconate was determined by Baenziger et al. in 1964. The structure of croconic acid itself was determined by Dunitz in 2001.
Structure
In the solid state, croconic acid has a peculiar structure consisting of pleated strips, each "page" of the strip being a planar ring of four croconic acid molecules held together by hydrogen bonds. In dioxane it has a large dipole moment of 9–10 D, while the free molecule is estimated to have a dipole of 7–7.5 D. The solid is ferroelectric, with a Curie point above room temperature; it is, indeed, the organic crystal with the highest spontaneous polarization (about 20 μC/cm2). This is due to proton transfer between adjacent molecules in each pleated sheet, rather than molecular rotation.
In the solid alkali metal salts, the croconate anions and the alkali cations form parallel columns. In one mixed salt, which formally contains both a croconate dianion and a hydrogencroconate monoanion, the hydrogen is shared equally by two adjacent croconate units.
Salts of the croconate anion and its derivatives are of interest in supramolecular chemistry research because of their potential for π-stacking effects, where the delocalized electrons of two stacked croconate anions interact.
Infrared and Raman assignments indicate that the equalization of the carbon–carbon bond lengths, and thus the electronic delocalization, increases with counter-ion size in these salts. This result leads to the further interpretation that the degree of aromaticity is enhanced as a function of counter-ion size. The same study provided quantum mechanical DFT calculations of the optimized structures and vibrational spectra, which were in agreement with experimental findings. The calculated theoretical indices of aromaticity also increased with counter-ion size.
The croconate anion forms hydrated crystalline coordination compounds with divalent cations of transition metals, all sharing a common general formula in which M stands for copper (yielding a brown solid), iron (dark purple), zinc (yellow), nickel (green), manganese (dark green), or cobalt (purple). These complexes all have the same orthorhombic crystal structure, consisting of chains of alternating croconate and metal ions. Each croconate is bound to the preceding metal by one oxygen atom, and to the next metal through its two opposite oxygens, leaving two oxygens unbound. Each metal is bound to three croconate oxygens and to one water molecule. Calcium also forms a compound with the same formula (yellow), but the structure appears to be different.
The croconate anion also forms compounds with trivalent cations such as aluminium (yellow), chromium (brown), and iron (purple). These compounds also include hydroxyl groups as well as hydration water and have a more complicated crystal structure. No indication was found of sandwich-type bonds between the delocalized electrons and the metal (as are seen in ferrocene, for example), but the anion can form metal complexes with a large variety of bonding patterns, involving from only one to all five of its oxygen atoms.
See also
Croconate violet
Croconate blue
Rhodizonic acid
Squaric acid
Deltic acid
Cyclopentanepentone (leuconic acid)
References
External links
Cyclopentenes
Triketones
Enediols
Organic acids | Croconic acid | [
"Chemistry"
] | 1,068 | [
"Organic acids",
"Acids",
"Organic compounds"
] |
23,795,258 | https://en.wikipedia.org/wiki/Pyrethrin%20I | Pyrethrin I is one of the two pyrethrins, natural organic compounds with potent insecticidal activity. It is an ester of (+)-trans-chrysanthemic acid with (S)-(Z)-pyrethrolone.
Total synthesis
The synthesis of pyrethrin I involves the esterification of (+)-trans-chrysanthemic acid with (S)-(Z)-pyrethrolone. One synthetic method for each of these is shown in the images below. Sobti and Dev of the Malti-Chem Research Centre in Nadesari, Vadodara, India published this method for chrysanthemic acid in 1974. The starting material for the synthesis uses commercially available (+)-3α, 4α-epoxycarane (1). A lactone is eventually formed and the ring is opened by the use of a Grignard reagent to give (+)-trans-chrysanthemic acid. The preparation of (S)-pyrethrolone is essentially a two-step synthesis. The starting material (S)-4-hydroxy-3-methyl-2-(2-propynyl)-2-cyclopenten-1-one (7) is also commercially available as the alcohol moiety of ETOC. Tetrakis(triphenylphosphine)palladium(0), copper(I) iodide, triethylamine, and vinyl bromide are added to (7) to add two more carbons and form (8). The final step is the addition of an activated zinc compound to reduce the carbon–carbon triple bond and form the cis product, (S)-pyrethrolone (9). Although no journal articles specify the combining of the alcohol and acid moieties of pyrethrin I, they could be combined through an esterification process to form the desired product.
Synthesis of the acid moiety
Synthesis of the alcohol moiety
References
Allethrins
Total synthesis | Pyrethrin I | [
"Chemistry"
] | 436 | [
"Total synthesis",
"Chemical synthesis"
] |
10,570,298 | https://en.wikipedia.org/wiki/Pr%C3%A9kopa%E2%80%93Leindler%20inequality | In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.
Statement of the inequality
Let 0 < λ < 1 and let f, g, h : Rn → [0, +∞) be non-negative real-valued measurable functions defined on n-dimensional Euclidean space Rn. Suppose that these functions satisfy
h((1 − λ)x + λy) ≥ f(x)^(1−λ) g(y)^λ
for all x and y in Rn. Then
∫Rn h(x) dx ≥ (∫Rn f(x) dx)^(1−λ) (∫Rn g(x) dx)^λ.
Essential form of the inequality
Recall that the essential supremum of a measurable function f : Rn → R is defined by
ess sup x∈Rn f(x) = inf { t ∈ [−∞, +∞] : f(x) ≤ t for almost all x ∈ Rn }.
This notation allows the following essential form of the Prékopa–Leindler inequality: let 0 < λ < 1 and let f, g ∈ L1(Rn; [0, +∞)) be non-negative absolutely integrable functions. Let
s(x) = ess sup y∈Rn f((x − y)/(1 − λ))^(1−λ) g(y/λ)^λ.
Then s is measurable and
‖s‖1 ≥ ‖f‖1^(1−λ) ‖g‖1^λ, where ‖·‖1 denotes the L1 norm.
The essential supremum form was given by Herm Brascamp and Elliott Lieb. Its use can change the left side of the inequality. For example, a function g that takes the value 1 at exactly one point will not usually yield a zero left side in the "non-essential sup" form but it will always yield a zero left side in the "essential sup" form.
Relationship to the Brunn–Minkowski inequality
It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < λ < 1 and A and B are bounded, measurable subsets of Rn such that the Minkowski sum (1 − λ)A + λB is also measurable, then
μ((1 − λ)A + λB) ≥ μ(A)^(1−λ) μ(B)^λ,
where μ denotes n-dimensional Lebesgue measure. Hence, the Prékopa–Leindler inequality can also be used to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < λ < 1 and A and B are non-empty, bounded, measurable subsets of Rn such that (1 − λ)A + λB is also measurable, then
μ((1 − λ)A + λB)^(1/n) ≥ (1 − λ) μ(A)^(1/n) + λ μ(B)^(1/n).
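As a quick numerical illustration (not part of the proof), the following sketch checks the familiar form of the inequality for axis-aligned boxes A = [0, a]^n and B = [0, b]^n, for which the combination (1 − λ)A + λB is again a box and the inequality holds with equality; the dimensions and λ are arbitrary example values.

# Numerical sanity check of the Brunn-Minkowski inequality for boxes A = [0, a]^n and B = [0, b]^n.
# For boxes, (1 - lam)*A + lam*B = [0, (1 - lam)*a + lam*b]^n, so the inequality holds with equality.
n, a, b, lam = 3, 2.0, 5.0, 0.4

vol_A, vol_B = a ** n, b ** n
vol_combination = ((1 - lam) * a + lam * b) ** n

lhs = vol_combination ** (1 / n)
rhs = (1 - lam) * vol_A ** (1 / n) + lam * vol_B ** (1 / n)
print(lhs, rhs)           # both approximately 3.2
print(lhs + 1e-9 >= rhs)  # True (equality for boxes, up to floating-point rounding)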
Applications in probability and statistics
Log-concave distributions
The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved by marginalization and independent summation of log-concave distributed random variables. Since, if have pdf , and are independent, then is the pdf of , we also have that the convolution of two log-concave functions is log-concave.
Suppose that H(x,y) is a log-concave distribution for (x,y) ∈ Rm × Rn, so that by definition we have
H((1 − λ)(x1,y1) + λ(x2,y2)) ≥ H(x1,y1)^(1−λ) H(x2,y2)^λ,
and let M(y) denote the marginal distribution obtained by integrating over x:
M(y) = ∫Rm H(x,y) dx.
Let y1, y2 ∈ Rn and 0 < λ < 1 be given. Then the log-concavity condition above is exactly the hypothesis of the Prékopa–Leindler inequality with h(x) = H(x,(1 − λ)y1 + λy2), f(x) = H(x,y1) and g(x) = H(x,y2), so the inequality applies. Its conclusion can be written in terms of M as
M((1 − λ)y1 + λy2) ≥ M(y1)^(1−λ) M(y2)^λ,
which is the definition of log-concavity for M.
To see how this implies the preservation of log-concavity by independent sums, suppose that X and Y are independent random variables with log-concave distributions. Since the product of two log-concave functions is log-concave, the joint distribution of (X,Y) is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of (X + Y, X − Y) is log-concave as well. Since the distribution of X + Y is a marginal over the joint distribution of (X + Y, X − Y), we conclude that X + Y has a log-concave distribution.
Applications to concentration of measure
The Prékopa–Leindler inequality can be used to prove results about concentration of measure.
Theorem Let A ⊆ Rn be a measurable set, let t > 0, and set At = {x ∈ Rn : d(x, A) ≤ t}. Let φ denote the standard Gaussian pdf, and γ its associated measure. Then γ(At) ≥ 1 − e^(−t²/4)/γ(A).
The proof of this theorem goes by way of the following lemma:
Lemma In the notation of the theorem, ∫Rn exp(d(x, A)²/4) dγ(x) ≤ 1/γ(A).
This lemma can be proven from Prékopa–Leindler by taking λ = 1/2, f(x) = exp(d(x, A)²/4) φ(x), g(y) = 1A(y) φ(y) (where 1A is the indicator function of A) and h = φ. To verify the hypothesis of the inequality, φ((x + y)/2) ≥ √(f(x) g(y)), note that we only need to consider y ∈ A, in which case d(x, A) ≤ |x − y|. This allows us to calculate:
f(x) g(y) ≤ exp(|x − y|²/4) φ(x) φ(y) = (2π)^(−n) exp(|x − y|²/4 − (|x|² + |y|²)/2) = (2π)^(−n) exp(−|x + y|²/4) = φ((x + y)/2)².
Since ∫Rn φ(x) dx = 1, the PL-inequality immediately gives the lemma.
To conclude the concentration inequality from the lemma, note that on Rn \ At we have d(x, A) > t, so that 1Rn\At(x) ≤ exp((d(x, A)² − t²)/4). Applying the lemma and rearranging proves the result.
References
Further reading
Geometric inequalities
Integral geometry
Real analysis
Theorems in analysis
Theorems in measure theory | Prékopa–Leindler inequality | [
"Mathematics"
] | 1,031 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in measure theory",
"Mathematical theorems",
"Geometric inequalities",
"Inequalities (mathematics)",
"Theorems in geometry",
"Mathematical problems"
] |
10,571,004 | https://en.wikipedia.org/wiki/Biological%20network%20inference | Biological network inference is the process of making inferences and predictions about biological networks. By using these networks to analyze patterns in biological systems, such as food-webs, we can visualize the nature and strength of these interactions between species, DNA, proteins, and more.
The analysis of biological networks with respect to diseases has led to the development of the field of network medicine. Recent examples of the application of network theory in biology include applications to understanding the cell cycle as well as a quantitative framework for developmental processes. Good network inference requires proper planning and execution of an experiment, thereby ensuring quality data acquisition. Optimal experimental design in principle refers to the use of statistical and/or mathematical concepts to plan for data acquisition. This must be done in such a way that the data's information content is enriched and a sufficient amount of data is collected, with enough technical and biological replicates where necessary.
Steps
The general cycle to modeling biological networks is as follows:
Prior knowledge
Involves a thorough literature and database search or seeking an expert's opinion.
Model selection
A formalism to model the system, usually ordinary differential equations, a Boolean network, or linear regression models (e.g. least-angle regression), a Bayesian network, or information theory approaches. Inference can also be done by applying a correlation-based inference algorithm, as discussed below, an approach which has had increasing success as the size of the available microarray data sets keeps increasing.
Hypothesis/assumptions
Experimental design
Data acquisition
Ensure that high quality data is collected with all the required variables being measured
Network inference
This process is mathematically rigorous and computationally costly.
Model refinement
Cross-check how well the results meet the expectations. The process is terminated once a good model fit to the data is obtained; otherwise, the model needs to be re-adjusted.
Biological networks
A network is a set of nodes and a set of directed or undirected edges between the nodes. Many types of biological networks exist, including transcriptional, signalling and metabolic. Few such networks are known in anything approaching their complete structure, even in the simplest bacteria. Still less is known about the parameters governing the behavior of such networks over time, how the networks at different levels in a cell interact, and how to predict the complete state description of a eukaryotic cell or bacterial organism at a given point in the future. Systems biology, in this sense, is still in its infancy.
There is great interest in network medicine for the modelling of biological systems. This article focuses on inference of biological network structure using the growing sets of high-throughput expression data for genes, proteins, and metabolites. Briefly, methods using high-throughput data for inference of regulatory networks rely on searching for patterns of partial correlation or conditional probabilities that indicate causal influence. Such patterns of partial correlations found in the high-throughput data, possibly combined with other supplemental data on the genes or proteins in the proposed networks, or combined with other information on the organism, form the basis upon which such algorithms work. Such algorithms can be of use in inferring the topology of any network where the change in state of one node can affect the state of other nodes.
Transcriptional regulatory networks
Genes are the nodes and the edges are directed. A gene serves as the source of a direct regulatory edge to a target gene by producing an RNA or protein molecule that functions as a transcriptional activator or inhibitor of the target gene. If the gene is an activator, then it is the source of a positive regulatory connection; if an inhibitor, then it is the source of a negative regulatory connection. Computational algorithms take as primary input data measurements of mRNA expression levels of the genes under consideration for inclusion in the network, returning an estimate of the network topology. Such algorithms are typically based on linearity, independence or normality assumptions, which must be verified on a case-by-case basis. Clustering or some form of statistical classification is typically employed to perform an initial organization of the high-throughput mRNA expression values derived from microarray experiments, in particular to select sets of genes as candidates for network nodes. The question then arises: how can the clustering or classification results be connected to the underlying biology? Such results can be useful for pattern classification – for example, to classify subtypes of cancer, or to predict differential responses to a drug (pharmacogenomics). But to understand the relationships between the genes, that is, to more precisely define the influence of each gene on the others, the scientist typically attempts to reconstruct the transcriptional regulatory network.
Gene co-expression networks
A gene co-expression network is an undirected graph, where each node corresponds to a gene, and a pair of nodes is connected with an edge if there is a significant co-expression relationship between them.
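A minimal sketch of how such a network might be built from an expression matrix, assuming a simple Pearson-correlation threshold as the criterion for "significant co-expression"; the threshold, gene names, and expression values below are illustrative, not a standard from the literature.

import numpy as np

def coexpression_edges(expr, gene_names, threshold=0.8):
    """expr: genes x samples matrix; return undirected edges whose |Pearson r| meets the threshold."""
    corr = np.corrcoef(expr)
    edges = []
    for i in range(len(gene_names)):
        for j in range(i + 1, len(gene_names)):
            if abs(corr[i, j]) >= threshold:
                edges.append((gene_names[i], gene_names[j], round(float(corr[i, j]), 3)))
    return edges

# Illustrative data: 4 genes measured across 6 samples (g2 tracks g1 by construction)
rng = np.random.default_rng(0)
base = rng.normal(size=6)
expr = np.vstack([base, base + 0.05 * rng.normal(size=6), rng.normal(size=6), rng.normal(size=6)])
print(coexpression_edges(expr, ["g1", "g2", "g3", "g4"]))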
Signal transduction
Signal transduction networks use proteins for the nodes and directed edges to represent interactions in which the biochemical conformation of the child is modified by the action of the parent (e.g. mediated by phosphorylation, ubiquitylation, methylation, etc.). Primary input into the inference algorithm would be data from a set of experiments measuring protein activation / inactivation (e.g., phosphorylation / dephosphorylation) across a set of proteins. Inference for such signalling networks is complicated by the fact that total concentrations of signalling proteins will fluctuate over time due to transcriptional and translational regulation. Such variation can lead to statistical confounding. Accordingly, more sophisticated statistical techniques must be applied to analyse such datasets; this is particularly important in the biology of cancer.
Metabolic network
Metabolite networks use nodes to represent chemical reactions and directed edges for the metabolic pathways and regulatory interactions that guide these reactions. Primary input into an algorithm would be data from a set of experiments measuring metabolite levels.
Protein-protein interaction networks
One of the most intensely studied classes of networks in biology, protein-protein interaction networks (PINs) visualize the physical relationships between proteins inside a cell. In a PIN, proteins are the nodes and their interactions are the undirected edges. PINs can be discovered with a variety of methods, including two-hybrid screening and in vitro methods such as co-immunoprecipitation and blue native gel electrophoresis.
Neuronal network
A neuronal network represents neurons as nodes and synapses as edges, which are typically weighted and directed. The edge weights are usually adjusted based on the activation of the connected nodes. The network is usually organized into input layers, hidden layers, and output layers.
Food webs
A food web is an interconnected directional graph of what eats what in an ecosystem. The members of the ecosystem are the nodes and if a member eats another member then there is a directed edge between those 2 nodes.
Within species and between species interaction networks
These networks are defined by sets of pairwise interactions between and within species that are used to understand the structure and function of larger ecological networks. By using network analysis we can discover and understand how these interactions link together within the system's network. It also allows us to quantify associations between individuals, which makes it possible to infer details about the network as a whole at the species and/or population level.
DNA-DNA chromatin networks
DNA-DNA chromatin networks are used to clarify the activation or suppression of genes via the relative location of strands of chromatin. These interactions can be understood by analyzing commonalities amongst different loci, a fixed position on a chromosome where a particular gene or genetic marker is located. Network analysis can provide vital support in understanding relationships among different areas of the genome.
Gene regulatory networks
A gene regulatory network is a set of molecular regulators that interact with each other and with other substances in the cell. The regulators can be DNA, RNA, proteins and complexes of these. Gene regulatory networks can be modeled in numerous ways, including coupled ordinary differential equations, Boolean networks, continuous networks, and stochastic gene networks.
Network attributes
Data sources
The initial data used to make the inference can have a huge impact on the accuracy of the final inference. Network data is inherently noisy and incomplete, sometimes because evidence from multiple sources does not overlap or is contradictory. Data can be sourced in multiple ways, including manual curation of the scientific literature into databases, high-throughput datasets, computational predictions, and text mining of old scholarly articles from before the digital era.
Network diameter
A network's diameter is the maximum number of steps separating any two nodes. It can be used to assess how connected a graph is and is used in topology analysis and clustering analysis.
Transitivity
The transitivity or clustering coefficient of a network is a measure of the tendency of the nodes to cluster together. High transitivity means that the network contains communities or groups of nodes that are densely connected internally. In biological networks, finding these communities is very important, because they can reflect functional modules and protein complexes.
The uncertainty about the connectivity may distort the results and should be taken into account when the transitivity and other topological descriptors are computed for inferred networks.
Network confidence
Network confidence is a way to measure how sure one can be that the network represents a real biological interaction. We can do this via contextual biological information, by counting the number of times an interaction is reported in the literature, or by grouping different strategies into a single score. The MIscore method for assessing the reliability of protein-protein interaction data is based on the use of standards. MIscore gives an estimate of confidence by weighting all the available evidence for an interacting pair of proteins. The method allows weighting of evidence provided by different sources, provided the data is represented following the standards created by the IMEx consortium. The weights reflect the number of publications, the detection methods, and the interaction evidence types.
Closeness
Closeness, a.k.a. closeness centrality, is a measure of centrality in a network and is calculated as the reciprocal of the sum of the length of the shortest paths between the node and all other nodes in the graph. This measure can be used to make inferences in all graph types and analysis methods.
Betweenness
Betweenness, a.k.a. betweenness centrality, is a measure of centrality in a graph based on shortest paths. The betweenness of each node is the number of these shortest paths that pass through the node.
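As an illustrative sketch of both measures, the snippet below computes closeness and betweenness centrality with the NetworkX library on a small made-up undirected graph; the nodes and edges are hypothetical and stand in for, e.g., proteins and their interactions.

import networkx as nx

# Toy undirected network; nodes might represent proteins and edges their interactions.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")])

print(nx.closeness_centrality(G))    # based on the reciprocal of summed shortest-path distances
print(nx.betweenness_centrality(G))  # fraction of shortest paths that pass through each node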
Network analysis methods
For our purposes, network analysis is closely related to graph theory. By measuring the attributes in the previous section we can utilize many different techniques to create accurate inferences based on biological data.
Topology analysis
Topology analysis analyzes the topology of a network to identify relevant participants and substructures that may be of biological significance. The term encompasses an entire class of techniques such as network motif search, centrality analysis, topological clustering, and shortest paths. These are but a few examples; each of these techniques uses the general idea of focusing on the topology of a network to make inferences.
Network Motif Search
A motif is defined as a frequent and unique sub-graph. By counting all the possible instances, listing all patterns, and testing isomorphisms we can derive crucial information about a network. Motifs have been suggested to be the basic building blocks of complex biological networks. Computational research has focused on improving existing motif detection tools to assist biological investigations and allow larger networks to be analyzed. Several different algorithms have been provided so far, which are elaborated in the next section.
Centrality Analysis
Centrality gives an estimation of how important a node or edge is for the connectivity or the information flow of the network. It is a useful parameter in signalling networks and is often used when trying to find drug targets. It is most commonly used in PINs to determine important proteins and their functions. Centrality can be measured in different ways depending on the graph and the question that needs answering; examples include node degree (the number of edges connected to a node), global centrality measures, and random-walk-based measures such as the one used by Google's PageRank algorithm to assign a weight to each webpage.
The centrality measures may be affected by errors due to noise on measurement and other causes. Therefore, the topological descriptors should be defined as random variable with the associated probability distribution encoding the uncertainty on their value.
Topological Clustering
Topological clustering or topological data analysis (TDA) provides a general framework to analyze high-dimensional, incomplete, and noisy data in a way that reduces dimensionality and gives robustness to noise. The idea is that the shape of a data set contains relevant information. When this information is a homology group, there is a mathematical interpretation that assumes that features persisting over a wide range of parameters are "true" features and features persisting over only a narrow range of parameters are noise, although the theoretical justification for this is unclear. This technique has been used for progression analysis of disease, viral evolution, propagation of contagions on networks, bacteria classification using molecular spectroscopy, and much more in and outside of biology.
Shortest paths
The shortest path problem is a common problem in graph theory that seeks the path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. This method can be used to determine the network diameter or redundancy in a network. There are many algorithms for this, including Dijkstra's algorithm, the Bellman–Ford algorithm, and the Floyd–Warshall algorithm, to name a few.
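For illustration, here is a compact sketch of Dijkstra's algorithm on a small weighted graph; the graph, node names, and weights are made up for the example.

import heapq

def dijkstra(graph, source):
    """graph: dict node -> list of (neighbor, weight); returns shortest distances from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 2.0), ("D", 5.0)], "C": [("D", 1.0)], "D": []}
print(dijkstra(g, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 3.0, 'D': 4.0}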
Clustering analysis
Cluster analysis groups objects (nodes) such that objects in the same cluster are more similar to each other than to those in other clusters. This can be used to perform pattern recognition, image analysis, information retrieval, statistical data analysis, and much more. It has applications in plant and animal ecology, sequence analysis, antimicrobial activity analysis, and many other fields. Cluster analysis algorithms also come in many forms, such as hierarchical clustering, k-means clustering, distribution-based clustering, density-based clustering, and grid-based clustering.
Annotation enrichment analysis
Gene annotation databases are commonly used to evaluate the functional properties of experimentally derived gene sets. Annotation Enrichment Analysis (AEA) is used to overcome biases from overlap statistical methods used to assess these associations. It does this by using gene/protein annotations to infer which annotations are over-represented in a list of genes/proteins taken from a network.
Network analysis tools
See also
Cytoscape tool
Bayesian probability
Network medicine
References
Bioinformatics
Systems biology
Inference | Biological network inference | [
"Engineering",
"Biology"
] | 2,972 | [
"Bioinformatics",
"Biological engineering",
"Systems biology"
] |
10,571,012 | https://en.wikipedia.org/wiki/Localizer%20type%20directional%20aid | A localizer type directional aid (LDA) or Instrument Guidance System (IGS) is a type of localizer-based instrument approach to an airport. It is used in places where, due to terrain and other factors, the localizer antenna array is not aligned with the runway it serves. In these cases, the localizer antenna array may be offset (i.e. pointed or aimed) in such a way that the approach course it projects no longer lies along the extended runway centerline (which is the norm for non-offset and non-LDA localizer systems). If the angle of offset is three degrees or less, the facility is classified as an offset localizer. If the offset angle is greater than three degrees, the facility is classified as a localizer-type directional aid (LDA). Straight-in approaches may be published if the offset angle does not exceed 30 degrees. Only circling minima are published for offset angles greater than 30 degrees. As a "directional aid", and only a Category I (CAT I) approach, rather than a full-fledged instrument landing system (ILS), the LDA is more commonly used to help the pilot safely reach a point near the runway environs, from which the pilot can hopefully see the runway and continue visually to a landing, as opposed to (for example) full Category III (CAT III) ILS systems that allow a pilot to fly, without visual references, very close to the runway surface (usually about 100 ft) depending on the exact equipment in the aircraft and on the ground.
An LDA uses exactly the same equipment to create the course as a standard localizer used in ILS. An LDA approach also is designed with a normal course width, which is typically 3 to 6 degrees. (At each "edge-of-course", commonly 1.5 or 3 degrees left and right of course, the transmitted signal is created in such a way as to ensure full-scale CDI needle deflection at and beyond these edges, so the pilot will never falsely believe they are intercepting the course outside of the actual course area. The area between these full-scale needle deflections is what defines the course width.) An LDA approach (considered a non-precision approach) may have one or more marker beacons, perhaps a DME, and in rare instances a glide slope, just as other precision approaches have, such as ILS approaches.
If the offset is not greater than 30 degrees, straight-in approach minima may be published; circling minima only are published when offset exceeds 30 degrees.
List of LDA approaches in the United States
The following 25 LDA approaches are available in the United States (as of November 2023):
PAJN, LDA X RWY 08, Juneau International Airport, Juneau, AK
PAPG, LDA/DME-D, Petersburg James A. Johnson Airport, Petersburg, AK
PASI, LDA/DME RWY 11, Sitka Rocky Gutierrez Airport, Sitka, AK
PAVD, LDA/DME-H, Valdez Pioneer Field, Valdez, AK
PAWG, LDA/DME-C and LDA/DME-D, Wrangell Airport, Wrangell, AK
KFYV, LDA/DME RWY 34, Drake Field, Fayetteville, AR
KBIH, LDA/DME RWY 17, Eastern Sierra Regional Airport, Bishop, CA
KCCR, LDA RWY 19R, Buchanan Field, Concord, CA
KSNA, LDA/DME RWY 20R, John Wayne Airport–Orange County, Santa Ana, CA
KTVL, LDA RWY 18, Lake Tahoe Airport, South Lake Tahoe, CA
KVNY, LDA-C, Van Nuys Airport, Van Nuys, CA
KEGE, LDA/DME RWY 25, Eagle County Regional Airport, Eagle, CO
KGJT, LDA/DME RWY 29, Grand Junction Regional Airport, Grand Junction, CO
KHFD, LDA RWY 02, Hartford–Brainard Airport, Hartford, CT
KDCA, LDA Y RWY 19, Ronald Reagan Washington National Airport, Washington, DC (No longer in use)
KDCA, LDA Z RWY 19, Ronald Reagan Washington National Airport, Washington, DC
PHNL, LDA/DME RWY 26L, Honolulu International Airport, Honolulu, HI
KEKO, LDA/DME RWY 24, Elko Regional Airport, Elko, NV
KDLS, LDA/DME RWY 25 and COPTER LDA/DME RWY 25, Columbia Gorge Regional Airport, The Dalles, OR
KAMA, LDA/DME RWY 22, Rick Husband Amarillo International Airport, Amarillo, TX
KSGU, LDA/DME RWY 19, St. George Regional Airport, Saint George, UT
KSLC, LDA/DME RWY 35, Salt Lake City International Airport, Salt Lake City, UT
KROA, LDA Y RWY 06 and LDA Z RWY 06, Roanoke Regional Airport, Roanoke, VA
KEKN, LDA-C, Elkins–Randolph County–Jennings Randolph Field, Elkins, WV
W99, LDA/DME-B, Grant County Airport, Petersburg, WV
List of LDA approaches outside the United States
BIAR, LDA RWY 01, offset 26°, Akureyri Airport, Akureyri, Iceland
EKVG, LDA RWYs 12 & 30, offset 14° & 2° respectively, Vágar Airport, Vágar, Faroe Islands
LQMO, IGS RWY 33, offset 21°, Mostar Airport, Mostar, Bosnia and Herzegovina
LYTV, LDA RWY 32, offset 20°, Tivat Airport, Tivat, Montenegro
LSGS, IGS RWY 25, offset 6.5° Sion Airport, Sion, Switzerland
NVVV, LDA RWY 11, offset 26°, Bauerfield International Airport, Port Vila, Vanuatu
RJTT, LDA RWYs 22 & 23, offset 55° & 48° respectively, Tokyo International Airport, Tokyo, Japan
RCBS, LDA RWY 24, offset 15°, Kinmen Airport, Kinmen, Taiwan
RCFN, LDA RWY 04, offset 15°, Taitung Fongnian Airport, Taitung, Taiwan
RCSS, LDA RWY 28, offset 8°, Taipei Songshan Airport, Taipei, Taiwan
List of decommissioned LDA approaches
This list is incomplete
(X)VHHH, IGS RWY 13, Kai Tak Airport, Kowloon Bay, Hong Kong
LLBG, LDA RWY 30, offset 11°, Ben Gurion Airport, Tel Aviv, Israel
KVUO, LDA-A, Pearson Field, Vancouver, WA
See also
Instrument approach
Instrument landing system
Simplified directional facility
Localizer
References
Aeronautical Information Manual (AIM), published in the USA by Federal Aviation Administration
Aircraft instruments
Radio navigation | Localizer type directional aid | [
"Technology",
"Engineering"
] | 1,478 | [
"Aircraft instruments",
"Measuring instruments"
] |
10,571,090 | https://en.wikipedia.org/wiki/Tailstock | A tailstock, also known as a foot stock, is a device often used as part of an engineering lathe, wood-turning lathe, or used in conjunction with a rotary table on a milling machine.
It is usually used to apply support to the longitudinal rotary axis of a workpiece being machined. A lathe center is mounted in the tailstock and inserted against the sides of a hole in the center of the workpiece. A tailstock is particularly useful when the workpiece is relatively long and slender. Failing to use a tailstock can cause "chatter," where the workpiece bends excessively while being cut. This bending can also cause finished parts to exhibit an unintended taper, where the unsupported end of the part is larger in diameter than the end supported by the headstock.
It is also used on a lathe to hold drilling or reaming tools for machining a hole in the workpiece. Unlike drilling with a drill press or a milling machine, the tool is stationary while the workpiece rotates. Holes can only be cut along the axis about which the workpiece spins.
Usually, the entire tailstock is moved to the approximate position that it will be needed by manually sliding it along its ways. There, it is locked in place and the tool mounted to it is moved with a leadscrew to the exact position where it is needed. When a cutting tool such as a drill bit or reamer is used, the feed is done with this leadscrew. The extendible portion of the tailstock is called the barrel, and usually has a Morse taper mount in the end of it to secure the drill or reamer. If the work is heavy, the drill may be further secured from turning with a lathe dog as shown in the photo.
References
Machine tools | Tailstock | [
"Engineering"
] | 378 | [
"Machine tools",
"Industrial machinery"
] |
10,573,305 | https://en.wikipedia.org/wiki/K-mer | In bioinformatics, k-mers are substrings of length k contained within a biological sequence. Primarily used within the context of computational genomics and sequence analysis, in which k-mers are composed of nucleotides (i.e. A, T, G, and C), k-mers are capitalized upon to assemble DNA sequences, improve heterologous gene expression, identify species in metagenomic samples, and create attenuated vaccines. Usually, the term k-mer refers to all of a sequence's subsequences of length k, such that the sequence AGAT would have four monomers (A, G, A, and T), three 2-mers (AG, GA, AT), two 3-mers (AGA and GAT) and one 4-mer (AGAT). More generally, a sequence of length L will have L − k + 1 k-mers and n^k total possible k-mers, where n is the number of possible monomers (e.g. four in the case of DNA).
Introduction
k-mers are simply length-k subsequences. For example, for the DNA sequence GTAGAGCTGT, the complete set of 3-mers is GTA, TAG, AGA, GAG, AGC, GCT, CTG, and TGT.
A method of visualizing k-mers, the k-mer spectrum, shows the multiplicity of each k-mer in a sequence versus the number of k-mers with that multiplicity. The number of modes in a k-mer spectrum for a species's genome varies, with most species having a unimodal distribution. However, all mammals have a multimodal distribution. The number of modes within a k-mer spectrum can vary between regions of genomes as well: humans have unimodal k-mer spectra in 5' UTRs and exons but multimodal spectra in 3' UTRs and introns.
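A minimal sketch of how a k-mer spectrum could be computed: count the multiplicity of each k-mer in a sequence, then tabulate how many distinct k-mers occur at each multiplicity. The sequence and value of k below are toy values, not from any real genome.

from collections import Counter

def kmer_spectrum(seq, k):
    """Return {multiplicity: number of distinct k-mers occurring with that multiplicity}."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return Counter(counts.values())

# Toy sequence: ATG occurs 3 times, TGA and GAT twice each, TGC once
print(kmer_spectrum("ATGATGATGC", 3))  # Counter({2: 2, 3: 1, 1: 1})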
Forces affecting DNA k-mer frequency
The frequency of k-mer usage is affected by numerous forces, working at multiple levels, which are often in conflict. It is important to note that k-mers for higher values of k are affected by the forces affecting lower values of k as well. For example, if the 1-mer A does not occur in a sequence, none of the 2-mers containing A (AA, AT, AG, and AC) will occur either, thereby linking the effects of the different forces.
k = 1
When k = 1, there are four DNA k-mers, i.e., A, T, G, and C. At the molecular level, there are three hydrogen bonds between G and C, whereas there are only two between A and T. GC bonds, as a result of the extra hydrogen bond (and stronger stacking interactions), are more thermally stable than AT bonds. Mammals and birds have a higher ratio of Gs and Cs to As and Ts (GC-content), which led to the hypothesis that thermal stability was a driving factor of GC-content variation. However, while promising, this hypothesis did not hold up under scrutiny: analysis among a variety of prokaryotes showed no evidence of GC-content correlating with temperature as the thermal adaptation hypothesis would predict. Indeed, if natural selection were to be the driving force behind GC-content variation, that would require that single nucleotide changes, which are often silent, alter the fitness of an organism.
Rather, current evidence suggests that GC‐biased gene conversion (gBGC) is a driving factor behind variation in GC content. gBGC is a process that occurs during recombination which replaces As and Ts with Gs and Cs. This process, though distinct from natural selection, can nevertheless exert selective pressure on DNA biased towards GC replacements being fixed in the genome. gBGC can therefore be seen as an "impostor" of natural selection. As would be expected, GC content is greater at sites experiencing greater recombination. Furthermore, organisms with higher rates of recombination exhibit higher GC content, in keeping with the gBGC hypothesis's predicted effects. Interestingly, gBGC does not appear to be limited to eukaryotes. Asexual organisms such as bacteria and archaea also experience recombination by means of gene conversion, a process of homologous sequence replacement resulting in multiple identical sequences throughout the genome. That recombination is able to drive up GC content in all domains of life suggests that gBGC is universally conserved. Whether gBGC is a (mostly) neutral byproduct of the molecular machinery of life or is itself under selection remains to be determined. The exact mechanism and evolutionary advantage or disadvantage of gBGC is currently unknown.
k = 2
Despite the comparatively large body of literature discussing GC-content biases, relatively little has been written about dinucleotide biases. What is known is that these dinucleotide biases are relatively constant throughout the genome, unlike GC-content, which, as seen above, can vary considerably. This is an important insight that must not be overlooked. If dinucleotide bias were subject to pressures resulting from translation, then there would be differing patterns of dinucleotide bias in coding and noncoding regions driven by some dinucleotides' reduced translational efficiency. Because there are no such differing patterns, it can be inferred that the forces modulating dinucleotide bias are independent of translation. Further evidence against translational pressures affecting dinucleotide bias is the fact that the dinucleotide biases of viruses, which rely heavily on translational efficiency, are shaped by their viral family more than by their hosts, whose translational machinery the viruses hijack.
Counter to gBGC's increasing GC-content is CG suppression, which reduces the frequency of CG 2-mers due to deamination of methylated CG dinucleotides, resulting in substitutions of CGs with TGs, thereby reducing the GC-content. This interaction highlights the interrelationship between the forces affecting k-mers for varying values of k.
One interesting fact about dinucleotide bias is that it can serve as a "distance" measurement between phylogenetically similar genomes. The genomes of pairs of organisms that are closely related share more similar dinucleotide biases than between pairs of more distantly related organisms.
k = 3
There are twenty natural amino acids that are used to build the proteins that DNA encodes. However, there are only four nucleotides. Therefore, there cannot be a one-to-one correspondence between nucleotides and amino acids. Similarly, there are 16 2-mers, which is also not enough to unambiguously represent every amino acid. However, there are 64 distinct 3-mers in DNA, which is enough to uniquely represent each amino acid. These non-overlapping 3-mers are called codons. While each codon only maps to one amino acid, each amino acid can be represented by multiple codons. Thus, the same amino acid sequence can have multiple DNA representations. Interestingly, each codon for an amino acid is not used in equal proportions. This is called codon-usage bias (CUB). When k = 3, a distinction must be made between true 3-mer frequency and CUB. For example, the sequence ATGGCA has four 3-mer words within it (ATG, TGG, GGC, and GCA) while only containing two codons (ATG and GCA). However, CUB is a major driving factor of 3-mer usage bias (accounting for up to ⅓ of it, since ⅓ of the k-mers in a coding region are codons) and will be the main focus of this section.
The exact cause of variation between the frequencies of various codons is not fully understood. It is known that codon preference is correlated with tRNA abundances, with codons matching more abundant tRNAs being correspondingly more frequent and that more highly expressed proteins exhibit greater CUB. This suggests that selection for translational efficiency or accuracy is the driving force behind CUB variation.
k = 4
Similar to the effect seen in dinucleotide bias, the tetranucleotide biases of phylogenetically similar organisms are more similar than between less closely related organisms. The exact cause of variation in tetranucleotide bias is not well understood, but it has been hypothesized to be the result of the maintenance of genetic stability at the molecular level.
Applications
The frequency of a set of k-mers in a species's genome, in a genomic region, or in a class of sequences can be used as a "signature" of the underlying sequence. Comparing these frequencies is computationally easier than sequence alignment and is an important method in alignment-free sequence analysis. It can also be used as a first stage analysis before an alignment.
Sequence assembly
In sequence assembly, k-mers are used during the construction of De Bruijn graphs. In order to create a De Bruijn graph, the k-mers stored in each edge, with length k, must overlap another k-mer in another edge by k − 1 in order to create a vertex. Reads generated by next-generation sequencing typically have varying read lengths. For example, Illumina's sequencing technology captures reads of 100-mers. However, the problem with the sequencing is that only small fractions of all the possible 100-mers that are present in the genome are actually generated. This is due to read errors, but more importantly, to simple coverage holes that occur during sequencing. The problem is that these small fractions of the possible k-mers violate the key assumption of De Bruijn graphs that every k-mer read must overlap its adjoining k-mer in the genome by k − 1 (which cannot occur when not all of the possible k-mers are present).
The solution to this problem is to break these k-mer sized reads into smaller k-mers, such that the resulting smaller k-mers will represent all the possible k-mers of that smaller size that are present in the genome. Furthermore, splitting the k-mers into smaller sizes also helps alleviate the problem of different initial read lengths. For example, if a set of five reads does not account for all the possible 7-mers of the genome, a De Bruijn graph cannot be created from the 7-mers; but when those reads are split into 4-mers, the resultant subsequences can be numerous enough to reconstruct the genome using a De Bruijn graph.
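A minimal sketch of this idea: break each read into smaller k-mers and record the (k − 1)-overlap edges that would make up a De Bruijn graph. The reads and the value of k are invented for illustration.

from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Map each (k-1)-mer prefix to the (k-1)-mer suffixes it connects to via an observed k-mer."""
    edges = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[kmer[:-1]].add(kmer[1:])
    return edges

reads = ["ATGGCGT", "GGCGTGC", "CGTGCAA"]   # hypothetical short reads
print(dict(de_bruijn_edges(reads, 4)))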
Beyond being used directly for sequence assembly, k-mers can also be used to detect genome mis-assembly by identifying k-mers that are overrepresented which suggest the presence of repeated DNA sequences that have been combined. In addition, k-mers are also used to detect bacterial contamination during eukaryotic genome assembly, an approach borrowed from the field of metagenomics.
Choice of k-mer size
The choice of the k-mer size has many different effects on the sequence assembly. These effects vary greatly between lower sized and larger sized k-mers. Therefore, an understanding of the different k-mer sizes must be achieved in order to choose a suitable size that balances the effects. The effects of the sizes are outlined below.
Lower k-mer sizes
A lower k-mer size will decrease the amount of edges stored in the graph, and as such, will help decrease the amount of space required to store DNA sequence.
Having smaller sizes will increase the chance for all the k-mers to overlap, and as such, have the required subsequences in order to construct the De Bruijn graph.
However, by having smaller sized k-mers, you also risk having many vertices in the graph leading into a single k-mer. Therefore, this will make the reconstruction of the genome more difficult as there is a higher level of path ambiguities due to the larger amount of vertices that will need to be traversed.
Information is lost as the k-mers become smaller. E.g., the probability of the specific sequence AGTCGTAGATGCTG occurring is much lower than that of ACGT, and as such, it holds a greater amount of information (refer to entropy (information theory) for more information).
Smaller k-mers also have the problem of not being able to resolve areas in the DNA where small microsatellites or repeats occur. This is because smaller k-mers will tend to sit entirely within the repeat region, making it hard to determine the amount of repetition that has actually taken place.
E.g. For the subsequence ATGTGTGTGTGTGTACG, the amount of repetition of TG will be lost if a k-mer size of less than 16 is chosen. This is because most of the k-mers will sit within the repeated region and may simply be discarded as repeats of the same k-mer instead of reflecting the amount of repetition.
Higher k-mer sizes
Having larger sized k-mers will increase the number of edges in the graph, which in turn, will increase the amount of memory needed to store the DNA sequence.
By increasing the size of the k-mers, the number of vertices will also decrease. This will help with the construction of the genome as there will be fewer paths to traverse in the graph.
Larger k-mers also run a higher risk of not having outward vertices from every k-mer. This is because larger k-mers increase the risk that a given k-mer will not overlap with another k-mer by k − 1. Therefore, this can lead to disjoints in the reads and, as such, to a greater number of smaller contigs.
Larger k-mer sizes help alleviate the problem of small repeat regions. This is due to the fact that the k-mer will contain a balance of the repeat region and the adjoining DNA sequences (given it is of a large enough size), which can help to resolve the amount of repetition in that particular area.
Genetics and Genomics
With respect to disease, dinucleotide bias has been applied to the detection of genetic islands associated with pathogenicity. Prior work has also shown that tetranucleotide biases are able to effectively detect horizontal gene transfer in both prokaryotes and eukaryotes.
Another application of k-mers is in genomics-based taxonomy. For example, GC-content has been used to distinguish between species of Erwinia with moderate success. Similar to the direct use of GC-content for taxonomic purposes is the use of Tm, the melting temperature of DNA. Because GC bonds are more thermally stable, sequences with higher GC content exhibit a higher Tm. In 1987, the Ad Hoc Committee on Reconciliation of Approaches to Bacterial Systematics proposed the use of ΔTm as factor in determining species boundaries as part of the phylogenetic species concept, though this proposal does not appear to have gained traction within the scientific community.
Other applications within genetics and genomics include:
RNA isoform quantification from RNA-seq data
Classification of human mitochondrial haplogroup
Detection of recombination sites in genomes
Estimation of genome size using k-mer frequency vs k-mer depth
Characterization of CpG islands by flanking regions
De novo detection of repeated sequence such as transposable element
DNA barcoding of species.
Characterization of protein-binding sequence motifs
Identification of mutation or polymorphism using next generation sequencing data
Metagenomics
k-mer frequency and spectrum variation is heavily used in metagenomics for both analysis and binning. In binning, the challenge is to separate sequencing reads into "bins" of reads for each organism (or operational taxonomic unit), which will then be assembled. TETRA is a notable tool that takes metagenomic samples and bins them into organisms based on their tetranucleotide (k = 4) frequencies. Other tools that similarly rely on k-mer frequency for metagenomic binning are CompostBin (k = 6), PCAHIER, PhyloPythia (5 ≤ k ≤ 6), CLARK (k ≥ 20), and TACOA (2 ≤ k ≤ 6). Recent developments have also applied deep learning to metagenomic binning using k-mers.
Other applications within metagenomics include:
Recovery of reading frames from raw reads
Estimation of species abundance in metagenomic samples
Determination of which species are present in samples
Identification of biomarkers for diseases from samples
Biotechnology
Modifying k-mer frequencies in DNA sequences has been used extensively in biotechnological applications to control translational efficiency. Specifically, it has been used to both up- and down-regulate protein production rates.
With respect to increasing protein production, reducing unfavorable dinucleotide frequencies has been used to yield higher rates of protein synthesis. In addition, codon usage bias has been modified to create synonymous sequences with greater protein expression rates. Similarly, codon pair optimization, a combination of dinucleotide and codon optimization, has also been successfully used to increase expression.
The most studied application of k-mers for decreasing translational efficiency is codon-pair manipulation for attenuating viruses in order to create vaccines. Researchers were able to recode dengue virus, the virus that causes dengue fever, such that its codon-pair bias differed more from mammalian codon-usage preferences than that of the wild type. Though containing an identical amino-acid sequence, the recoded virus demonstrated significantly weakened pathogenicity while eliciting a strong immune response. This approach has also been used effectively to create an influenza vaccine as well as a vaccine for Marek's disease herpesvirus (MDV). Notably, the codon-pair bias manipulation employed to attenuate MDV did not effectively reduce the oncogenicity of the virus, highlighting a potential weakness in the biotechnology applications of this approach. To date, no codon-pair deoptimized vaccine has been approved for use.
Two later articles help explain the actual mechanism underlying codon-pair deoptimization: codon-pair bias is the result of dinucleotide bias. By studying viruses and their hosts, both sets of authors were able to conclude that the molecular mechanism that results in the attenuation of viruses is an increase in dinucleotides poorly suited for translation.
GC-content, due to its effect on DNA melting point, is used to predict annealing temperature in PCR, another important biotechnology tool.
Implementation
Pseudocode
Determining the possible k-mers of a read can be done by simply cycling over the string one position at a time and taking out each substring of length k. The pseudocode to achieve this is as follows:
procedure k-mers(string seq, integer k) is
L ← length(seq)
arr ← new array of L − k + 1 empty strings
// iterate over the number of k-mers in seq,
// storing the nth k-mer in the output array
for n ← 0 to L − k + 1 exclusive do
arr[n] ← subsequence of seq from letter n inclusive to letter n + k exclusive
return arr
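A direct Python transcription of this pseudocode (a sketch added here for illustration, together with a naive dictionary-based counter of the kind the tools in the next section implement far more efficiently) might look as follows:
from collections import Counter

def kmers(seq, k):
    # the L - k + 1 k-mers of seq, in order
    L = len(seq)
    return [seq[n:n + k] for n in range(L - k + 1)]

def kmer_counts(seq, k):
    # naive k-mer counting: k-mer -> number of occurrences
    return Counter(kmers(seq, k))

print(kmers("ATGGAAGTCGCGGAATC", 3)[:5])           # ['ATG', 'TGG', 'GGA', 'GAA', 'AAG']
print(kmer_counts("ATGGAAGTCGCGGAATC", 3)["GGA"])  # 2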
In bioinformatics pipelines
Because the number of possible k-mers grows exponentially with k, counting k-mers for large values of k (usually > 10) is a computationally demanding task. While simple implementations such as the above pseudocode work for small values of k, they need to be adapted for high-throughput applications or when k is large. To solve this problem, various tools have been developed:
Jellyfish uses a multithreaded, lock-free hash table for k-mer counting and has Python, Ruby, and Perl bindings
KMC is a tool for k-mer counting that uses a multidisk architecture for optimized speed
Gerbil uses a hash table approach but with added support for GPU acceleration
K-mer Analysis Toolkit (KAT) uses a modified version of Jellyfish to analyze k-mer counts
See also
Oligonucleotide
Genomic signature
References
Some of the content in this article was copied from K-mer at the PLOS wiki, which is available under a Creative Commons Attribution 2.5 Generic (CC BY 2.5) license.
External links
bioRxiv: k-mer
arXiv: k-mer
Nucleic acids
Applied mathematics
Biophysics
Computational biology
Bioinformatics
Amino acids | K-mer | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Biology"
] | 4,230 | [
"Biomolecules by chemical classification",
"Biological engineering",
"Applied and interdisciplinary physics",
"Applied mathematics",
"Amino acids",
"Bioinformatics",
"Biophysics",
"Computational biology",
"Nucleic acids"
] |
10,573,426 | https://en.wikipedia.org/wiki/Radio%20beacon | In navigation, a radio beacon or radiobeacon is a kind of beacon, a device that marks a fixed location and allows direction-finding equipment to find relative bearing. But instead of employing visible light, radio beacons transmit electromagnetic radiation in the radio wave band. They are used for direction-finding systems on ships, aircraft and vehicles.
Radio beacons transmit a continuous or periodic radio signal with limited information (for example, its identification or location) on a specified radio frequency. Occasionally, the beacon's transmission includes other information, such as telemetric or meteorological data.
Radio beacons have many applications, including air and sea navigation, propagation research, robotic mapping, radio-frequency identification (RFID), near-field communication (NFC) and indoor navigation, as with real-time locating systems (RTLS) like Syledis or simultaneous localization and mapping (SLAM).
Types
Radio-navigation beacons
The most basic radio-navigational aid used in aviation is the non-directional beacon or NDB. It is a simple low- and medium-frequency transmitter used to locate airway intersections and airports and to conduct instrument approaches, with the use of a radio direction finder located on the aircraft. The aviation NDBs, especially the ones marking airway intersections, are gradually being decommissioned and replaced with other navigational aids based on newer technologies. Due to relatively low purchase, maintenance and calibration cost, NDBs are still used to mark locations of smaller aerodromes and important helicopter landing sites.
Marine beacons, based on the same technology and installed in coastal areas, have also been used by ships at sea. Most of them, especially in the Western world, are no longer in service, while some have been converted to telemetry transmitters for differential GPS.
Other than dedicated radio beacons, any AM, VHF, or UHF radio station at a known location can be used as a beacon with direction-finding equipment. However, stations that are part of a single-frequency network should not be used, as in this case the direction of the minimum or the maximum can differ from the direction to the transmitter site.
ILS marker beacons
A marker beacon is a specialized beacon used in aviation, in conjunction with an instrument landing system (ILS), to give pilots a means to determine distance to the runway. Marker beacons transmit on the dedicated frequency of 75 MHz. This type of beacon is slowly being phased out, and most new ILS installations have no marker beacons.
Amateur radio propagation beacons
An amateur radio propagation beacon is specifically used to study the propagation of radio signals. Nearly all of them are part of the amateur radio service.
Single-letter high-frequency beacons
A group of radio beacons with single-letter identifiers ("C", "D", "M", "S", "P", etc.) transmitting in Morse code have been regularly reported on various high frequencies. There is no official information available about these transmitters, and they are not registered with the International Telecommunication Union. Some investigators suggest that some of these so-called "cluster beacons" are actually radio propagation beacons for naval use.
Space and satellite radio beacons
Beacons are also used in both geostationary and inclined-orbit satellites. Any satellite will emit one or more beacons (normally on a fixed frequency) whose purpose is twofold; as well as containing modulated station-keeping information (telemetry), the beacon locates the satellite (determines its azimuth and elevation) in the sky.
A beacon was left on the Moon by the crew of Apollo 17, the last Apollo mission, transmitting FSK telemetry on 2276.0 MHz.
Driftnet buoy radio beacons
Driftnet radio buoys are extensively used by fishing boats operating in open seas and oceans. They are useful for collecting long fishing lines or fishing nets, with the assistance of a radio direction finder. According to product information released by manufacturer Kato Electronics Co, Ltd., these buoys transmit on 1600–2850 kHz with a power of 4-15 W.
Some types of driftnet buoys, called "SelCall buoys", answer only when they are called by their own ships. Using this technique the buoy prevents nets and fishing gears from being carried away by other ships, while the battery power consumption remains low.
Distress radio beacons
Distress radio beacons, also collectively known as distress beacons, emergency beacons, or simply beacons, are those tracking transmitters that operate as part of the international Cospas-Sarsat Search and Rescue satellite system. When activated, these beacons send out a distress signal that, when detected by non-geostationary satellites, can be located by triangulation. In the case of 406 MHz beacons, which transmit digital signals, the beacons can be uniquely identified almost instantly (via GEOSAR), and a GPS position can be encoded into the signal (thus providing both instantaneous identification and position). Distress signals from the beacons are homed by search and rescue (SAR) aircraft and ground search parties, who can in turn come to the aid of the concerned boat, aircraft or persons.
There are three kinds of distress radio beacons:
EPIRBs (emergency position-indicating radio beacons) signal maritime distress
ELTs (emergency locator transmitters) signal aircraft distress
PLBs (personal locator beacons) are for personal use and are intended to indicate a person in distress who is away from normal emergency response capabilities (i.e. 911)
The basic purpose of distress radio beacons is to rescue people within the so-called "golden day" (the first 24 hours following a traumatic event), when the majority of survivors can still be saved.
Wi-Fi beacons
In the field of Wi-Fi (wireless local area networks using the IEEE 802.11b and 802.11g specification), the term beacon signifies a specific data transmission from the wireless access point (AP), which carries the SSID, the channel number and security protocols such as Wired Equivalent Privacy (WEP) or Wi-Fi Protected Access (WPA). This transmission does not contain the link layer address of another Wi-Fi device, therefore it can be received by any LAN client.
AX.25 packet radio beacons
Stations participating in packet radio networks based on the AX.25 link layer protocol also use beacon transmissions to identify themselves and broadcast brief information about operational status. The beacon transmissions use special UI or Unnumbered Information frames, which are not part of a connection and can be displayed by any station. Beacons in traditional AX.25 amateur packet radio networks contain free format information text, readable by human operators.
This mode of AX.25 operation, using a formal machine-readable beacon text specification developed by Bob Bruninga, WB4APR, became the basis of the APRS networks.
See also
iBeacon
Non-directional beacon
Marker beacon
Letter beacon
Radio direction finder
Direction finding
Bluetooth and Wi-Fi
Mobile phone tracking
Robotic mapping
Rebecca/Eureka transponding radar
References
Further reading
An Accurate and Cheap Navigation System for Robots , using sonar beacons.
Minimum-resource distributed navigation and mapping , using IR beacon.
Five steps to creating a Wireless Network
Community Emergency Response Team Participant Handbook (May 1994)
Navigation
Radio frequency propagation
Beacons | Radio beacon | [
"Physics"
] | 1,501 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
10,576,388 | https://en.wikipedia.org/wiki/Environmental%20impact-minimizing%20vehicle%20tuning | Environmental impact-minimizing vehicle tuning is the modification (or tuning) of cars to reduce energy consumption.
General tuning
Hybridization: change to a hybrid electric vehicle. One can use an aftermarket kit for the powertrain or use a hybrid adapter trailer.
Modifying key engine-selection parameters in the Battery Management System of a hybrid vehicle. Vehicles such as mild hybrids have a parameter for the threshold speed at which the vehicle switches from electric propulsion to the internal combustion engine. Introducing a higher threshold speed can reduce emissions and increase fuel efficiency (although it may increase strain on the battery).
Pluginization of hybrid or electric vehicles. A plug-in hybrid electric vehicle (PHEV) is a hybrid which has additional battery capacity and the ability to be recharged from an external electrical outlet. A plug-in electric vehicle is basically the same, without an extra internal combustion engine. In addition, modifications are made to the vehicle's control software. The vehicle can be used for short trips of moderate speed without needing the internal combustion engine (ICE) component of the vehicle, thereby saving fuel costs. In this mode of operation the vehicle operates as a pure battery electric vehicle with a weight penalty (the ICE). The long range and additional power of the ICE power train is available when needed.
Electric vehicle conversion. An electric vehicle conversion is the modification of a conventional internal combustion engine (ICE) driven vehicle to battery electric propulsion, creating a battery electric vehicle. In some cases the vehicle may be built by the converter, or assembled from a kit car. In some countries, the user can choose to buy a converted vehicle of any model in the automaker dealerships only paying the cost of the batteries and motor, with no installation costs (it is called preconversion or previous conversion).
Modifying the engine to run on an alternative fuel. These include natural gas conversion of gasoline-powered cars and vegetable oil conversion of diesel cars. Cars with diesel engines can be converted reasonably cheaply and easily to run on 100% vegetable oil. Vegetable oil is often cheaper and cleaner than petrodiesel, but local laws often levy harsh fines on users who fail to pay fuel taxes when acquiring their fuel outside regular distribution channels. Liquid nitrogen, hydrogen fuel conversions and ethanol conversions are other alternative fuel conversions that can be done with internal combustion engines. The first two will eliminate all vehicle emissions, while the third will only slightly decrease emissions.
Replacing the internal combustion engine of a hybrid vehicle with a hydrogen fuel cell to make the vehicle completely emissionless; even in recharging mode.
Adding a hydrogen fuel cell to a battery electric vehicle to increase its driving range.
Adding more electric batteries to a battery electric vehicle to increase driving range. Besides placing more batteries, this operation often requires additional modification of the Battery Management System.
See also
Green vehicle
Vehicle glider
References
External links
Somender Singh's groove modification to IC engines to reduce fuel consumption, Popular Science
Green vehicles
Vehicle modification
Vehicle tuning | Environmental impact-minimizing vehicle tuning | [
"Chemistry",
"Engineering"
] | 601 | [
"Environmental mitigation",
"Environmental engineering"
] |
10,577,905 | https://en.wikipedia.org/wiki/Solution%20polymerization | Solution polymerization is a method of industrial polymerization. In this procedure, a monomer is dissolved in a non-reactive solvent that contains a catalyst or initiator.
The reaction results in a polymer which is also soluble in the chosen solvent. Heat released by the reaction is absorbed by the solvent, reducing the reaction rate. Moreover, the viscosity of the reaction mixture is reduced, preventing autoacceleration at high monomer concentrations. A decrease in viscosity of the reaction mixture by dilution also aids heat transfer, one of the major issues connected with polymer production, since most polymerizations are exothermic reactions. Once the desired conversion is reached, excess solvent must be removed to obtain the pure polymer. Accordingly, solution polymerization is primarily used in applications where the presence of a solvent is desired anyway, as is the case for varnish and adhesives. Another application of polymer solutions includes the manufacture of fibers by wet or dry spinning or plastic films.
Disadvantages of solution polymerization include the decreased monomer and initiator concentrations (leading to a reduced reaction rate), lower volume utilization of the reactor, the additional process cost of solvent recycling, and the toxicity and other environmental impacts of most organic solvents. One of the major disadvantages of the solution polymerization technique is that, however inert the selected solvent may be, chain transfer to the solvent cannot be completely ruled out and, hence, it is difficult to obtain a very high molecular weight product. Among common solvents, chlorinated hydrocarbons in particular are susceptible to chain transfer in radical polymerization. The intensity of chain transfer for different compounds may be quantified using chain-transfer constants, and the decrease in the degree of polymerization may be calculated using the Mayo equation.
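For reference, the Mayo equation for chain transfer to solvent can be written in the following standard form (the equation is supplied here for illustration rather than quoted from the article; \bar{X}_n is the number-average degree of polymerization, \bar{X}_{n,0} its value in the absence of chain transfer to solvent, C_S the chain-transfer constant of the solvent, and [S] and [M] the solvent and monomer concentrations):
\frac{1}{\bar{X}_n} = \frac{1}{\bar{X}_{n,0}} + C_S \frac{[S]}{[M]}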
Industrially important polymers produced by solution polymerization
Polyacrylonitrile (PAN) is manufactured by radical polymerization in dimethylformamide (DMF), dimethyl sulfoxide (DMSO), organic carbonates, sulfuric acid, nitric acid or water solutions of inorganic salts and converted to fibers.
Polyacrylic acid (PAA) and polyacrylamide are obtained by radical polymerization in water solution and used as thickeners, adhesives or flocculants.
Acrylate and methacrylate homo- and copolymers are made by radical polymerization in toluene-acetone for coating applications.
Polyethylene (HDPE, LLDPE) - some grades are made by coordination polymerization in high-boiling hydrocarbon solvents (above the PE solution temperature). The advantage of this process is a very high propagation rate, allowing fast changes of product grades.
High cis polybutadiene (BR) is manufactured by coordination polymerization in hydrocarbons.
Solution styrene-butadiene rubber (sSBR) is produced by anionic polymerization in hydrocarbons leading to rubber with better properties for making tires than emulsion polymerization type.
Polyvinyl acetate used further for polyvinyl alcohol is manufactured by radical polymerization in methanol solution.
Liquid polybutadienes are made by anionic or radical polymerization in hydrocarbon solutions.
Butyl rubber (IIR) by low temperature cationic copolymerization of isobutylene with isoprene in ethylene or methylchloride solution.
Aromatic polyamides (e.g. Kevlar and Nomex) are made by polycondensation in N-methyl-pyrrolidone and calcium chloride solution.
This process is one of two used in the production of sodium polyacrylate, a superabsorbent polymer used in disposable diapers.
See also
Dispersion polymerization
Emulsion dispersion
Superabsorbent polymer
References
Foundations of Materials Science and Engineering, fourth edition, William F. Smith & Javad Hashem
Encyclopedia of Polymer Science and Technology, John Wiley & Sons, Interscience, New York, 4th edition, 1999–2012
Polymerization reactions
"Chemistry",
"Materials_science"
] | 850 | [
"Polymerization reactions",
"Polymer chemistry"
] |
2,585,385 | https://en.wikipedia.org/wiki/Homologous%20temperature | Homologous temperature expresses the thermodynamic temperature of a material as a fraction of the thermodynamic temperature of its melting point (i.e. using the Kelvin scale):
For example, the homologous temperature of lead at room temperature (25 °C) is approximately 0.50 (TH = T/Tmp = 298 K/601 K = 0.50).
Significance of the homologous temperature
The homologous temperature of a substance is useful for determining the rate of steady state creep (diffusion-dependent deformation). A higher homologous temperature results in an exponentially higher rate of diffusion dependent deformation.
Additionally, for a given fixed homologous temperature, two materials with different melting points would have similar diffusion-dependent deformation behaviour. For example, solder (Tmp = 456 K) at 115 °C would have comparable mechanical properties to copper (Tmp = 1358 K) at 881 °C, because they would both be at 0.85Tmp despite being at different absolute temperatures.
In electronics applications, where circuits typically operate over a −55 °C to +125 °C range, eutectic tin-lead (Sn63) solder is working at 0.48Tmp to 0.87Tmp. The upper temperature is high relative to the melting point; from this we can deduce that solder will have limited mechanical strength (as a bulk material) and significant creep under stress. This is borne out by its comparatively low values for tensile strength, shear strength and modulus of elasticity. Copper, on the other hand, has a much higher melting point, so foils are working at only 0.16Tmp to 0.29Tmp and their properties are little affected by temperature.
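The arithmetic behind these comparisons is simple enough to check directly; the short sketch below (added for illustration, using the melting points quoted in this article) reproduces the homologous temperatures mentioned above:
def homologous_temperature(t_celsius, t_melt_kelvin):
    # T_H = T / T_mp, both in kelvin
    return (t_celsius + 273.15) / t_melt_kelvin

T_MP_SOLDER_SN63 = 456.0    # K, eutectic tin-lead solder (from the text)
T_MP_COPPER = 1358.0        # K (from the text)

print(f"Lead at 25 degC:    {homologous_temperature(25, 601):.2f}")                 # ~0.50
print(f"Sn63 at -55 degC:   {homologous_temperature(-55, T_MP_SOLDER_SN63):.2f}")   # ~0.48
print(f"Sn63 at 125 degC:   {homologous_temperature(125, T_MP_SOLDER_SN63):.2f}")   # ~0.87
print(f"Copper at -55 degC: {homologous_temperature(-55, T_MP_COPPER):.2f}")        # ~0.16
print(f"Copper at 125 degC: {homologous_temperature(125, T_MP_COPPER):.2f}")        # ~0.29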
References
Scales of temperature | Homologous temperature | [
"Physics",
"Chemistry",
"Mathematics"
] | 369 | [
"Thermodynamics stubs",
"Scales of temperature",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Physical chemistry stubs"
] |
2,585,991 | https://en.wikipedia.org/wiki/Ehrenfest%20theorem | The Ehrenfest theorem, named after Austrian theoretical physicist Paul Ehrenfest, relates the time derivative of the expectation values of the position and momentum operators x and p to the expectation value of the force on a massive particle moving in a scalar potential ,
The Ehrenfest theorem is a special case of a more general relation between the expectation of any quantum mechanical operator and the expectation of the commutator of that operator with the Hamiltonian of the system
where is some quantum mechanical operator and is its expectation value.
It is most apparent in the Heisenberg picture of quantum mechanics, where it amounts to just the expectation value of the Heisenberg equation of motion. It provides mathematical support to the correspondence principle.
The reason is that Ehrenfest's theorem is closely related to Liouville's theorem of Hamiltonian mechanics, which involves the Poisson bracket instead of a commutator. Dirac's rule of thumb suggests that statements in quantum mechanics which contain a commutator correspond to statements in classical mechanics where the commutator is supplanted by a Poisson bracket multiplied by iℏ. This makes the operator expectation values obey corresponding classical equations of motion, provided the Hamiltonian is at most quadratic in the coordinates and momenta. Otherwise, the evolution equations still may hold approximately, provided fluctuations are small.
Relation to classical physics
Although, at first glance, it might appear that the Ehrenfest theorem is saying that the quantum mechanical expectation values obey Newton's classical equations of motion, this is not actually the case. If the pair (⟨x⟩, ⟨p⟩) were to satisfy Newton's second law, the right-hand side of the second equation would have to be
-V'(\langle x\rangle),
which is typically not the same as
-\langle V'(x)\rangle.
If, for example, the potential V(x) is cubic (i.e. proportional to x³), then V′ is quadratic (proportional to x²). This means, in the case of Newton's second law, the right side would be in the form of ⟨x⟩², while in the Ehrenfest theorem it is in the form of ⟨x²⟩. The difference between these two quantities is the square of the uncertainty in x and is therefore nonzero.
An exception occurs when the classical equations of motion are linear, that is, when V is quadratic and V′ is linear. In that special case, V′(⟨x⟩) and ⟨V′(x)⟩ do agree. Thus, for the case of a quantum harmonic oscillator, the expected position and expected momentum do exactly follow the classical trajectories.
For general systems, if the wave function is highly concentrated around a point x₀, then ⟨V′(x)⟩ and V′(⟨x⟩) will be almost the same, since both will be approximately equal to V′(x₀). In that case, the expected position and expected momentum will approximately follow the classical trajectories, at least for as long as the wave function remains localized in position.
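A small numerical experiment can make this correspondence concrete. The following sketch (added for illustration and not part of the article; it assumes natural units ħ = m = ω = 1, a finite-difference Hamiltonian on a truncated grid, and a displaced Gaussian initial state) checks that ⟨x⟩(t) for a harmonic oscillator follows the classical trajectory x₀ cos(ωt):
import numpy as np

hbar = m = omega = 1.0
N, L = 1024, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Hamiltonian H = p^2/(2m) + V(x) with a second-order finite-difference Laplacian
lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / dx**2
H = -(hbar**2 / (2 * m)) * lap + np.diag(0.5 * m * omega**2 * x**2)

# displaced Gaussian of ground-state width (a coherent state), normalized on the grid
x0 = 2.0
psi0 = np.exp(-0.5 * m * omega / hbar * (x - x0) ** 2)
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

# time evolution by expansion in the eigenbasis of H
evals, evecs = np.linalg.eigh(H)
c0 = evecs.T @ psi0
for t in (0.0, 1.0, 2.0, np.pi):
    psi_t = evecs @ (np.exp(-1j * evals * t / hbar) * c0)
    x_mean = np.real(np.sum(np.conj(psi_t) * x * psi_t) * dx)
    print(f"t={t:4.2f}  <x>={x_mean:+.3f}  classical={x0 * np.cos(omega * t):+.3f}")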
Derivation in the Schrödinger picture
Suppose some system is presently in a quantum state Φ. If we want to know the instantaneous time derivative of the expectation value of an operator A, then, by definition,
\frac{d}{dt}\langle A\rangle = \frac{d}{dt}\int \Phi^* A \Phi\,d^3x = \int \left(\frac{\partial\Phi^*}{\partial t}\right) A\Phi\,d^3x + \int \Phi^*\left(\frac{\partial A}{\partial t}\right)\Phi\,d^3x + \int \Phi^* A\left(\frac{\partial\Phi}{\partial t}\right)d^3x,
where we are integrating over all of space. If we apply the Schrödinger equation, we find that
\frac{\partial\Phi}{\partial t} = \frac{1}{i\hbar}H\Phi.
By taking the complex conjugate we find
\frac{\partial\Phi^*}{\partial t} = -\frac{1}{i\hbar}\Phi^* H^\dagger = -\frac{1}{i\hbar}\Phi^* H.
Note H = H†, because the Hamiltonian is Hermitian. Placing this into the above equation we have
\frac{d}{dt}\langle A\rangle = \frac{1}{i\hbar}\int \Phi^*[A,H]\Phi\,d^3x + \left\langle\frac{\partial A}{\partial t}\right\rangle = \frac{1}{i\hbar}\langle[A,H]\rangle + \left\langle\frac{\partial A}{\partial t}\right\rangle.
Often (but not always) the operator A is time-independent so that its derivative is zero and we can ignore the last term.
Derivation in the Heisenberg picture
In the Heisenberg picture, the derivation is straightforward. The Heisenberg picture moves the time dependence of the system to operators instead of state vectors. Starting with the Heisenberg equation of motion,
\frac{d}{dt}A(t) = \frac{\partial A(t)}{\partial t} + \frac{1}{i\hbar}[A(t), H],
Ehrenfest's theorem follows simply upon projecting the Heisenberg equation onto |Ψ⟩ from the right and ⟨Ψ| from the left, or taking the expectation value, so
\left\langle \frac{d}{dt}A(t)\right\rangle = \left\langle\frac{\partial A(t)}{\partial t}\right\rangle + \frac{1}{i\hbar}\langle[A(t), H]\rangle.
One may pull the d/dt out of the first term, since the state vectors are no longer time dependent in the Heisenberg picture. Therefore,
\frac{d}{dt}\langle A(t)\rangle = \left\langle\frac{\partial A(t)}{\partial t}\right\rangle + \frac{1}{i\hbar}\langle[A(t), H]\rangle.
General example
For the very general example of a massive particle moving in a potential, the Hamiltonian is simply
H = \frac{p^2}{2m} + V(x,t),
where x is the position of the particle.
Suppose we wanted to know the instantaneous change in the expectation of the momentum p. Using Ehrenfest's theorem, we have
\frac{d}{dt}\langle p\rangle = \frac{1}{i\hbar}\langle [p,H]\rangle + \left\langle\frac{\partial p}{\partial t}\right\rangle = \frac{1}{i\hbar}\langle [p,V(x,t)]\rangle,
since the operator p commutes with itself and has no time dependence. By expanding the right-hand side, replacing p by −iℏ∇, we get
\frac{d}{dt}\langle p\rangle = \int \Phi^* V(x,t)\nabla\Phi\,d^3x - \int \Phi^* \nabla\!\left(V(x,t)\Phi\right)d^3x.
After applying the product rule on the second term, we have
\frac{d}{dt}\langle p\rangle = \langle -\nabla V(x,t)\rangle = \langle F\rangle.
As explained in the introduction, this result does not say that the pair (⟨x⟩, ⟨p⟩) satisfies Newton's second law, because the right-hand side of the formula is ⟨F(x,t)⟩ rather than F(⟨x⟩, t). Nevertheless, as explained in the introduction, for states that are highly localized in space, the expected position and momentum will approximately follow classical trajectories, which may be understood as an instance of the correspondence principle.
Similarly, we can obtain the instantaneous change in the position expectation value:
\frac{d}{dt}\langle x\rangle = \frac{1}{i\hbar}\langle [x,H]\rangle = \frac{1}{i\hbar}\left\langle\left[x, \frac{p^2}{2m} + V(x,t)\right]\right\rangle = \frac{1}{i\hbar}\left\langle\left[x, \frac{p^2}{2m}\right]\right\rangle = \frac{\langle p\rangle}{m}.
This result is actually in exact accord with the classical equation.
Derivation of the Schrödinger equation from the Ehrenfest theorems
It was established above that the Ehrenfest theorems are consequences of the Schrödinger equation. However, the converse is also true: the Schrödinger equation can be inferred from the Ehrenfest theorems. We begin from
Application of the product rule leads to
Here, apply Stone's theorem, using Ĥ to denote the quantum generator of time translation. The next step is to show that this is the same as the Hamiltonian operator used in quantum mechanics. Stone's theorem implies
i\hbar\frac{d}{dt}|\Psi\rangle = \hat{H}|\Psi\rangle,
where ℏ was introduced as a normalization constant to balance dimensionality. Since these identities must be valid for any initial state, the averaging can be dropped and the system of commutator equations for Ĥ is derived:
Assuming that observables of the coordinate and momentum obey the canonical commutation relation [x̂, p̂] = iℏ, and setting Ĥ = H(x̂, p̂, t), the commutator equations can be converted into differential equations
whose solution is the familiar quantum Hamiltonian
H = \frac{\hat{p}^2}{2m} + V(\hat{x}).
Whence, the Schrödinger equation was derived from the Ehrenfest theorems by assuming the canonical commutation relation between the coordinate and momentum. If one assumes that the coordinate and momentum commute, the same computational method leads to the Koopman–von Neumann classical mechanics, which is the Hilbert space formulation of classical mechanics. Therefore, this derivation as well as the derivation of the Koopman–von Neumann mechanics, shows that the essential difference between quantum and classical mechanics reduces to the value of the commutator .
The implications of the Ehrenfest theorem for systems with classically chaotic dynamics are discussed in the Scholarpedia article Ehrenfest time and chaos. Due to the exponential instability of classical trajectories, the Ehrenfest time, on which there is a complete correspondence between quantum and classical evolution, is shown to be logarithmically short, being proportional to the logarithm of a typical quantum number. For the case of integrable dynamics this time scale is much larger, being proportional to a certain power of the quantum number.
Notes
References
Theorems in quantum mechanics
Mathematical physics | Ehrenfest theorem | [
"Physics",
"Mathematics"
] | 1,402 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Mathematical physics",
"Physics theorems"
] |
2,586,411 | https://en.wikipedia.org/wiki/Fluid%20theory%20of%20electricity | Fluid theories of electricity are outdated theories that postulated one or more electrical fluids which were thought to be responsible for many electrical phenomena in the history of electromagnetism. The "two-fluid" theory of electricity, created by Charles François de Cisternay du Fay, postulated that electricity was the interaction between two electrical 'fluids.' An alternate simpler theory was proposed by Benjamin Franklin, called the unitary, or one-fluid, theory of electricity. This theory claimed that electricity was really one fluid, which could be present in excess, or absent from a body, thus explaining its electrical charge. Franklin's theory explained how charges could be dispelled (such as those in Leyden jars) and how they could be passed through a chain of people. The fluid theories of electricity eventually became updated to include the effects of magnetism, and electrons (upon their discovery).
Fluid theories
In the 1700s many physical phenomena were thought of in terms of an aether, which was a fluid that could permeate matter. This idea had been used for centuries, and was the basis of thinking about physical phenomena, such as electricity, as liquids. Other 18th century examples of imponderable fluid models are Lavoisier's caloric and the magnetic fluids of Coulomb and Aepinus.
Two-fluid theory
By the 18th century, one of a few theories explaining observed electrical phenomena was the two-fluid theory. This theory is generally attributed to Charles François de Cisternay du Fay. du Fay's theory suggested that electricity was composed of two liquids, which could flow through solid bodies. One liquid carried a positive charge, and the other a negative charge. When these two liquids came into contact with one another, they would produce a neutral charge. This theory dealt mainly with explaining electrical attraction and repulsion, rather than how an object could be charged or discharged.
du Fay observed this while repeating an experiment created by Otto von Guericke, wherein a thin material, such as a feather or leaf, would repel a charged object after making contact with it. du Fay observed that the “leaf-gold is first attracted by the tube; and acquires an electricity by approaching it; and of consequence is immediately repell’d by it.” This seemed to confirm for du Fay that the leaf was being pushed as a ‘current’ of electricity flowed around and through it.
Through further testing, du Fay determined that an object could hold one of two types of electricity, either vitreous or resinous electricity. He found that an object with vitreous electricity would repel another vitreous object, but would be attracted to an object with resinous electricity.
Another supporter of the two-fluid theory was Christian Gottlieb Kratzenstein. He speculated also the electric charges were carried by vortices in these two fluids.
One-fluid theory
In 1746 William Watson proposed a one-fluid theory.
On 11 July 1747 Benjamin Franklin composed a letter in which he outlined his new theory. This is the first record of his theory. Franklin developed this theory mainly concentrating on the charging and discharging of bodies, as opposed to du Fay, who concentrated mainly on electrical attraction and repulsion.
Franklin's theory stated that electricity should be thought of as the movement of a single liquid, as opposed to the interaction between two liquids. A body would show signs of electricity when it held either too much, or too little of this liquid. A neutral object was therefore thought to contain a “normal” amount of this fluid. Franklin also outlined two possible states of electrification, positive and negative. He argued that a positively charged object would contain too much fluid, while a negatively charged object would contain too little fluid. Franklin was able to apply this thinking by explaining unexplained phenomena of the time, such as the Leyden jar, a basic charge storing device similar to a capacitor. He argued that the wire and inner surface became positively charged, while the outer surface became negatively charged. This caused an imbalance in fluid, and a person touching both portions of the jar allowed the fluid to flow normally.
Despite being a simpler theory, it was heavily debated whether electricity was made up of one fluid or two for a century.
Significance of the one-fluid theory
The one-fluid theory shows a significant shift in how the scientific community thought about electricity. Prior to Franklin's theory, there were many competing theories on how electricity functioned. Franklin's theory soon became the most widely accepted at the time. Franklin's theory is also notable, because it is the first theory that viewed electricity as the accumulation of 'charge' from elsewhere, rather than an excitation of the matter already present in an object.
Franklin's theory also provides the basis for conventional current, the thinking of electricity as being the movement of positive charges. Franklin arbitrarily thought of his electrical fluid as being of a positive charge, and therefore all thought was done in the frame of mind of a positive flow. This permeated the mindset of the scientific community to the point that electricity is still being thought of as the flow of positive charges, despite proof that electricity moving through metals (the most common conductor) is done by the electron, or negative particle.
Franklin was also the first person to suggest that lightning was in fact electricity. Franklin suggested that lightning was just a larger version of the small sparks that appeared between two charged objects. He therefore predicted that lightning could be shaped and directed by using a pointed conductor. This was the basis for his famous kite experiment.
Shortcomings of the theory
Although the one-fluid theory marked a significant advance in discussions of electricity, it did have some deficiencies. Franklin created the theory to explain discharges, an aspect which had been mostly ignored previously. While it explained this well, it was not able to fully explain electrical attraction and repulsion. It made sense that two objects with too much fluid would push away from each other, and why two objects with largely different amounts of fluid would pull towards each other. However, it didn't make sense that two objects with no fluid would repel each other. Too little fluid should not cause a repulsion.
Another difficulty with this model of electricity is that it ignores the interactions between electricity and magnetism. Although this relationship was not well-studied at the time, it was known that there was some connection between the two phenomena. Franklin's model makes no reference to these forces, and makes no attempt to explain them.
Although fluid theory was the predominant viewpoint for a time, it was eventually replaced by more modern theories, specifically one which used observations about attractions between current-carrying wires to describe the magnetic effects between them.
Connections to magnetism
Neither du Fay nor Franklin described the effects of magnetism in their theories, with both concerning themselves only with electrical effects. However, theories on magnetism followed a very similar pattern as those on electricity. Charles Coulomb described magnets as containing two magnetic fluids, aural and boreal, which could combine to describe magnetic attraction and repulsion. The related one-fluid theory for magnetism was proposed by Franz Aepinus, who described magnets as containing too much or too little magnetic fluid.
These theories of electricity and magnetism were thought of as two separate phenomena, until Hans Christian Ørsted noticed that a compass needle would deflect from magnetic north when placed near an electric current. This caused him to develop theories that electricity and magnetism were interrelated and could affect one another. Ørsted's work was the basis for a theory by French physicist André-Marie Ampère, which unified the relation between magnetism and electricity.
See also
General
Contact tension
Hydraulic analogy
Imponderable fluid
Histories
History of the electric charge
History of electrochemistry
References
External links
A letter from Charles-François de Cisternay Du Fay concerning electricity. , Phil. Trans. 38 (1734) 258-266
History of electricity. Both kinds of electricity. Attraction and repulsion. The Dufay's law.
Electricity
Obsolete theories in physics | Fluid theory of electricity | [
"Physics"
] | 1,639 | [
"Theoretical physics",
"Obsolete theories in physics"
] |
2,588,054 | https://en.wikipedia.org/wiki/Biomechatronics | Bio-mechatronics is an applied interdisciplinary science that aims to integrate biology and mechatronics (electrical, electronics, and mechanical engineering). It also encompasses the fields of robotics and neuroscience. Biomechatronic devices cover a wide range of applications, from developing prosthetic limbs to engineering solutions concerning respiration, vision, and the cardiovascular system.
How it works
Bio-mechatronics mimics how the human body works. For example, four different steps must occur to lift the foot to walk. First, impulses from the brain's motor center are sent to the foot and leg muscles. Next, the nerve cells in the feet send information, providing feedback to the brain, enabling it to adjust the muscle groups or amount of force required to walk across the ground. Different amounts of energy are applied depending on the type of surface being walked across. The leg's muscle spindle nerve cells then sense and send the position of the floor back up to the brain. Finally, when the foot is raised to step, signals are sent to muscles in the leg and foot to set it down.
Biosensors
Biosensors detect what the user wants to do or their intentions and motions. In some devices, the information is relayed by the user's nervous system or muscles. This information is relayed by the biosensor to a controller, which can be located inside or outside the biomechatronic device. In addition, biosensors receive information about the limb position and force from the limb and actuator. Biosensors come in a variety of forms. They can be wires which detect electrical activity, needle electrodes implanted in muscles, and electrode arrays with nerves growing through them.
Electromechanical sensors
The purpose of the mechanical sensors is to measure information about the biomechatronic device and relate that information to the biosensor or controller.
Additionally, many sensors are being used at schools, such as Case Western Reserve University, the University of Pittsburgh, Johns Hopkins University, among others, with the goal of recording physical stimuli and converting them to neural signals for a subarea of bio-mechatronics called neuro-mechatronics.
Controller
The controller in a biomechatronic device relays the user's intentions to the actuators. It also interprets feedback information to the user that comes from the biosensors and mechanical sensors. The other function of the controller is to control the biomechatronic device's movements.
Actuator
The actuator can be an artificial muscle but it can be any part of the system which provides an outward effect based on the control input. For a mechanical actuator, its job is to produce force and movement. Depending on whether the device is orthotic or prosthetic the actuator can be a motor that assists or replaces the user's original muscle. Many such systems actually involve multiple actuators.
Research
Bio-mechatronics is a rapidly growing field but as of now there are very few labs which conduct research. The Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago), University of California at Berkeley, MIT, Stanford University, and University of Twente in the Netherlands are the research leaders in bio-mechatronics. Three main areas are emphasized in current research.
Analyzing human motions, which are complex, to aid in the design of biomechatronic devices
Studying how electronic devices can be interfaced with the nervous system.
Testing the ways to use living muscle tissue as actuators for electronic devices
Analyzing motions
A great deal of analysis over human motion is needed because human movement is very complex. MIT and the University of Twente are both working to analyze these movements. They are doing this through a combination of computer models, camera systems, and electromyograms.
Neural Interfacing
Interfacing allows bio-mechatronic devices to connect with the muscle systems and nerves of the user in order to send and receive information from the device. This is a technology that is not available in ordinary orthotics and prosthetics devices. Groups at the University of Twente and University of Malaya are making rapid progress in this area. Scientists there have developed a device which will help to treat paralysis and stroke victims who are unable to control their foot while walking. The researchers are also nearing a breakthrough which would allow a person with an amputated leg to control their prosthetic leg through their stump muscles.
Researchers at MIT have developed a tool called the MYO-AMI system which allows for proprioceptive feedback (position sensing) in the lower extremity (legs, transtibial). Still others focus on interfacing for the upper extremity (Functional Neural Interface Lab, CWRU). There are both CNS and PNS approaches further subdivided into brain, spinal cord, dorsal root ganglion, spinal/cranial nerve, and end effector techniques and some purely surgical techniques with no device component (see Targeted Muscle Reinnervation).
MIT research
Hugh Herr is the leading biomechatronic scientist at MIT. Herr and his group of researchers are developing a sieve integrated circuit electrode and prosthetic devices that are coming closer to mimicking real human movement. The two prosthetic devices currently in the making will control knee movement and the other will control the stiffness of an ankle joint.
Robotic fish
As mentioned before Herr and his colleagues made a robotic fish that was propelled by living muscle tissue taken from frog legs. The robotic fish was a prototype of a biomechatronic device with a living actuator. The following characteristics were given to the fish.
A styrofoam float so the fish can float
Electrical wires for connections
A silicone tail that enables force while swimming
Power provided by lithium batteries
A microcontroller to control movement
An infrared sensor enables the microcontroller to communicate with a handheld device
Muscles stimulated by an electronic unit
Arts research
New media artists at UCSD are using bio-mechatronics in performance art pieces, such as Technesexual (more information, photos, video), a performance which uses biometric sensors to bridge the performers' real bodies to their Second Life avatars and Slapshock (more information, photos, video), in which medical TENS units are used to explore intersubjective symbiosis in intimate relationships.
Growth
The demand for biomechatronic devices are at an all-time high and show no signs of slowing down. With increasing technological advancement in recent years, biomechatronic researchers have been able to construct prosthetic limbs that are capable of replicating the functionality of human appendages. Such devices include the "i-limb", developed by prosthetic company Touch Bionics, the first fully functioning prosthetic hand with articulating joints, as well as Herr's PowerFoot BiOM, the first prosthetic leg capable of simulating muscle and tendon processes within the human body. Biomechatronic research has also helped further research towards understanding human functions. Researchers from Carnegie Mellon and North Carolina State have created an exoskeleton that decreases the metabolic cost of walking by around 7 percent.
Many biomechatronic researchers are closely collaborating with military organizations. The US Department of Veterans Affairs and the Department of Defense are giving funds to different labs to help soldiers and war veterans.
Despite the demand, however, biomechatronic technologies struggle within the healthcare market due to high costs and lack of implementation into insurance policies. Herr claims that Medicare and Medicaid specifically are important "market-breakers or market-makers for all these technologies," and that the technologies will not be available to everyone until the technologies get a breakthrough. Biomechatronic devices, although improved, also still face mechanical obstructions, suffering from inadequate battery power, consistent mechanical reliability, and neural connections between prosthetics and the human body.
See also
Artificial cardiac pacemaker
Artificial muscle
Biomechanics
Biomedical engineering
Bionics
Brain–computer interface
Cybernetics
Cyberware
Gerontechnology
Mechatronics
Neural engineering
Neuroprosthetics
Orthotics
Prosthetics
Notes
External links
Biomechatronics lab at MIT
Biomechatronics lab at the Rehabilitation Institute of Chicago
Biomechatronics lab at University of Twente
Experimental Biomechatronics Lab at Carnegie Mellon University
Laboratory for Biomechatronics at the University of Lübeck
Biomechatronics laboratory at Imperial College London
Laboratory for Biomechatronics at the Technische Universität Ilmenau
Electromechanical engineering
Health care robotics | Biomechatronics | [
"Engineering"
] | 1,784 | [
"Electrical engineering",
"Electromechanical engineering",
"Mechanical engineering by discipline"
] |
2,588,316 | https://en.wikipedia.org/wiki/ETFE | Ethylene tetrafluoroethylene (ETFE) is a fluorine-based plastic. It was designed to have high corrosion resistance and strength over a wide temperature range. ETFE is a polymer and its source-based name is poly (ethene-co-tetrafluoroethene). It is also known under the DuPont brand name Tefzel and is sometimes referred to as 'Teflon Film'. ETFE has a relatively high melting temperature and excellent chemical, electrical and high-energy radiation resistance properties.
Properties
Useful comparison tables of PTFE against FEP, PFA and ETFE can be found on Chemours' website, listing the mechanical, thermal, chemical and electrical properties of each, side by side. ETFE is effectively the high-strength version of the other three in this group.
ETFE film is self-cleaning (due to its nonstick properties) and recyclable. As a film for roofing it can be stretched and still be taut if some variation in size, such as that caused by thermal expansion, were to occur. Employing heat welding, tears can be repaired with a patch or multiple sheets assembled into larger panels.
ETFE has an approximate tensile strength of 42 MPa (6100 psi), with a working temperature range of 89 K to 423 K (−185 °C to +150 °C, or −300 °F to +300 °F).
ETFE resins are resistant to ultraviolet light. An artificial weathering test (comparable to 30 years’ exposure) produced no filtering and almost no signs of film deterioration.
ETFE systems can control light transmission through the application of plasma coatings, varnishes or printed frit patterns. Thermal and acoustic insulation can be incorporated into an ETFE structure via the use of multi-layer systems which use low-pressure air pumps to create ETFE "cushions". For instance, typical U-values for single-, double- and triple-layer ETFE systems are approximately 5.6, 2.5 and 1.9 W/(m²·K) respectively, while the g-value (solar heat gain coefficient, SHGC) of an ETFE cushion can vary between about 0.2 and 0.95 depending on the frits used.
Applications
ETFE was developed by DuPont in the 1970s, initially as a lightweight, heat-resistant film for the aerospace industry. After its development it was used only infrequently, mainly in agricultural and architectural projects. ETFE's first large-scale architectural use came in 2001 at the Eden Project, where ETFE was selected because it can be printed and layered to control solar conditions and because it was found to have a low friction coefficient, which saves on maintenance as dust and dirt do not stick.
An example of its use is as pneumatic panels to cover the outside of the football stadium Allianz Arena or the Beijing National Aquatics Centre (a.k.a. the Water Cube of the 2008 Olympics) – the world's largest structure made of ETFE film (laminate). The panels of the Eden Project are also made from ETFE, and the Tropical Islands have a 20,000 m2 window made from this translucent material.
Another key use of ETFE is for the covering of electrical and fiber-optic wiring used in high-stress, low-fume-toxicity and high-reliability situations. Aircraft, spacecraft and motorsport wiring are primary examples. Some small cross-section wires like the wire used for the wire-wrap technique are coated with ETFE.
As a dual laminate, ETFE can be bonded with FRP as a thermoplastic liner and used in pipes, tanks, and vessels for additional corrosion protection.
ETFE is commonly used in the nuclear industry for tie or cable wraps and in the aviation and aerospace industries for wire coatings. This is because ETFE has better mechanical toughness than PTFE. In addition, ETFE exhibits a high-energy radiation resistance and can withstand moderately high temperatures for a long period.
Commercially deployed brand names of ETFE include Tefzel by DuPont, Fluon by Asahi Glass Company, Neoflon ETFE by Daikin, and Texlon by Vector Foiltec. Sumitomo Electric developed an aluminium-ETFE composite marketed as . Additionally, the commercial use of architectural ETFE as a skylight or façade material has become popular worldwide, not only in Europe but also in the Middle East, where many shopping malls and sports and cultural venues have used ETFE; a recent example is a large greenhouse park development in Abu Dhabi (Mawasem Park – Green House – Abu Dhabi House) carried out with the ETFE contractor Fabrix360.
Due to its high temperature resistance ETFE is also used in film mode as a mold-release film. ETFE film offered by Guarniflon or Airtech International and Honeywell is used in aerospace applications such as carbon fiber pre-preg curing as a release film for molds or hot high-pressure plates.
Notable buildings
Notable buildings and designs using ETFE as a significant architectural element:
Allianz Arena, Munich, Germany
Beijing National Aquatics Centre, (the Water Cube) Beijing, China
Eden Project, Cornwall, United Kingdom
Khan Shatyr Entertainment Center, Astana, Kazakhstan
National Space Centre, Leicester, United Kingdom
Cuauhtémoc Stadium, Puebla, Mexico
Midland Metropolitan University Hospital, Smethwick, Birmingham, United Kingdom
U.S. Bank Stadium, Minneapolis, Minnesota, United States
SoFi Stadium. Inglewood, California, United States
Allegiant Stadium, Las Vegas, Nevada, United States
Hard Rock Stadium, Miami Gardens, Florida, United States
Banc of California Stadium. Los Angeles, California, United States
Avenues Phase-III, Al-Rai, Kuwait
Avenues Phase IV & IVB, Al-Rai, Kuwait
Dworzec Tramwajowy Centrum, tram station in Łódź, Poland
Solaris, Clamart, France
Discovery College, Lantau Island, Hong Kong
Green 18, Hong Kong Science Park, Hong Kong
Pavilion, Alnwick Castle, Alnwick, United Kingdom
BC Place, Vancouver, Canada,
River Culture Pavillon The ARC, Daegu, South Korea
Munich's municipal waste management department, Munich, Germany (2011)
Beijing National Stadium, Beijing, China
FestiveWalk, Resorts World at Sentosa, Singapore
Dolce Vita Tejo Shopping Centre, Amadora, Lisbon, Portugal
roof, dedicated underground rail station at the Heathrow Airport Terminal 5, London, United Kingdom
Manchester Victoria station concourse, Manchester, United Kingdom
Forsyth Barr Stadium at University Plaza, Dunedin, New Zealand
Islazul Shopping Centre, Madrid, Spain
Kansas City Power & Light District, Kansas City, Missouri, United States
South Campus skylight structures, Art Center College of Design, Pasadena, California, United States
Tanaka Business School, London, United Kingdom
The Shed (Hudson Yards), Manhattan, New York, United States
Tropical Islands, Brandenburg, Germany
Barnsley Interchange, Barnsley, United Kingdom
The Mall Athens, Athens, Greece
Newport railway station, Newport, United Kingdom
The Elements, Livingston, United Kingdom
Experimental Media and Performing Arts Center, Rensselaer Polytechnic Institute, Troy, New York, United States
Arena Pernambuco, Recife, Brazil
Sandton City, Sandton, South Africa
Key West Shopping Centre, Krugersdorp, South Africa
Oceanus Casino, Macau, Special Administrative Region of China
Masdar City, Abu Dhabi, United Arab Emirates
ISS Building, Lancaster University
Empire City Casino, Yonkers, New York, United States
Institute of Technical Education, Singapore (2012)
The SSE Hydro, Glasgow, Scotland
Anaheim Regional Transportation Intermodal Center, California (12-13-14)
National Stadium, Singapore
Orto Botanico di Padova Biodiversity Garden roof, Padua, Italy
Guangzhou South Railway Station, China
Yujiapu Railway Station, China
Persian Garden, Iran Mall, Tehran, Iran
Anoeta Stadium, San Sebastián, Spain
Ed Kaplan Family Institute for Innovation and Tech Entrepreneurship, Illinois Institute of Technology, Chicago, Illinois
Mercedes-Benz Stadium, Atlanta, Georgia (2017)
US Embassy ETFE Facade, London, UK (2017)
Czech Institute of Informatics, Robotics and Cybernetics, Prague, Czech Republic (2017)
Tron Lightcycle Power Run, Shanghai Disneyland, China (2017)
UQ Global Change Institute Living Building, Brisbane, Australia (2013)
Lakhta Center, Saint Petersburg, Russia (2019)
Haneda Airport Terminal 2, International Flight Facilities, Tokyo, Japan (2020)
Macquarie University Arts Precinct ETFE Roof, Sydney, Australia (2020)
Hayward Field, University of Oregon, Eugene, Oregon (2020)
Rhodes Central Commercial Development, Sydney, Australia (2021)
Mawasem Park – Green House, Abu Dhabi, United Arab Emirates (2022)
Under construction
Jungle Exhibit, Sedgwick County Zoo, Wichita, Kansas (2015), United States
West End Stadium, Cincinnati, Ohio, United States
Dockside Pavilion, Sydney, Australia (2014)
Baku Olympic Stadium, Baku, Azerbaijan (2015)
Australian Embassy, Jakarta, Indonesia (2014)
Wharf Retail, Bluewaters Island, Dubai, United_Arab_Emirates (2016)
Carlisle Railway Station, Carlisle, United Kingdom (2017)
Oxigeno, San Francisco, Heredia Province, Costa Rica
Jakarta International Stadium, Jakarta, Indonesia
Primark Birmingham, United Kingdom
Boston Logan Airport (terminal E), Boston, Massachusetts, United States (2021–2022)
Tron Lightcycle Power Run, Walt Disney World, United States (2022)
One New Zealand Stadium (Te Kaha), Christchurch, New Zealand
References
External links
ETFE Resins – Chemours Teflon
Building materials
Copolymers
Fluoropolymers
Plastics
Thermoplastics | ETFE | [
"Physics",
"Engineering"
] | 1,976 | [
"Building engineering",
"Unsolved problems in physics",
"Plastics",
"Architecture",
"Construction",
"Materials",
"Amorphous solids",
"Matter",
"Building materials"
] |
2,589,104 | https://en.wikipedia.org/wiki/Bischler%E2%80%93Napieralski%20reaction | The Bischler–Napieralski reaction is an intramolecular electrophilic aromatic substitution reaction that allows for the cyclization of β-arylethylamides or β-arylethylcarbamates. It was first discovered in 1893 by August Bischler and , in affiliation with Basel Chemical Works and the University of Zurich. The reaction is most notably used in the synthesis of dihydroisoquinolines, which can be subsequently oxidized to isoquinolines.
Mechanisms
Two types of mechanisms have appeared in the literature for the Bischler–Napieralski reaction. Mechanism I involves a dichlorophosphoryl imine-ester intermediate, while Mechanism II involves a nitrilium ion intermediate (both shown in brackets). This mechanistic variance stems from the ambiguity over the timing for the elimination of the carbonyl oxygen in the starting amide.
In Mechanism I, the elimination occurs with imine formation after cyclization; while in Mechanism II, the elimination yields the nitrilium intermediate prior to cyclization. Currently, it is believed that reaction conditions affect the prevalence of one mechanism over the other (see reaction conditions).
In certain literature, Mechanism II is augmented with the formation of an imidoyl chloride intermediate produced by the substitution of chloride for the Lewis acid group just prior to the nitrilium ion.
Because the dihydroisoquinoline nitrogen is basic, neutralization is necessary to obtain the deprotonated product.
General reaction reagents and conditions
The Bischler–Napieralski reaction is carried out in refluxing acidic conditions and requires a dehydrating agent. Phosphoryl chloride (POCl3) is widely used and cited for this purpose. Additionally, SnCl4 and BF3 etherate have been used with phenethylamides, while Tf2O and polyphosphoric acid (PPA) have been used with phenethylcarbamates. For reactants lacking electron-donating groups on the benzene ring, phosphorus pentoxide (P2O5) in refluxing POCl3 is most effective. Depending on the dehydrating reagent used, the reaction temperature varies from room temperature to 100 °C.
Related reactions
Several reactions that are related to the Bischler–Napieralski reaction are known. In the Morgan–Walls reaction, the linker between the aromatic ring and the amide nitrogen is an ortho-substituted aromatic ring. This N-acyl 2-aminobiphenyl cyclizes to form a phenanthridine. The Pictet–Spengler reaction proceeds from a β-arylamine via condensation with an aldehyde. These two components form an imine, which then cyclizes to form a tetrahydroisoquinoline.
Pictet–Gams reaction
The Pictet–Gams reaction proceeds from an β-hydroxy-β-phenethylamide. It involves an additional dehydration under the same conditions as the cyclization, giving an isoquinoline. As with the Bischler–Napieralski reaction, the Pictet–Gams reaction requires a strongly dehydrating Lewis acid, such as phosphoryl chloride or phosphorus pentoxide.
Structural effects and alternate products
There are documented variations on the Bischler–Napieralski reaction whose products differ by virtue of the structure of the initial reactant, the tailoring of reaction conditions, or both. For example, research by Doi and colleagues suggests that the presence or absence of electron-donating groups on the aryl portion of β-arylethylamides, together with the ratio of dehydrating reagents, influences the pattern of ring closure via electrophilic aromatic substitution, leading to two possible products (see below). Other research on variations of the Bischler–Napieralski reaction has investigated the effects of nitro and acetal aryl groups on product formation (see references).
See also
Pomeranz–Fritsch reaction
References
Nitrogen heterocycle forming reactions
Heterocycle forming reactions
Name reactions | Bischler–Napieralski reaction | [
"Chemistry"
] | 866 | [
"Name reactions",
"Ring forming reactions",
"Heterocycle forming reactions",
"Organic reactions"
] |
2,589,204 | https://en.wikipedia.org/wiki/Halogenated%20ether | Halogenated ethers are a subcategory of ethers—organic chemicals that contain an oxygen atom connected to two alkyl groups or similar structures. An example of an ether is the solvent diethyl ether. Halogenated ethers differ from other ethers because they contain one or more halogen atoms—fluorine, chlorine, bromine, or iodine—as substituents on the carbon groups. Examples of commonly used halogenated ethers include isoflurane, sevoflurane and desflurane.
History
By 1950, an ideal inhaled anesthetic had still not been found. Before then, volatile substances such as diethyl ether, which carries a severe risk of nausea, were used. Diethyl ether has the further disadvantage of being extremely flammable, especially in the presence of enriched oxygen mixtures.
James Young Simpson, an obstetrics surgeon, used ethers to help women relieve their labor pains but ultimately deemed them unsuitable due to their drawbacks. Simpson and his friends tested the halogenated hydrocarbon, chloroform, as a substitute inhalation agent during a house party. They woke from unconsciousness pleasantly surprised with its effectiveness. This was the first recorded successful use of halogenated hydrocarbons as anesthetics.
Applications
Anesthesia
Inhaled agents like diethyl ether are critical in anesthesia. Diethyl ether initially replaced non-flammable (but more toxic) halogenated hydrocarbons like chloroform and trichloroethylene. Halothane is a halogenated hydrocarbon anesthetic agent that was introduced into clinical practice in 1956. Due to its ease of use and improved safety profile with respect to organ toxicity, halothane quickly replaced chloroform and trichloroethylene.
Anesthesia practice improved significantly from the late 1950s onward with the introduction of halogenated ethers, later including enflurane, isoflurane, and sevoflurane. Since its introduction in the 1980s, isoflurane has been widely used due to its decreased risk of hepatotoxicity and better hemodynamic stability when compared to halothane. The 1990s saw the development of sevoflurane, which was especially helpful in pediatric anesthesia because it provided even faster induction and recovery profiles.
All inhalation anesthetics in current clinical use are halogenated ethers, except for halothane (which is a halogenated hydrocarbon or haloalkane), nitrous oxide, and xenon.
Inhalation anesthetics are vaporized and mixed with other gases prior to their inhalation by the patient before or during surgery. These other gases always include oxygen or air, but may also include other gases such as nitrous oxide or helium. In most surgical situations, other drugs such as opiates are used for pain and skeletal muscle relaxants are used to cause temporary paralysis. Additional drugs such as midazolam may be used to produce amnesia during surgery. Although newer intravenous anesthetics (such as propofol) have increased the options of anesthesiologists, halogenated ethers remain a mainstay of general anesthesia.
Polymers
Perfluorinated epoxides can be used as comonomers for the production of polytetrafluoroethylene (PTFE).
Perfluorinated epoxides are a class of epoxides where all the hydrogen atoms on a carbon chain are replaced with fluorine atoms. The fluorine ensures compatibility with PTFE, while the epoxy group enables chemical bonding during polymerization. When used as comonomers, they can alter the microstructure of PTFE, reducing crystallinity and improving flexibility and toughness. This makes the polymer more suitable for applications like seals and gaskets, which require resilience under stress. Furthermore, perfluorinated epoxides enable the tailoring of specific functional properties, such as low surface energy, which is essential for applications requiring non-stick or low-friction surfaces.
Flame Retardant
Halogenated ethers play a significant role in enhancing the thermal stability and fire resistance of polymers. When applied to materials, they are effective in preventing items from catching fire because of the chemical's resistance to decomposition and effective flame suppression properties.
Most halogenated ethers contain bromine or chlorine. Brominated compounds are particularly effective because they release bromine radicals when exposed to heat. These radicals interrupt the combustion process by reacting with free radicals in the flame, thereby suppressing fire propagation. Chlorinated ethers can also function similarly by releasing chlorine radicals. Both types of halogens contribute to the flame-retardant properties, but brominated ethers are often favoured for their higher efficiency and lower required concentrations compared to their chlorinated counterparts.
Decabromodiphenyl ether (deca-BDE), a type of polybrominated diphenyl ether (PBDE), is a brominated flame retardant. It was widely used in polystyrene, acrylonitrile butadiene styrene (ABS), flexible polyurethane foam, textile coatings, wire/cable insulation, electrical connectors, and other interior parts. Decabromodiphenyl ether is one of many halogenated flame retardants that are now heavily regulated or banned in many regions because of bioaccumulation and potential toxicity hazards. Most industries are now transitioning to alternative, less hazardous flame retardants. However, because of the widespread use of these chemicals in many products, it is anticipated that they will continue to persist in the environment.
Tetrabromobisphenol A bis(2,3-dibromopropyl) ether (TBBPA-DBPE) is another type of brominated flame retardant. It is widely used in electronic casings and circuit boards due to its high efficiency in reducing flammability. TBBPA-DBPE is also a flame retardant in plastics, paper, and textiles, and as plasticizer in adhesives and coatings.
Common halogenated ethers
Toxicology
Respiratory Depression
Halogenated ethers can cause respiratory depression by reducing the body's response to carbon dioxide and hypoxia, which affects breathing rates and depth. Some, like desflurane and isoflurane, are also known for causing airway irritation. This can cause coughing, breath-holding, or laryngospasm, particularly during inhalational induction of anesthesia. Sevoflurane has minimal airway irritation and is generally preferred for induction, particularly in children or those with sensitive airways.
Environmental Impact
Greenhouse Gas Emissions
Halogenated ethers are greenhouse gases and contribute to global warming. Compounds like desflurane and isoflurane have high global warming potentials (GWP), which measure their heat-trapping abilities relative to carbon dioxide (CO₂). The GWP of a halogenated anesthetic is up to 2,000 times greater than CO₂. The use of these anesthetics in healthcare is a significant contributor to hospital-related greenhouse gas emissions. There is a growing focus on identifying lower-GWP alternatives or enhancing recovery and recycling technologies for anesthetic gases.
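As a rough illustration of what such GWP figures mean in practice, the sketch below converts an emitted mass of anesthetic vapour into a CO2-equivalent mass. The GWP values used are assumptions chosen only to demonstrate the arithmetic (the high-end value follows the "up to 2,000" figure quoted above); consult current literature for authoritative numbers.

```python
# Illustrative only: convert an emitted mass of anesthetic vapour into a
# CO2-equivalent mass using 100-year global warming potentials (GWP100).
# The GWP values below are assumptions for demonstration, not reference data.
ASSUMED_GWP100 = {
    "desflurane": 2000,   # assumed high-GWP example, per the "up to 2,000" figure
    "isoflurane": 500,    # assumed
    "sevoflurane": 130,   # assumed
}

def co2_equivalent_kg(agent: str, emitted_kg: float) -> float:
    """Return the CO2-equivalent mass (kg) for an emitted mass of agent."""
    return emitted_kg * ASSUMED_GWP100[agent]

if __name__ == "__main__":
    for agent in ASSUMED_GWP100:
        # e.g. 0.05 kg of vapour vented during one procedure (hypothetical)
        print(agent, round(co2_equivalent_kg(agent, 0.05), 1), "kg CO2e")
```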
Persistence and Bioaccumulation
Halogenated ethers can persist in the atmosphere for years due to their stability. These compounds do not readily degrade and thus remain in circulation long after their release, adding to the atmospheric burden of greenhouse gases. They are generally not bioaccumulative, owing to their high volatility and low tendency to dissolve in water or adhere to biological tissues, but the persistence of these compounds raises concerns about long-term environmental effects. This is especially concerning in areas surrounding healthcare facilities, where they may be routinely released.
See also
Anesthesia
Ether
Halogen
Halogenation
Hydrocarbon
References
General anesthetics
Ethers
Organohalides
GABAA receptor positive allosteric modulators
NMDA receptor antagonists | Halogenated ether | [
"Chemistry"
] | 1,687 | [
"Organic compounds",
"Functional groups",
"Organohalides",
"Ethers"
] |
6,213,976 | https://en.wikipedia.org/wiki/Elongation%20factor | Elongation factors are a set of proteins that function at the ribosome, during protein synthesis, to facilitate translational elongation from the formation of the first to the last peptide bond of a growing polypeptide. Most common elongation factors in prokaryotes are EF-Tu, EF-Ts, EF-G. Bacteria and eukaryotes use elongation factors that are largely homologous to each other, but with distinct structures and different research nomenclatures.
Elongation is the most rapid step in translation. In bacteria, it proceeds at a rate of 15 to 20 amino acids added per second (about 45–60 nucleotides read per second). In eukaryotes the rate is about two amino acids per second (about 6 nucleotides read per second). Elongation factors play a role in orchestrating the events of this process, and in ensuring that translation remains highly accurate at these speeds.
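As a back-of-the-envelope illustration of these rates, the snippet below estimates how long elongation of a single polypeptide takes at the quoted speeds. The protein length is an arbitrary example, not a figure from the article.

```python
# Rough illustration: time to elongate a 300-residue polypeptide at the
# elongation rates quoted above. The protein length is an arbitrary example.
protein_length = 300          # residues (hypothetical example)
bacterial_rate = 17.5         # amino acids per second (midpoint of 15-20)
eukaryotic_rate = 2.0         # amino acids per second

print(f"Bacteria:   ~{protein_length / bacterial_rate:.0f} s")
print(f"Eukaryotes: ~{protein_length / eukaryotic_rate:.0f} s")
```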
Nomenclature of homologous EFs
In addition to their cytoplasmic machinery, eukaryotic mitochondria and plastids have their own translation machinery, each with their own set of bacterial-type elongation factors. In humans, they include TUFM, TSFM, GFM1, GFM2, GUF1; the nominal release factor MTRFR may also play a role in elongation.
In bacteria, selenocysteinyl-tRNA requires a special elongation factor SelB () related to EF-Tu. A few homologs are also found in archaea, but the functions are unknown.
As a target
Elongation factors are targets for the toxins of some pathogens. For instance, Corynebacterium diphtheriae produces diphtheria toxin, which alters protein function in the host by inactivating elongation factor (EF-2). This results in the pathology and symptoms associated with diphtheria. Likewise, Pseudomonas aeruginosa exotoxin A inactivates EF-2.
References
Further reading
Alberts, B. et al. (2002). Molecular Biology of the Cell, 4th ed. New York: Garland Science. .
Berg, J. M. et al. (2002). Biochemistry, 5th ed. New York: W.H. Freeman and Company. .
Singh, B. D. (2002). Fundamentals of Genetics, New Delhi, India: Kalyani Publishers. .
External links
nobelprize.org Explaining the function of eukaryotic elongation factors
Protein biosynthesis | Elongation factor | [
"Chemistry"
] | 545 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
6,214,158 | https://en.wikipedia.org/wiki/Tungsten%20nitride | Tungsten nitride (W2N, WN, WN2) is an inorganic compound, a nitride of tungsten. It is a hard, solid, brown-colored ceramic material that is electrically conductive and decomposes in water.
It is used in microelectronics as a contact material, for conductive layers, and barrier layers between silicon and other metals, e.g. tungsten or copper. It is less commonly used than titanium nitride or tungsten films.
Tungsten nitride forms together with tungsten dioxide, tungsten trioxide, and tungsten pentoxide when an incandescent light bulb breaks while the filament is heated.
Tungsten silicide is another material with similar use.
References
Tungsten compounds
Nitrides
Ceramic materials | Tungsten nitride | [
"Physics",
"Engineering"
] | 168 | [
"Materials stubs",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
6,214,840 | https://en.wikipedia.org/wiki/Nuclear%20graphite | Nuclear graphite is any grade of graphite, usually synthetic graphite, manufactured for use as a moderator or reflector within a nuclear reactor. Graphite is an important material for the construction of both historical and modern nuclear reactors because of its extreme purity and ability to withstand extremely high temperatures.
History
Nuclear fission, the creation of a nuclear chain reaction in uranium, was discovered in 1939 following experiments by Otto Hahn and Fritz Strassman, and the interpretation of their results by physicists such as Lise Meitner and Otto Frisch. Shortly thereafter, word of the discovery spread throughout the international physics community.
In order for the fission process to chain react, the neutrons created by uranium fission must be slowed down by interacting with a neutron moderator (an element with a low atomic weight off which neutrons "bounce") before they can be captured by other uranium atoms. By late 1939, it was generally known that heavy water might be used as a moderator. The highest-purity graphite then commercially available (so-called electro-graphite) was dismissed by the Germans and the British as a possible moderator because it contained boron and cadmium impurities. However, graphite of high enough purity was developed in the early 1940s in the United States, and it was then used in the first and subsequent nuclear reactors for the Manhattan Project.
In February 1940, using funds that were allocated partly as a result of the Einstein-Szilard letter to President Roosevelt, Leo Szilard purchased several tons of graphite from the Speer Carbon Company and from the National Carbon Company (the National Carbon Division of the Union Carbide and Carbon Corporation in Cleveland, Ohio) for use in Enrico Fermi's first fission experiments, the so-called exponential pile. Fermi writes that "The results of this experiment was [sic] somewhat discouraging" presumably because of the absorption of neutrons by some unknown impurity. So, in December 1940 Fermi and Szilard met with Herbert G. MacPherson and V. C. Hamister at National Carbon to discuss the possible existence of impurities in graphite. During this conversation it became clear that minute quantities of boron impurities were the source of the problem.
As a result of this meeting, over the next two years, MacPherson and Hamister developed thermal and gas extraction purification techniques at National Carbon for the production of boron-free graphite. The resulting product was designated AGOT Graphite ("Acheson Graphite Ordinary Temperature") by National Carbon, and it was "the first true nuclear grade graphite".
During this period, Fermi and Szilard purchased graphite from several manufacturers with various degrees of neutron absorption cross section: AGX graphite from National Carbon Company with 6.68 mb (millibarns) cross section, US graphite from United States Graphite Company with 6.38 mb cross section, Speer graphite from the Speer Carbon Company with 5.51 mb cross section, and when it became available, AGOT graphite from National Carbon, with 4.97 mb cross section. By November 1942 National Carbon had shipped 250 tons of AGOT graphite to the University of Chicago where it became the primary source of graphite to be used in the construction of Fermi's Chicago Pile-1, the first nuclear reactor to generate a sustained chain reaction (December 2, 1942). In early 1943 AGOT graphite was used to build the X-10 Graphite Reactor at Clinton Engineer Works in Tennessee and the first reactors at the Hanford Site in Washington, for the production of plutonium during and after World War II. The AGOT process and its later refinements became standard techniques in the manufacture of nuclear graphite.
The neutron cross section of graphite was investigated during the Second World War in Germany by Walter Bothe, P. Jensen, and Werner Heisenberg. The purest graphite available to them was a product from the Siemens Plania company, which exhibited a neutron absorption cross section of about 6.4 mb to 7.5 mb. Heisenberg therefore decided that graphite would be unsuitable as a moderator in a reactor design using natural uranium. Consequently, the German effort to create a chain reaction involved attempts to use heavy water, an expensive and scarce alternative, made all the more difficult to acquire as a consequence of the Norwegian heavy water sabotage by Norwegian and Allied forces. Writing as late as 1947, Heisenberg still did not understand that the only problem with graphite was the boron impurity.
After testing indigenous electro-graphite, Soviet scientists were able to procure and test American Acheson Graphite in 1943 and subsequently reproduced the technology.
Graphite has also recently been used in nuclear fusion reactors such as the Wendelstein 7-X. As of experiments published in 2019, graphite in elements of the stellarator's wall and a graphite island divertor have greatly improved plasma performance within the device, yielding better control over impurity and heat exhaust, and long high-density discharges.
Wigner effect
In December 1942 Eugene Wigner suggested that neutron bombardment might introduce dislocations and other damage in the molecular structure of materials such as the graphite moderator in a nuclear reactor. The resulting buildup of energy in the material became a matter of concern. The possibility was suggested that graphite bars might fuse together as chemical bonds at the surfaces of the bars opened and closed again. Even the possibility that the graphite parts might very quickly break into small pieces could not be ruled out. However, the first power-producing reactors (X-10 Graphite Reactor and Hanford B Reactor) had to be built without such knowledge. Cyclotrons, which were the only fast neutron sources available, would take several months to produce neutron irradiation equivalent to one day in B Reactor.
This was the starting point for large-scale research programmes to investigate the property changes from fast particle radiation and to predict their influence on the safety and the lifetime of graphite reactors to be built. Influences of fast neutron radiation material properties have been observed many times and in many countries after the first results emerged from the X-10 Graphite Reactor in 1944.
Specific changes to graphite when irradiated include:
Dimensional change (shrinkage and neutron-induced swelling, as well as possible hardening)
Change in elastic modulus (measured by impulse excitation technique)
Change in coefficient of thermal expansion
Change in thermal conductivity
Change in electrical resistivity
Irradiation induced creep
As the state of nuclear graphite in active reactors can only be determined at routine inspections, about every 18 months, mathematical modelling of the nuclear graphite as it approaches end-of-life is important. However, as only surface features can be inspected and the exact time of changes is not known, reliability modelling is especially difficult. Although catastrophic behaviour such as fusion or crumbling of graphite pieces has never occurred, large changes in many properties do result from fast neutron irradiation and need to be taken into account when graphite components of nuclear reactors are designed. Although not all effects are well understood yet, more than 100 graphite reactors have operated successfully for decades since the 1940s. In the 2010s, the collection of new material property data improved knowledge significantly.
Manufacture
Reactor-grade graphite must be free of neutron absorbing materials, especially boron, which has a large neutron capture cross section. Boron sources in graphite include the raw materials, the packing materials used in baking the product, and even the choice of soap (for example, borax) used to launder the clothing worn by workers in the machine shop. Boron concentration in thermally purified graphite (such as AGOT graphite) can be less than 0.4 ppm, and in chemically purified nuclear graphite it is less than 0.06 ppm.
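To see why parts-per-million boron levels matter, consider a rough estimate of the extra neutron absorption contributed by a boron impurity. The nuclear data used below (a thermal absorption cross section of about 3.5 mb for carbon and about 767 b for natural boron) are typical textbook values assumed for illustration, not taken from this article.

```python
# Rough estimate of the neutron-absorption penalty from boron impurity in graphite.
# Assumed thermal (2200 m/s) microscopic absorption cross sections:
SIGMA_C_MB = 3.5          # carbon, millibarns (assumed textbook value)
SIGMA_B_MB = 767_000.0    # natural boron, millibarns (assumed textbook value)
A_C, A_B = 12.011, 10.81  # atomic masses

def effective_cross_section(boron_ppm_by_mass: float) -> float:
    """Per-carbon-atom absorption cross section (mb) of graphite with boron impurity."""
    atom_ratio = (boron_ppm_by_mass * 1e-6) * (A_C / A_B)  # B atoms per C atom
    return SIGMA_C_MB + atom_ratio * SIGMA_B_MB

for ppm in (0.06, 0.4, 2.0):
    print(f"{ppm:4} ppm boron -> ~{effective_cross_section(ppm):.2f} mb per carbon atom")
```

With these assumed values, even 2 ppm of boron roughly adds another carbon's worth of absorption per atom, which is consistent with the millibarn-scale spread between the graphite grades listed in the History section.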
Nuclear graphite for the UK Magnox reactors was manufactured from petroleum coke mixed with coal-based binder pitch heated and extruded into billets, and then baked at 1,000 °C for several days. To reduce porosity and increase density, the billets were impregnated with coal tar at high temperature and pressure before a final bake at 2,800 °C. Individual billets were then machined into the final required shapes.
Accidents in graphite-moderated reactors
There have been two major accidents in graphite-moderated reactors, the Windscale fire and the Chernobyl disaster.
In the Windscale fire, an untested annealing process for the graphite was used, causing overheating in unmonitored areas of the core and leading directly to the ignition of the fire. The material that ignited was the canisters of metallic uranium fuel within the reactor. When the fire was extinguished, it was found that the only areas of the graphite moderator to have incurred thermal damage were those that had been close to the burning fuel canisters.
In the Chernobyl disaster, the moderator was not responsible for the primary event. Instead, a massive power excursion (exacerbated by the high and positive void coefficient of the RBMK as it was designed and used at the time) during a mishandled test caused the catastrophic failure of the reactor vessel and a near-total loss of coolant supply. The result was that the fuel rods rapidly melted and flowed together while in an extremely high power state, causing a small portion of the core to reach a state of runaway prompt criticality and leading to a massive energy release, resulting in the explosion of the reactor core and the destruction of the reactor building. The massive energy release during the primary event superheated the graphite moderator, and the disruption of the reactor vessel and building allowed the superheated graphite to come into contact with atmospheric oxygen. As a result, the graphite moderator caught fire, sending a plume of highly radioactive fallout into the atmosphere and over a very widespread area.
References
External links
Manufacturing and Production of Graphite, IAEA Nuclear Graphite Knowledge Base
Graphite Behaviour under Irradiation, IAEA Nuclear Graphite Knowledge Base
Allotropes of carbon
Neutron moderators
Nuclear technology | Nuclear graphite | [
"Physics",
"Chemistry"
] | 2,070 | [
"Nuclear technology",
"Allotropes of carbon",
"Allotropes",
"Nuclear physics"
] |
6,216,524 | https://en.wikipedia.org/wiki/SNePS | SNePS is a knowledge representation, reasoning, and acting (KRRA) system developed and maintained by Stuart C. Shapiro and colleagues at the State University of New York at Buffalo.
SNePS is simultaneously a logic-based, frame-based, and network-based KRRA system. It uses an assertional model of knowledge, in that a SNePS knowledge base (KB) consists of a set of assertions (propositions) about various entities. Its intended model is of an intensional domain of mental entities—the entities conceived of by some agent, and the propositions believed by it. The intensionality is primarily accomplished by the absence of a built-in equality operator, since any two syntactically different terms might have slightly different Fregean senses.
SNePS has three styles of inference: formula-based, derived from its logic-based personality; slot-based, derived from its frame-based personality; and path-based, derived from its network-based personality. However, all three are integrated, operating together.
SNePS may be used as a stand-alone KRR system. It has also been used, along with its integrated acting component, to implement the mind of intelligent agents (cognitive robots), in accord with the GLAIR agent architecture (a layered cognitive architecture). The SNePS Research Group often calls its agents Cassie.
SNePS as a Logic-Based System
As a logic-based system, a SNePS KB consists of a set of terms, and functions and formulas over those terms. The set of logical connectives and quantifiers extends the usual set used by first-order logics, all taking one or more arbitrarily-sized sets of arguments. In accord with the intended use of SNePS to represent the mind of a natural-language-competent intelligent agent, propositions are first-class entities of the intended domain, so formulas are actually proposition-denoting functional terms. SNePSLOG, the input-output language of the logic-based face of SNePS, looks like a naive logic in that function symbols (including "predicates"), and formulas (actually proposition-denoting terms) may be the arguments of functions and may be quantified over. The underlying SNePS, however, is a first order logic, with the user's function symbols and formulas reified.
Formula-based inference is implemented as a natural-deduction-style inference engine in which there are introduction and elimination rules for the connectives and quantifiers. SNePS formula-based inference is sound but not complete, as rules of inference that are less useful for natural language understanding and commonsense reasoning have not been implemented.
A proposition-denoting term in a SNePS KB might or might not be "asserted", that is, treated as true in the KB. The SNePS logic is a paraconsistent version of relevance logic, so that a contradiction does not imply anything whatsoever. Nevertheless, SNeBR, the SNePS Belief Revision subsystem, will notice any explicit contradiction and engage the user in a dialogue to repair it. SNeBR is an Assumption-Based Truth Maintenance System (ATMS), and removes the assertion status of any proposition whose support has been removed.
SNePS as a Frame-Based System
As a frame-based system, every SNePS functional term (including proposition-valued terms) is represented by a frame with slots and fillers. Each slot may be filled by an arbitrarily-sized set of other terms. However, cycles cannot be constructed. SNePSUL, the SNePS User Language is an input-output language for interacting with SNePS in its guise as a frame-based system.
SNePSLOG may be used in any of three modes. In two modes, the caseframe (set of slots) associated with each functional term is determined by the system. In mode 3, the user declares what caseframe is to be used for each function symbol.
In slot-based inference, any proposition-valued frame is considered to imply the frame with any of its slots filled by a subset of its fillers. In the current implementation, this is not always sound.
SNePS as a Network-Based System
As a network-based system, SNePS is a propositional semantic network, thus the original meaning of "SNePS" as "The Semantic Network Processing System". This view is obtained by considering every individual constant and every functional term to be a node of the network, and every slot to be a directed labeled arc from the frame-node it is in to every node in its filler. In the intended interpretation, every node denotes a mental entity, some of which are propositions, and every proposition represented in the network is represented by the node that denotes it. Some nodes are variables of the SNePS logic, and they range over nodes, and only over nodes.

Path-based inference rules may be defined, although they, themselves, are not represented in SNePS. A path-based inference rule specifies that some labeled arc r may be inferred as present from some node n to some other node m just in case a given path exists from n to m. There is an extensive recursive set of path constructors available.
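A minimal sketch (in Python, not SNePS itself) of the idea behind path-based inference: nodes connected by labeled arcs, and a rule that infers a new arc whenever a specified sequence of labels connects two nodes. The node names, labels, and rule are invented for illustration.

```python
# Minimal illustration of path-based inference over a labeled directed graph.
# This is NOT SNePS code; node names, labels, and the rule are invented.
from collections import defaultdict

arcs = defaultdict(set)          # (source, label) -> set of target nodes

def add_arc(src, label, dst):
    arcs[(src, label)].add(dst)

def path_exists(src, labels, dst):
    """True if a path following `labels` in order leads from src to dst."""
    frontier = {src}
    for label in labels:
        frontier = set().union(*(arcs[(n, label)] for n in frontier)) if frontier else set()
    return dst in frontier

# A hypothetical path-based rule: infer a "class" arc wherever the path
# ["class", "subclass-of"] exists (i.e. membership propagates up a hierarchy).
add_arc("Fido", "class", "dog")
add_arc("dog", "subclass-of", "mammal")

if path_exists("Fido", ["class", "subclass-of"], "mammal"):
    add_arc("Fido", "class", "mammal")   # inferred arc

print(arcs[("Fido", "class")])           # {'dog', 'mammal'}
```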
Components
SNIP, the SNePS Inference Package, provides the rules of inference with which SNePS deduces new assertions from an existing KB.
SNeBR, the SNePS Belief Revision package, is a component of SNePS that detects when the KB contains contradictory beliefs. When a contradiction is detected, the user is encouraged to unassert one of the contradictory beliefs by unasserting some underlying hypotheses that led to the contradiction. As a result, all propositions that had been inferred from the hypotheses that have been unasserted are also unasserted.
SNeRE, the SNePS Rational Engine, provides an acting executive and a set of frames for building up complex acts and plans from a set of system-defined and user-defined primitive actions. System-defined frames allow for the specification of sequences of acts, conditional acts, and iteration over acts, as well as believing and disbelieving propositions. SNeRE policies connect acting to inference, specifying, for example, that a certain act is to be done when a certain proposition is believed (asserted).
SNaLPS, the SNePS Natural Language Processing System, consists of a Generalized Augmented Transition Network Grammar interpreter and an English morphological analyzer and synthesizer so that natural language understanding and generation may be provided for SNePS-based agents.
Applications
SNePS has been used for a variety of KRR tasks, for natural language understanding and generation, for commonsense reasoning, and for cognitive robotics. It has been used in several KR courses around the world.
Availability
SNePS is implemented as a platform-independent system in Common Lisp and is freely available.
External links
SNePS Research Group (SNeRG) homepage
Essential SNePS Readings
Complete SNeRG Bibliography
SNePS Downloads Page
Knowledge representation software
Cognitive architecture
Common Lisp (programming language) software | SNePS | [
"Engineering"
] | 1,496 | [
"Artificial intelligence engineering",
"Cognitive architecture"
] |
6,216,692 | https://en.wikipedia.org/wiki/Bone%20cement | Bone cements have been used very successfully to anchor artificial joints (hip joints, knee joints, shoulder and elbow joints) for more than half a century. Artificial joints (referred to as prostheses) are anchored with bone cement. The bone cement fills the free space between the prosthesis and the bone and plays the important role of an elastic zone. This is necessary because the human hip is acted on by approximately 10–12 times the body weight and therefore the bone cement must absorb the forces acting on the hips to ensure that the artificial implant remains in place over the long term.
Chemically, bone cement is essentially Plexiglas (i.e. polymethyl methacrylate, or PMMA). PMMA was used clinically for the first time in the 1940s in plastic surgery to close gaps in the skull. Comprehensive clinical tests of the compatibility of bone cements with the body were conducted before their use in surgery. The excellent tissue compatibility of PMMA allowed bone cements to be used for anchorage of head prostheses in the 1950s.
Today several million procedures of this type are conducted every year all over the world and more than half of them routinely use bone cements – and the proportion is increasing. Bone cement is considered a reliable anchorage material with its ease of use in clinical practice and particularly because of its proven long survival rate with cemented-in prostheses. Hip and knee registers for artificial joint replacements such as those in Sweden and Norway clearly demonstrate the advantages of cemented-in anchorage. A similar register for endoprosthesis was introduced in Germany in 2010.
Composition
Bone cements are provided as two-component materials. They consist of a powder (pre-polymerized PMMA and/or PMMA or MMA co-polymer beads and/or amorphous powder, a radio-opacifier, and an initiator) and a liquid (MMA monomer, stabilizer, inhibitor). When the two components are mixed, the initiator meets the accelerator and free-radical polymerization of the monomer begins. The viscosity of the bone cement changes over time from a runny liquid into a dough-like state that can be safely applied, and it finally hardens into a solid material. The set time can be tailored to help the physician safely apply the bone cement into the bone bed, either to anchor a metal or plastic prosthetic device to bone or, used alone in the spine, to treat osteoporotic compression fractures.
Bone cement heats up during the exothermic free-radical polymerization process, reaching temperatures of around 82–86 °C in the body, which is above the critical level for protein denaturation. The polymerization temperature is nevertheless kept comparatively low by the relatively thin cement coating, which should not exceed 5 mm, and by heat dissipation via the large prosthesis surface and the flow of blood.
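A rough estimate shows why heat dissipation matters here. The sketch below computes the adiabatic (no-heat-loss) temperature rise of curing cement; the heat of polymerization, monomer mass fraction, and specific heat are assumed round values for illustration, not data from this article.

```python
# Rough, illustrative estimate of the adiabatic temperature rise of curing
# bone cement. All input values below are assumptions for illustration only.
DH_POLY_KJ_PER_MOL = 57.0    # assumed heat of polymerization of MMA
MW_MMA_G_PER_MOL   = 100.1   # molar mass of MMA
MONOMER_MASS_FRAC  = 0.33    # assumed monomer fraction of the mixed cement
CP_J_PER_G_K       = 1.5     # assumed specific heat of the cement dough

heat_per_gram = MONOMER_MASS_FRAC * DH_POLY_KJ_PER_MOL * 1000 / MW_MMA_G_PER_MOL
delta_T = heat_per_gram / CP_J_PER_G_K
print(f"Adiabatic rise ~{delta_T:.0f} K")   # ~125 K with these assumptions
```

That the cement in vivo stays far below such an adiabatic limit illustrates the role of the thin cement mantle, the large prosthesis surface, and blood flow mentioned above.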
The individual components of the bone cement are also known in the area of dental filler materials. Acrylate-based plastics are also used in these applications. While the individual components are not always perfectly safe as pharmaceutical additives and active substances per se, as bone cement the individual substances are either converted or fully enclosed in the cement matrix during the polymerization phase from the increase in viscosity to curing. From current knowledge, cured bone cement can now be classified as safe, as originally demonstrated during the early studies on compatibility with the body conducted in the 1950s.
More recently bone cement has been used in the spine in either vertebroplasty or kyphoplasty procedures. The composition of these types of cement is mostly based on calcium phosphate and more recently magnesium phosphate. A novel biodegradable, non-exothermic, self-setting orthopedic cement composition based on amorphous magnesium phosphate (AMP) was developed. The occurrence of undesirable exothermic reactions was avoided through using AMP as the solid precursor.
Important information for the use of bone cement
What is referred to as bone cement implantation syndrome (BCIS) is described in the literature. For a long time it was believed that the incompletely converted monomer released from bone cement was the cause of circulatory reactions and embolism. However, it is now known that this monomer (residual monomer) is metabolized by the respiratory chain, split into carbon dioxide and water, and excreted. Embolisms can always occur during anchorage of artificial joints when material is inserted into the previously cleared femoral canal. The result is an increase in intramedullary pressure, which can drive fat into the circulation.
If the patient is known to have any allergies to constituents of the bone cement, according to current knowledge bone cement should not be used to anchor the prosthesis. Anchorage without cement - cement-free implant placement - is the alternative.
New bone cement formulations require characterization according to ASTM F451. This standard describes the test methods to assess cure rate, residual monomer, mechanical strength, benzoyl peroxide concentration, and heat evolution during cure.
Revisions
Revision is the replacement of a prosthesis. This means that a prosthesis previously implanted in the body is removed and replaced by a new prosthesis. Compared to the initial operation revisions are often more complex and more difficult, because every revision involves the loss of healthy bone substance. Revision operations are also more expensive for a satisfactory result. The most important goal is therefore to avoid revisions by using a good surgical procedure and using products with good (long-term) results.
Unfortunately, it is not always possible to avoid revisions. There can also be different reasons for revisions and there is a distinction between septic or aseptic revision. If it is necessary to replace an implant without confirmation of an infection—for example, aseptic—the cement is not necessarily removed completely. However, if the implant has loosened for septic reasons, the cement must be fully removed to clear an infection. In the current state of knowledge it is easier to remove cement than to release a well-anchored cement-free prosthesis from the bone site. Ultimately it is important for the stability of the revised prosthesis to detect possible loosening of the initial implant early to be able to retain as much healthy bone as possible.
A prosthesis fixed with bone cement offers very high primary stability combined with fast remobilization of patients. The cemented-in prosthesis can be fully loaded very soon after the operation because the PMMA gets most of its strength within 24 hours. The necessary rehabilitation is comparatively simple for patients who have had a cemented-in prosthesis implanted. The joints can be loaded again very soon after the operation, but the use of crutches is still required for a reasonable period for safety reasons.
Bone cement has proven particularly useful because specific active substances, e.g. antibiotics, can be added to the powder component. The active substances are released locally after implant placement of the new joint, i.e. in the immediate vicinity of the new prosthesis, and have been confirmed to reduce the danger of infection. The antibiotics act against bacteria precisely at the site where they are required in the open wound without subjecting the body in general to unnecessarily high antibiotic levels. This makes bone cement a modern drug delivery system that delivers the required drugs directly to the surgical site. The important factor is not how much active substance is in the cement matrix but how much of the active substance is actually released locally. Too much active substance in the bone cement would actually be detrimental, because the mechanical stability of the fixed prosthesis is weakened by a high proportion of active substance in the cement. The local active-substance levels produced by industrially manufactured bone cements containing active substances can only be estimated approximately (assuming there is no incompatibility) and are significantly below the routine clinical doses for systemic single injections.
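The point that local release, rather than total loading, is what matters can be illustrated with a generic diffusion-controlled (square-root-of-time) release model. Both the model choice and its rate constant are assumptions for illustration only; they do not describe any specific commercial cement.

```python
# Generic square-root-of-time release sketch, for illustration only.
import math

TOTAL_LOADING_MG = 500.0   # hypothetical antibiotic loading per cement dose
K_RELEASE        = 0.04    # assumed rate constant, fraction released per sqrt(hour)

def cumulative_release_mg(t_hours: float) -> float:
    """Cumulative mass released after t_hours (capped at the total loading)."""
    frac = min(1.0, K_RELEASE * math.sqrt(t_hours))
    return TOTAL_LOADING_MG * frac

for t in (1, 24, 7 * 24, 42 * 24):
    print(f"{t:5d} h: ~{cumulative_release_mg(t):6.1f} mg released")
```

With these assumed numbers, only a small fraction of the loaded antibiotic is released in the first hours, which is why the locally achieved concentration, not the total content, is the relevant quantity.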
See also
Osteoplasty – use of bone cement to reduce pain
References
External links
Application note describing how to measure residual monomer in bone cement
A presentation on the rheology of bone cement
Orthopedic surgical procedures
Biomaterials
Adhesives
Medical treatments | Bone cement | [
"Physics",
"Biology"
] | 1,684 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
6,217,045 | https://en.wikipedia.org/wiki/Kelvin%20equation | The Kelvin equation describes the change in vapour pressure due to a curved liquid–vapor interface, such as the surface of a droplet. The vapor pressure at a convex curved surface is higher than that at a flat surface. The Kelvin equation is dependent upon thermodynamic principles and does not allude to special properties of materials. It is also used for determination of pore size distribution of a porous medium using adsorption porosimetry. The equation is named in honor of William Thomson, also known as Lord Kelvin.
Formulation
The original form of the Kelvin equation, published in 1871, is:

$$p(r_1, r_2) = P - \frac{\gamma\, \rho_\text{vapor}}{\rho_\text{liquid} - \rho_\text{vapor}} \left( \frac{1}{r_1} + \frac{1}{r_2} \right)$$

where:
$p(r_1, r_2)$ = vapor pressure at a curved interface of radius $r$
$P$ = vapor pressure at flat interface ($r = \infty$) = $p_\text{sat}$
$\gamma$ = surface tension
$\rho_\text{vapor}$ = density of vapor
$\rho_\text{liquid}$ = density of liquid
$r_1$, $r_2$ = radii of curvature along the principal sections of the curved interface.

This may be written in the following form, known as the Ostwald–Freundlich equation:

$$\ln \frac{p}{p_\text{sat}} = \frac{2 \gamma V_\text{m}}{rRT}$$

where $p$ is the actual vapour pressure, $p_\text{sat}$ is the saturated vapour pressure when the surface is flat, $\gamma$ is the liquid/vapor surface tension, $V_\text{m}$ is the molar volume of the liquid, $R$ is the universal gas constant, $r$ is the radius of the droplet, and $T$ is temperature.

Equilibrium vapor pressure depends on droplet size.
If the curvature is convex, $r$ is positive, then $p > p_\text{sat}$
If the curvature is concave, $r$ is negative, then $p < p_\text{sat}$
As $r$ increases, $p$ decreases towards $p_\text{sat}$, and the droplets grow into bulk liquid.
If the vapour is cooled, then $T$ decreases, but so does $p_\text{sat}$. This means $p/p_\text{sat}$ increases as the liquid is cooled. $\gamma$ and $V_\text{m}$ may be treated as approximately fixed, which means that the critical radius $r$ must also decrease.
The further a vapour is supercooled, the smaller the critical radius becomes. Ultimately it can become as small as a few molecules, and the liquid undergoes homogeneous nucleation and growth.
The change in vapor pressure can be attributed to changes in the Laplace pressure. When the Laplace pressure rises in a droplet, the droplet tends to evaporate more easily.
When applying the Kelvin equation, two cases must be distinguished: A drop of liquid in its own vapor will result in a convex liquid surface, and a bubble of vapor in a liquid will result in a concave liquid surface.
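As a quick numerical illustration of the Ostwald–Freundlich form, the sketch below evaluates the vapour-pressure enhancement over water droplets of a few radii. The surface tension and molar volume used are typical values for water near room temperature, assumed here for illustration rather than taken from this article.

```python
# Vapour-pressure enhancement p/p_sat over a water droplet of radius r,
# from the Ostwald-Freundlich form of the Kelvin equation.
# Physical constants for water near 298 K are assumed typical values.
import math

GAMMA = 0.072        # N/m, surface tension of water (assumed)
V_M   = 1.8e-5       # m^3/mol, molar volume of liquid water (assumed)
R     = 8.314        # J/(mol K)
T     = 298.0        # K

def pressure_ratio(radius_m: float) -> float:
    """p / p_sat for a droplet of the given radius (convex surface)."""
    return math.exp(2 * GAMMA * V_M / (radius_m * R * T))

for r_nm in (1000, 100, 10, 1):
    print(f"r = {r_nm:5d} nm -> p/p_sat = {pressure_ratio(r_nm * 1e-9):.3f}")
```

With these assumptions the effect is negligible for micrometre droplets but approaches a factor of about three at a radius of 1 nm, which is the scale at which the equation's validity was tested in 2020 (see History).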
History
The form of the Kelvin equation here is not the form in which it appeared in Lord Kelvin's article of 1871. The derivation of the form that appears in this article from Kelvin's original equation was presented by Robert von Helmholtz (son of German physicist Hermann von Helmholtz) in his dissertation of 1885. In 2020, researchers found that the equation was accurate down to the 1 nm scale.
Derivation using the Gibbs free energy
The formal definition of the Gibbs free energy for a parcel of volume $V$, pressure $p$ and temperature $T$ is given by:

$$G = U + pV - TS$$

where $U$ is the internal energy and $S$ is the entropy. The differential form of the Gibbs free energy can be given as

$$dG = -S\,dT + V\,dp + \mu\,dN$$

where $\mu$ is the chemical potential and $N$ is the number of moles. Suppose we have a substance $x$ which contains no impurities. Let's consider the formation of a single drop of $x$ with radius $r$ containing $n_x$ molecules from its pure vapor. The change in the Gibbs free energy due to this process is

$$\Delta G = G_\text{f} - G_\text{i}$$

where $G_\text{f}$ and $G_\text{i}$ are the Gibbs energies of the final state (drop plus remaining vapor) and the initial state (pure vapor) respectively. Suppose we have $N_\text{i}$ molecules in the vapor phase initially. After the formation of the drop, this number decreases to $N_\text{f}$, where

$$N_\text{f} = N_\text{i} - n_x$$

Let $g_\text{v}$ and $g_\text{l}$ represent the Gibbs free energy of a molecule in the vapor and liquid phase respectively. The change in the Gibbs free energy is then:

$$\Delta G = N_\text{f}\, g_\text{v} + n_x g_\text{l} + 4\pi r^2 \sigma - N_\text{i}\, g_\text{v}$$

where $4\pi r^2 \sigma$ is the Gibbs free energy associated with an interface with radius of curvature $r$ and surface tension $\sigma$. The equation can be rearranged to give

$$\Delta G = n_x \left(g_\text{l} - g_\text{v}\right) + 4\pi r^2 \sigma$$

Let $v_\text{l}$ and $v_\text{v}$ be the volume occupied by one molecule in the liquid phase and vapor phase respectively. If the drop is considered to be spherical, then

$$\frac{4}{3}\pi r^3 = n_x v_\text{l}$$

The number of molecules in the drop is then given by

$$n_x = \frac{4\pi r^3}{3 v_\text{l}}$$

The change in Gibbs energy is then

$$\Delta G = \frac{4\pi r^3}{3 v_\text{l}} \left(g_\text{l} - g_\text{v}\right) + 4\pi r^2 \sigma$$

The differential form of the Gibbs free energy of one molecule at constant temperature and constant number of molecules can be given by:

$$dg = v\,dp$$

If we assume that $v_\text{v} \gg v_\text{l}$, then

$$d\left(g_\text{v} - g_\text{l}\right) \approx v_\text{v}\,dp$$

The vapor phase is also assumed to behave like an ideal gas, so

$$v_\text{v} = \frac{k_\text{B} T}{p}$$

where $k_\text{B}$ is the Boltzmann constant. Thus, the change in the Gibbs free energy for one molecule is

$$g_\text{v} - g_\text{l} = \int_{p_\text{sat}}^{p} \frac{k_\text{B} T}{p'}\,dp'$$

where $p_\text{sat}$ is the saturated vapor pressure of $x$ over a flat surface and $p$ is the actual vapor pressure over the liquid. Solving the integral, we have

$$g_\text{v} - g_\text{l} = k_\text{B} T \ln \frac{p}{p_\text{sat}}$$

The change in the Gibbs free energy following the formation of the drop is then

$$\Delta G = -\frac{4\pi r^3}{3 v_\text{l}}\, k_\text{B} T \ln \frac{p}{p_\text{sat}} + 4\pi r^2 \sigma$$

The derivative of this equation with respect to $r$ is

$$\frac{d(\Delta G)}{dr} = -\frac{4\pi r^2}{v_\text{l}}\, k_\text{B} T \ln \frac{p}{p_\text{sat}} + 8\pi r \sigma$$

The maximum value occurs when the derivative equals zero. The radius corresponding to this value is:

$$r = \frac{2 \sigma v_\text{l}}{k_\text{B} T \ln \frac{p}{p_\text{sat}}}$$

Rearranging this equation gives the Ostwald–Freundlich form of the Kelvin equation:

$$\ln \frac{p}{p_\text{sat}} = \frac{2 \sigma v_\text{l}}{k_\text{B} T\, r}$$

which, expressed in molar quantities ($V_\text{m} = N_\text{A} v_\text{l}$, $R = N_\text{A} k_\text{B}$), recovers the form given above.
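A small numerical check of the critical-radius expression, using water-like values assumed as in the earlier sketch; the supersaturation ratios are arbitrary example values.

```python
# Critical nucleation radius r* = 2*sigma*v_l / (k_B * T * ln(p/p_sat)),
# evaluated for water-like parameters (assumed values, for illustration only).
import math

K_B   = 1.380649e-23   # J/K
SIGMA = 0.072          # N/m, assumed surface tension of water
V_L   = 3.0e-29        # m^3, assumed volume per water molecule
T     = 298.0          # K

def critical_radius_nm(supersaturation: float) -> float:
    """Critical radius (nm) for a given supersaturation ratio p/p_sat > 1."""
    return 2 * SIGMA * V_L / (K_B * T * math.log(supersaturation)) * 1e9

for s in (1.01, 1.1, 2.0, 4.0):
    print(f"p/p_sat = {s:4} -> r* = {critical_radius_nm(s):7.2f} nm")
```

With these assumptions, modest supersaturations require critical nuclei of tens of nanometres, while a nucleus of molecular size only becomes critical at severalfold supersaturation, which anticipates the "apparent paradox" discussed next.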
Apparent paradox
An equation similar to that of Kelvin can be derived for the solubility of small particles or droplets in a liquid, by means of the connection between vapour pressure and solubility; thus the Kelvin equation also applies to solids, to slightly soluble liquids, and their solutions if the partial pressure $p/p_\text{sat}$ is replaced by the solubility of the solid ($c(r)$) (or a second liquid) at the given radius, $r$, and $c_\infty$ by the solubility at a plane surface. Hence small particles (like small droplets) are more soluble than larger ones. The equation would then be given by:

$$\ln \frac{c(r)}{c_\infty} = \frac{2 \gamma V_\text{m}}{RTr}$$
These results led to the problem of how new phases can ever arise from old ones. For example, if a container filled with water vapour at slightly below the saturation pressure is suddenly cooled, perhaps by adiabatic expansion, as in a cloud chamber, the vapour may become supersaturated with respect to liquid water. It is then in a metastable state, and we may expect condensation to take place. A reasonable molecular model of condensation would seem to be that two or three molecules of water vapour come together to form a tiny droplet, and that this nucleus of condensation then grows by accretion, as additional vapour molecules happen to hit it. The Kelvin equation, however, indicates that a tiny droplet like this nucleus, being only a few ångströms in diameter, would have a vapour pressure many times that of the bulk liquid. As far as tiny nuclei are concerned, the vapour would not be supersaturated at all. Such nuclei should immediately re-evaporate, and the emergence of a new phase at the equilibrium pressure, or even moderately above it should be impossible. Hence, the over-saturation must be several times higher than the normal saturation value for spontaneous nucleation to occur.
There are two ways of resolving this paradox. In the first place, we know the statistical basis of the second law of thermodynamics. In any system at equilibrium, there are always fluctuations around the equilibrium condition, and if the system contains few molecules, these fluctuations may be relatively large. There is always a chance that an appropriate fluctuation may lead to the formation of a nucleus of a new phase, even though the tiny nucleus could be called thermodynamically unstable. The chance of a fluctuation is e−ΔS/k, where ΔS is the deviation of the entropy from the equilibrium value.
It is unlikely, however, that new phases often arise by this fluctuation mechanism and the resultant spontaneous nucleation. Calculations show that the chance, e−ΔS/k, is usually too small. It is more likely that tiny dust particles act as nuclei in supersaturated vapours or solutions. In the cloud chamber, it is the clusters of ions caused by a passing high-energy particle that acts as nucleation centers. Actually, vapours seem to be much less finicky than solutions about the sort of nuclei required. This is because a liquid will condense on almost any surface, but crystallization requires the presence of crystal faces of the proper kind.
For a sessile drop residing on a solid surface, the Kelvin equation is modified near the contact line, due to intermolecular interactions between the liquid drop and the solid surface. This extended Kelvin equation is given by

$$\frac{k_\text{B} T}{v_\text{l}} \ln \frac{p}{p_\text{sat}} = \Pi + P_\text{L}$$

where $\Pi$ is the disjoining pressure that accounts for the intermolecular interactions between the sessile drop and the solid and $P_\text{L}$ is the Laplace pressure, accounting for the curvature-induced pressure inside the liquid drop. When the interactions are attractive in nature, the disjoining pressure, $\Pi$, is negative. Near the contact line, the disjoining pressure dominates over the Laplace pressure, implying that the solubility, $c$, is less than $c_\infty$. This implies that a new phase can spontaneously grow on a solid surface, even under saturation conditions.
See also
Condensation
Gibbs–Thomson equation
Ostwald–Freundlich equation
References
Further reading
W. J. Moore, Physical Chemistry, 4th ed., Prentice Hall, Englewood Cliffs, N. J., (1962) p. 734–736.
S. J. Gregg and K. S. W. Sing, Adsorption, Surface Area and Porosity, 2nd edition, Academic Press, New York, (1982) p. 121.
Arthur W. Adamson and Alice P. Gast, Physical Chemistry of Surfaces, 6th edition, Wiley-Blackwell (1997) p. 54.
Butt, Hans-Jürgen, Karlheinz Graf, and Michael Kappl. "The Kelvin Equation". Physics and Chemistry of Interfaces. Weinheim: Wiley-VCH, 2006. 16–19. Print.
Anton A. Valeev,"Simple Kelvin Equation Applicable in the Critical Point Vicinity",European Journal of Natural History, (2014), Issue 5, p. 13-14.
Surface science
Physical chemistry
Equation
Thought experiments in physics | Kelvin equation | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,965 | [
"Applied and interdisciplinary physics",
"Surface science",
"Condensed matter physics",
"nan",
"Physical chemistry"
] |
6,217,058 | https://en.wikipedia.org/wiki/Tetrafluorohydrazine | Tetrafluorohydrazine or perfluorohydrazine, N2F4, is a colourless, nonflammable, reactive inorganic gas. It is a fluorinated analog of hydrazine.
Synthesis
Tetrafluorohydrazine was originally prepared from nitrogen trifluoride using copper as a fluorine atom acceptor:

2 NF3 + Cu → N2F4 + CuF2
A number of F-atom acceptors can be used, including carbon, other metals, and nitric oxide. These reactions exploit the relatively weak N-F bond in NF3.
Properties
Tetrafluorohydrazine is in equilibrium with its radical monomer nitrogen difluoride:

N2F4 ⇌ 2 NF2

At room temperature, N2F4 is mostly associated, with only 0.7% in the form of NF2 at 5 mm Hg pressure. When the temperature rises to 225 °C, it mostly dissociates, with 99% in the form of NF2.
The energy needed to break the N−N bond in N2F4 is 20.8 kcal/mol, with an entropy change of 38.6 eu. For comparison, the dissociation energy of the N−N bond is in , in , and in . The enthalpy of formation of N2F4 (ΔfH°) is 34.421 kJ/mol.
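Using the bond energy and entropy change quoted above, a rough consistency check of the dissociation figures can be made from K_p = exp(−ΔG°/RT) and the relation between K_p and the degree of dissociation α for an A ⇌ 2B equilibrium. The sketch below is an idealized estimate; it reproduces the order of magnitude of the quoted percentages rather than the exact values, since the measurement conditions and standard-state conventions for the quoted data are not specified here.

```python
# Rough check of N2F4 <=> 2 NF2 dissociation using Delta-H = 20.8 kcal/mol and
# Delta-S = 38.6 cal/(mol K) from the text. Ideal-gas behaviour is assumed.
import math

R_CAL = 1.987          # cal/(mol K)
DH    = 20_800.0       # cal/mol
DS    = 38.6           # cal/(mol K)
P_ATM = 5.0 / 760.0    # 5 mm Hg expressed in atm

def fraction_dissociated(T: float, p_atm: float) -> float:
    """Degree of dissociation alpha for A <=> 2B, where Kp = 4*alpha^2*p/(1-alpha^2)."""
    kp = math.exp(-(DH - T * DS) / (R_CAL * T))
    return math.sqrt(kp / (4.0 * p_atm + kp))

for T in (298.0, 498.0):   # room temperature and 225 degrees C
    print(f"T = {T:5.0f} K: alpha ~ {100 * fraction_dissociated(T, P_ATM):.1f} %")
```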
Uses
Tetrafluorohydrazine is used in organic synthesis and in some experimental rocket propellant formulations. It adds across double bonds to give vicinal di(difluoroamine)s. In chemical syntheses it serves as a precursor or a catalyst. It was considered for use as a high-energy liquid oxidizer in some never-flown rocket propellant formulas in 1959.
Safety
Tetrafluorohydrazine is a highly hazardous chemical that explodes in the presence of organic materials.
It is a toxic chemical which irritates skin, eyes and lungs. It is a neurotoxin and may cause methemoglobinemia. It may be fatal if inhaled or absorbed through skin. Vapors may be irritating and corrosive. It is a strong oxidizing agent. Contact with this chemical may cause burns and severe injury. Fire produces irritating, corrosive and toxic gases. Vapors from liquefied gas are initially heavier than air and spread across the ground.
Tetrafluorohydrazine explodes or ignites on contact with reducing agents at room temperature, including hydrogen, hydrocarbons, alcohols, thiols, amines, ammonia, hydrazines, dicyanogen, nitroalkanes, alkylberylliums, silanes, boranes or powdered metals. Prolonged exposure of the container of tetrafluorohydrazine to high heat may cause it to rupture violently and rocket. Tetrafluorohydrazine itself can explode at high temperatures or with shock or blast when under pressure. When heated to decomposition in air, it emits highly toxic fumes of fluorine and oxides of nitrogen.
A fatal accident has been reported in which, while valves were being opened to check the pressure, a cylinder exploded, killing one man and injuring another.
References
Nitrogen fluorides
Rocket oxidizers
Hydrazines
Gases
Neurotoxins | Tetrafluorohydrazine | [
"Physics",
"Chemistry"
] | 656 | [
"Matter",
"Statistical mechanics",
"Phases of matter",
"Functional groups",
"Oxidizing agents",
"Rocket oxidizers",
"Hydrazines",
"Neurochemistry",
"Neurotoxins",
"Gases"
] |
6,218,947 | https://en.wikipedia.org/wiki/P-factor | P-factor, also known as asymmetric blade effect and asymmetric disc effect, is an aerodynamic phenomenon experienced by a moving propeller, wherein the propeller's center of thrust moves off-center when the aircraft is at a high angle of attack. This shift in the location of the center of thrust exerts a yawing moment on the aircraft, causing it to yaw slightly to one side. A rudder input is required to counteract the yawing tendency.
Causes
When a propeller aircraft is flying at cruise speed in level flight, the propeller disc is perpendicular to the relative airflow through the propeller. Each of the propeller blades contacts the air at the same angle and speed, and thus the thrust produced is evenly distributed across the propeller.
However, at lower speeds the aircraft will typically be in a nose-high attitude, with the propeller disc rotated slightly toward the horizontal. This has two effects. Firstly, a blade is slightly further forward when in the down position and slightly further back when in the up position. The propeller blade moving down and forward (for clockwise rotation, from the one o'clock to the six o'clock position when viewed from the cockpit) therefore has a greater forward speed. This increases the airspeed of the blade, so the down-going blade produces more thrust. The propeller blade moving up and back (from the seven o'clock to the twelve o'clock position) has a decreased forward speed, therefore a lower airspeed than the down-going blade and lower thrust. This asymmetry displaces the center of thrust of the propeller disc towards the blade with increased thrust.
Secondly, the angle of attack of the down-going blade will increase, and the angle of attack of the up-going blade will decrease, because of the tilt of the propeller disc. The greater angle of attack of the down-going blade will produce more thrust.
Note that the increased forward speed of the down-going blade actually reduces its angle of attack, but this is overcome by the increase in angle of attack caused by the tilt of the propeller disc. Overall, the down-going blade has a greater airspeed and a greater angle of attack.
P-factor is greatest at high angles of attack and high power, for example during take-off or in slow flight.
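A simplified way to gauge the size of the airspeed asymmetry: with the disc inclined by an angle α to the oncoming flow, the component of the aircraft's speed lying in the disc plane (roughly V·sin α) adds to the tangential speed of the down-going blade and subtracts from the up-going blade. The sketch below evaluates this for an arbitrary example case; all numbers are illustrative assumptions, and real propeller aerodynamics involves more than this dynamic-pressure ratio.

```python
# Illustrative estimate of the blade airspeed (and dynamic pressure) asymmetry
# produced by a tilted propeller disc. All numbers are assumed example values.
import math

V      = 36.0    # m/s, aircraft forward speed (assumed, roughly 70 kt)
RPM    = 2400.0  # propeller speed (assumed)
RADIUS = 0.75    # m, representative blade station (assumed)
ALPHA  = math.radians(10.0)   # tilt of the disc relative to the airflow (assumed)

omega_r  = 2.0 * math.pi * RPM / 60.0 * RADIUS   # tangential speed of the blade station
in_plane = V * math.sin(ALPHA)                   # flow component lying in the disc plane

down_going = omega_r + in_plane
up_going   = omega_r - in_plane

print(f"down-going blade: {down_going:6.1f} m/s")
print(f"up-going blade:   {up_going:6.1f} m/s")
print(f"dynamic-pressure ratio: {(down_going / up_going) ** 2:.2f}")
```

With these assumed values the down-going blade sees roughly 14% more dynamic pressure than the up-going blade, which is the thrust asymmetry the rudder must counteract.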
Effects
Single engine propeller aircraft
If using a clockwise turning propeller (as viewed by the pilot) the aircraft has a tendency to yaw to the left when climbing and right when descending. This must be countered with opposite rudder. The clockwise-turning propeller is by far the most common. The yaw is noticeable when adding power, though it has additional causes including the spiral slipstream effect. In a fixed-wing aircraft, there is usually no way to adjust the angle of attack of the individual blades of the propellers, therefore the pilot must contend with P-factor and use the rudder to counteract the shift of thrust. When the airplane is descending, these forces are reversed. The descending right side of the prop is now moving slightly rearward with less angle of attack and the ascending left side of the prop moves slightly forward with greater angle of attack. This asymmetric thrust causes the airplane to pull to the right and the pilot uses left rudder to compensate. The fact that the left-right pull tendency reverses when descending, shows that differences in angle of attack on the left and right sides of the prop overwhelm other effects like the spiral slipstream. Put differently, if the spiral slipstream were the dominant factor, the airplane would always pull to the left and would not pull right when descending.
Pilots anticipate the need for rudder when changing engine power or pitch angle (angle of attack), and compensate by applying left or right rudder as required.
Tail-wheel aircraft exhibit more P-factor during the ground-roll than aircraft with tricycle landing gear, because of the greater angle of the propeller disc to the vertical. P-factor is insignificant during the initial ground roll, but will give a pronounced nose-left tendency during the later stages of the ground roll as forward speed increases, particularly if the thrust axis is kept inclined to the flight path vector (e.g. tail-wheel in contact with runway). The effect is not so apparent during the landing, flare and rollout, given the relatively low power setting (propeller RPM). However, should the throttle be suddenly advanced with the tail-wheel in contact with the runway, then anticipation of this nose-left tendency is prudent.
Multi engine propeller aircraft
For multi-engine aircraft with counter-rotating propellers, the P-factors of both engines will cancel out. However, if both engines rotate in the same direction, or if one engine fails, P-factor will cause a yaw. As with single-engine aircraft, this effect is greatest in situations where the aircraft is at high power and has a high angle of attack (such as the climb). The engine with the down-moving blades towards the wingtip produces more yaw and roll than the other engine, because the moment (arm) of that engine's center of thrust about the aircraft center of gravity is greater. Thus, the engine with down-moving blades closer to the fuselage will be the "critical engine", because its failure and the associated reliance on the other engine will require a significantly larger rudder deflection by the pilot to maintain straight flight than if the other engine had failed. P-factor therefore determines which engine is the critical engine. For most aircraft (which have clockwise rotating propellers), the left engine is the critical engine. For aircraft with counter-rotating propellers (i.e. not rotating in the same direction) the P-factor moments are equal and both engines are considered equally critical.
With engines rotating in the same direction, P-factor will affect the minimum control speeds (VMC) of the aircraft in asymmetric powered flight. The published speeds are determined based on the failure of the critical engine. The actual minimum control speeds after failure of any other engine will be lower (safer).
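A back-of-the-envelope comparison of the yawing moments involved, using hypothetical numbers (none of which come from this article), shows why the displaced centre of thrust makes one engine critical:

```python
# Illustrative yawing-moment comparison for a twin whose propellers both turn
# clockwise as seen from the cockpit. P-factor shifts each engine's centre of
# thrust toward the down-going (right-hand) side of its disc. Hypothetical numbers.

thrust_n = 4000.0        # thrust per operating engine, N
engine_arm_m = 2.5       # distance from fuselage centreline to each propeller hub, m
thrust_shift_m = 0.2     # lateral shift of the centre of thrust due to P-factor, m

left_arm = engine_arm_m - thrust_shift_m    # left engine: thrust line moves toward the fuselage
right_arm = engine_arm_m + thrust_shift_m   # right engine: thrust line moves away from the fuselage

print(f"Yawing moment if the RIGHT engine fails (left engine running): {thrust_n * left_arm:.0f} N*m")
print(f"Yawing moment if the LEFT engine fails (right engine running): {thrust_n * right_arm:.0f} N*m")
# The larger moment after a left-engine failure is why, for clockwise propellers,
# the left engine is the critical engine.
```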
Helicopters
P-factor is extremely significant for helicopters in forward flight, because the rotor disc is almost horizontal. The forward-going blade has a higher airspeed than the backward-going blade, so it produces more lift, known as dissymmetry of lift. Helicopters can control each blade's angle of attack independently (decreasing the angle of attack on the advancing blade, while increasing the angle of attack on the retreating blade) in order to keep the lift of the rotor disc balanced. If the blades of the rotor were unable to independently change their angle of attack, a helicopter with counterclockwise-rotating rotor blades would roll to the left when in forward flight, due to the increased lift on the side of the rotor disc with the advancing blade. Gyroscopic precession converts this into a backwards pitch known as "flap back".
The never-exceed speed (VNE) of a helicopter will be chosen in part to ensure that the backwards-moving blade does not stall.
See also
Blohm & Voss BV 141
Propeller walk
Dissymmetry of lift (in helicopters)
References
Aerodynamics
Aircraft manufacturing | P-factor | [
"Chemistry",
"Engineering"
] | 1,462 | [
"Aircraft manufacturing",
"Aerodynamics",
"Mechanical engineering by discipline",
"Aerospace engineering",
"Fluid dynamics"
] |
6,220,531 | https://en.wikipedia.org/wiki/Electronic%20packaging | Electronic packaging is the design and production of enclosures for electronic devices ranging from individual semiconductor devices up to complete systems such as a mainframe computer. Packaging of an electronic system must consider protection from mechanical damage, cooling, radio frequency noise emission and electrostatic discharge. Product safety standards may dictate particular features of a consumer product, for example, external case temperature or grounding of exposed metal parts. Prototypes and industrial equipment made in small quantities may use standardized commercially available enclosures such as card cages or prefabricated boxes. Mass-market consumer devices may have highly specialized packaging to increase consumer appeal. Electronic packaging is a major discipline within the field of mechanical engineering.
Design
Electronic packaging can be organized by levels:
Level 0 - "Chip", protecting a bare semiconductor die from contamination and damage.
Level 1 - Component, such as semiconductor package design and the packaging of other discrete components.
Level 2 - Etched wiring board (printed circuit board).
Level 3 - Assembly, one or more wiring boards and associated components.
Level 4 - Module, assemblies integrated in an overall enclosure.
Level 5 - System, a set of modules combined for some purpose.
The same electronic system may be packaged as a portable device or adapted for fixed mounting in an instrument rack or permanent installation. Packaging for aerospace, marine, or military systems imposes different types of design criteria.
Design and productisation of electronic packages is a multi-disciplinary field based on mechanical engineering principles such as dynamics, stress analysis, heat transfer and fluid mechanics, chemistry, materials science, process engineering, etc. High-reliability equipment often must survive drop tests, loose cargo vibration, secured cargo vibration, extreme temperatures, humidity, water immersion or spray, rain, sunlight (UV, IR and visible light), salt spray, explosive shock, and many more. These requirements extend beyond and interact with the electrical design.
An electronics assembly consists of component devices, circuit card assemblies (CCAs), connectors, cables and components such as transformers, power supplies, relays, switches, etc. that may not mount on the circuit card.
Many electrical products require the manufacturing of high-volume, low-cost parts such as enclosures or covers by techniques such as injection molding, die casting, investment casting, and so on. The design of these products depends on the production method and requires careful consideration of dimensions, tolerances, and tooling design. Some parts may be manufactured by specialized processes such as plaster- and sand-casting of metal enclosures.
In the design of electronic products, electronic packaging engineers perform analyses to estimate such things as maximum temperatures for components, structural resonant frequencies, and dynamic stresses and deflections under worst-case environments. Such knowledge is important to prevent immediate or premature electronic product failures.
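As a minimal illustration of the thermal side of such an analysis, the sketch below estimates a component's junction temperature from its power dissipation and a series thermal-resistance path; the function name and all numerical values are hypothetical assumptions, not data from this article.

```python
# Junction-temperature estimate for a single component using a series
# thermal-resistance network. All values are illustrative assumptions.

def junction_temperature(t_ambient_c, power_w, theta_jc_c_per_w, theta_ca_c_per_w):
    """Return the steady-state junction temperature in degrees C.

    theta_jc_c_per_w: junction-to-case thermal resistance (C/W)
    theta_ca_c_per_w: case-to-ambient thermal resistance (C/W), including any heat sink
    """
    return t_ambient_c + power_w * (theta_jc_c_per_w + theta_ca_c_per_w)

tj = junction_temperature(t_ambient_c=50.0, power_w=2.5,
                          theta_jc_c_per_w=1.5, theta_ca_c_per_w=18.0)
print(f"Estimated junction temperature: {tj:.1f} C")   # 50 + 2.5 * 19.5 = 98.8 C
```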
Design considerations
A designer must balance many objectives and practical considerations when selecting packaging methods.
Hazards to be protected against: mechanical damage, exposure to weather and dirt, electromagnetic interference, etc.
Heat dissipation requirements
Tradeoffs between tooling capital cost and per-unit cost
Tradeoffs between time to first delivery and production rate
Availability and capability of suppliers
User interface design and convenience
Ease of access to internal parts when required for maintenance
Product safety, and compliance with regulatory standards
Aesthetics, and other marketing considerations
Service life and reliability
Packaging materials
Sheet metal
Punched and formed sheet metal is one of the oldest types of electronic packaging. It can be mechanically strong, provides electromagnetic shielding when the product requires that feature, and is easily made for prototypes and small production runs with little custom tooling expense.
Cast metal
Gasketed metal castings are sometimes used to package electronic equipment for exceptionally severe environments, such as in heavy industry, aboard ship, or deep under water. Aluminum die castings are more common than iron or steel sand castings.
Machined metal
Electronic packages are sometimes made by machining solid blocks of metal, usually aluminum, into complex shapes. They are fairly common in microwave assemblies for aerospace use, where precision transmission lines require complex metal shapes, in combination with hermetically sealed housings. Quantities tend to be small; sometimes only one unit of a custom design is required. Piece part costs are high, but there is little or no cost for custom tooling, and first-piece deliveries can take as little as half a day. The tool of choice is a numerically controlled vertical milling machine, with automatic translation of computer-aided design (CAD) files to toolpath command files.
Molded plastic
Molded plastic cases and structural parts can be made by a variety of methods, offering tradeoffs in piece part cost, tooling cost, mechanical and electrical properties, and ease of assembly. Examples are injection molding, transfer molding, vacuum forming, and die cutting. Plastics can be post-processed to provide conductive surfaces.
Potting
Also called "encapsulation", potting consists of immersing the part or assembly in a liquid resin, then curing it. In another method, the part or assembly is placed in a mold, the potting compound is poured in, and after curing the mold is not removed but becomes part of the finished part or assembly. Potting can be done in a pre-molded potting shell, or directly in a mold. Today it is most widely used to protect semiconductor components from moisture and mechanical damage, and to serve as a mechanical structure holding the lead frame and the chip together. In earlier times it was often used to discourage reverse engineering of proprietary products built as printed circuit modules. It is also commonly used in high voltage products to allow live parts to be placed closer together (eliminating corona discharges due to the potting compound's high dielectric strength), so that the product can be smaller. This also excludes dirt and conductive contaminants (such as impure water) from sensitive areas. Another use is to protect deep-submergence items such as sonar transducers from collapsing under extreme pressure, by filling all voids. Potting can be rigid or soft. When void-free potting is required, it is common practice to place the product in a vacuum chamber while the resin is still liquid, hold a vacuum for several minutes to draw the air out of internal cavities and the resin itself, then release the vacuum. Atmospheric pressure collapses the voids and forces the liquid resin into all internal spaces. Vacuum potting works best with resins that cure by polymerization, rather than solvent evaporation.
Porosity sealing or impregnation
Porosity Sealing or Resin Impregnation is similar to potting, but doesn't use a shell or a mold. Parts are submerged in a polymerizable monomer or solvent-based low viscosity plastic solution. The pressure above the fluid is lowered to a full vacuum. After the vacuum is released, the fluid flows into the part. When the part is withdrawn from the resin bath, it is drained and/or cleaned and then cured. Curing can consist of polymerizing the internal resin or evaporating the solvent, which leaves an insulating dielectric material between different voltage components. Porosity sealing (Resin Impregnation) fills all interior spaces, and may or may not leave a thin coating on the surface, depending on the wash/rinse performance. The main application of vacuum impregnation porosity sealing is in boosting the dielectric strength of transformers, solenoids, lamination stacks or coils, and some high voltage components. It prevents ionization from forming between closely spaced live surfaces and initiating failure.
Liquid filling
Liquid filling is sometimes used as an alternative to potting or impregnation. It's usually a dielectric fluid, chosen for chemical compatibility with the other materials present. This method is used mostly in very large electrical equipment such as utility transformers, to increase breakdown voltage. It can also be used to improve heat transfer, especially if allowed to circulate by natural convection or forced convection through a heat exchanger. Liquid filling can be removed for repair much more easily than potting.
Conformal coating
Conformal coating is a thin insulating coating applied by various methods. It provides mechanical and chemical protection of delicate components. It's widely used on mass-produced items such as axial-lead resistors, and sometimes on printed circuit boards. It can be very economical, but somewhat difficult to achieve consistent process quality.
Glob-top
Glob-top is a variant of conformal coating used in chip-on-board assembly (COB). It consists of a drop of specially formulated epoxy or resin deposited over a semiconductor chip and its wire bonds, to provide mechanical support and exclude contaminants such as fingerprint residues which could disrupt circuit operation. It is most commonly used in electronic toys and low-end devices.
Chip on board
Surface-mounted LEDs are frequently sold in chip-on-board (COB) configurations. In these, the individual diodes are mounted in an array that allows the device to produce a greater amount of luminous flux with greater ability to dissipate the resulting heat in an overall smaller package than can be accomplished by mounting LEDs, even surface mount types, individually on a circuit board.
Hermetic metal/glass cases
Hermetic metal packaging began life in the vacuum tube industry, where a totally leak-proof housing was essential to operation. This industry developed the glass-seal electrical feedthrough, using alloys such as Kovar to match the coefficient of expansion of the sealing glass so as to minimize mechanical stress on the critical metal-glass bond as the tube warmed up. Some later tubes used metal cases and feedthroughs, and only the insulation around the individual feedthroughs used glass. Today, glass-seal packages are used mostly in critical components and assemblies for aerospace use, where leakage must be prevented even under extreme changes in temperature, pressure, and humidity.
Hermetic ceramic packages
Packages consisting of a lead frame embedded in a vitreous paste layer between flat ceramic top and bottom covers are more convenient than metal/glass packages for some products, but give equivalent performance. Examples are integrated circuit chips in ceramic Dual In-line Package form, or complex hybrid assemblies of chip components on a ceramic base plate. This type of packaging can also be divided into two main types: multilayer ceramic packages (like LTCC and HTCC) and pressed ceramic packages.
Printed circuit assemblies
Printed circuits are primarily a technology for connecting components together, but they also provide mechanical structure. In some products, such as computer accessory boards, they're all the structure there is. This makes them part of the universe of electronic packaging.
Reliability evaluation
A typical reliability qualification includes the following types of environmental stresses:
Burn-in
Temperature cycling
Thermal shock
Solderability
Autoclave
Visual inspection
Hermeticity/moisture resistance
Hygrothermal test
Hygrothermal testing is performed in chambers with controlled temperature and humidity. It is an environmental stress test used in evaluating product reliability. The typical hygrothermal test is run at 85 ˚C and 85% relative humidity (abbreviated 85˚C/85%RH). During the test, the sample is periodically taken out to test its mechanical or electrical properties. Research related to hygrothermal testing can be found in the references.
See also
Semiconductor package
Integrated circuit packaging
Packaging (disambiguation)
Packaging
References
Industrial design
Packaging
Packaging (microfabrication)
Chip carriers | Electronic packaging | [
"Materials_science",
"Engineering"
] | 2,328 | [
"Industrial design",
"Design engineering",
"Packaging (microfabrication)",
"Microtechnology",
"Design"
] |
6,221,608 | https://en.wikipedia.org/wiki/Hexaamminecobalt%28III%29%20chloride | Hexaamminecobalt(III) chloride is the chemical compound with the formula [Co(NH3)6]Cl3. It is the chloride salt of the coordination complex [Co(NH3)6]3+, which is considered an archetypal "Werner complex", named after the pioneer of coordination chemistry, Alfred Werner. The cation itself is a metal ammine complex with six ammonia ligands attached to the cobalt(III) ion.
Properties and structure
[Co(NH3)6]3+ is diamagnetic, with a low-spin 3d6 octahedral Co(III) center. The cation obeys the 18-electron rule and is considered to be a classic example of an exchange inert metal complex. As a manifestation of its inertness, [Co(NH3)6]Cl3 can be recrystallized unchanged from concentrated hydrochloric acid: the NH3 is so tightly bound to the Co(III) centers that it does not dissociate to allow its protonation. In contrast, labile metal ammine complexes, such as [Ni(NH3)6]Cl2, react rapidly with acids, reflecting the lability of the Ni(II)–NH3 bonds. Upon heating, hexamminecobalt(III) begins to lose some of its ammine ligands, eventually producing a stronger oxidant.
The chloride ions in [Co(NH3)6]Cl3 can be exchanged with a variety of other anions such as nitrate, bromide, iodide, sulfamate to afford the corresponding [Co(NH3)6]X3 derivative. Such salts are orange or bright yellow and display varying degrees of water solubility. The chloride ion can be also exchanged with more complex anions such as the hexathiocyanatochromate(III), yielding a pink compound with formula [Co(NH3)6] [Cr(SCN)6], or the ferricyanide ion.
Preparation
[Co(NH3)6]Cl3 is prepared by treating cobalt(II) chloride with ammonia and ammonium chloride followed by oxidation. Oxidants include hydrogen peroxide or oxygen in the presence of charcoal catalyst. This salt appears to have been first reported by Fremy.
The acetate salt can be prepared by aerobic oxidation of cobalt(II) acetate, ammonium acetate, and ammonia in methanol. The acetate salt is highly water-soluble to the level of 1.9 M (20 °C), versus 0.26 M for the trichloride.
Uses in the laboratory
[Co(NH3)6]3+ is a component of some structural biology methods (especially for DNA or RNA, where positive ions stabilize tertiary structure of the phosphate backbone), to help solve their structures by X-ray crystallography or by nuclear magnetic resonance. In the biological system, the counterions would more probably be Mg2+, but the heavy atoms of cobalt (or sometimes iridium, as in [Ir(NH3)6]3+) provide anomalous scattering to solve the phase problem and produce an electron-density map of the structure.
[Co(NH3)6]3+ is used to investigate DNA. The cation induces the transition of DNA structure from the classical B-form to the Z-form.
Related compounds
Tris(ethylenediamine)cobalt(III) chloride
References
Cobalt complexes
Cobalt(III) compounds
Inorganic compounds
Chlorides
Octahedral compounds
Ammine complexes | Hexaamminecobalt(III) chloride | [
"Chemistry"
] | 736 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
4,756,896 | https://en.wikipedia.org/wiki/Scanning%20helium%20ion%20microscope | A scanning helium ion microscope (SHIM, HeIM or HIM) is an imaging technology based on a scanning helium ion beam. Similar to other focused ion beam techniques, it allows to combine milling and cutting of samples with their observation at sub-nanometer resolution.
In terms of imaging, SHIM has several advantages over the traditional scanning electron microscope (SEM). Owing to the very high source brightness and the short de Broglie wavelength of the helium ions, which is inversely proportional to their momentum, it is possible to obtain qualitative data not achievable with conventional microscopes which use photons or electrons as the emitting source. As the helium ion beam interacts with the sample, it does not suffer from a large excitation volume, and hence provides sharp images with a large depth of field on a wide range of materials. Compared to a SEM, the secondary electron yield is quite high, allowing for imaging with currents as low as 1 femtoamp. The detectors provide information-rich images which reveal topographic, material, crystallographic, and electrical properties of the sample. In contrast to other ion beams, there is no discernible sample damage due to the relatively light mass of the helium ion. The drawback is the cost.
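To make the wavelength comparison concrete, the sketch below estimates the non-relativistic de Broglie wavelength of a singly charged helium ion and of an electron at an assumed 30 kV accelerating voltage; the beam energy and ion mass are illustrative assumptions rather than figures from this article.

```python
import math

# Non-relativistic de Broglie wavelength: lambda = h / sqrt(2 * m * q * V)

H = 6.62607015e-34              # Planck constant, J*s
Q = 1.602176634e-19             # elementary charge, C
M_ELECTRON = 9.1093837015e-31   # electron mass, kg
M_HELIUM_ION = 6.646e-27        # approximate He-4 ion mass, kg

def de_broglie_wavelength(mass_kg, accel_voltage_v):
    momentum = math.sqrt(2.0 * mass_kg * Q * accel_voltage_v)
    return H / momentum

voltage = 30e3   # assumed 30 kV accelerating voltage
print(f"electron:   {de_broglie_wavelength(M_ELECTRON, voltage) * 1e12:.2f} pm")
print(f"helium ion: {de_broglie_wavelength(M_HELIUM_ION, voltage) * 1e12:.4f} pm")
# The helium ion's wavelength comes out roughly two orders of magnitude shorter
# than the electron's at the same accelerating voltage.
```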
SHIMs have been commercially available since 2007, and a surface resolution of 0.24 nanometers has been demonstrated.
References
External links
Carl Zeiss SMT – Nano Technology Systems Division: ORION He-Ion microscope
Microscopy Today, Volume 14, Number 04, July 2006: An Introduction to the Helium Ion Microscope
How New Helium Ion Microscope Measures Up – ScienceDaily
Microscopes
Helium | Scanning helium ion microscope | [
"Chemistry",
"Technology",
"Engineering"
] | 337 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
4,759,143 | https://en.wikipedia.org/wiki/Quantum%20acoustics | In physics, quantum acoustics is the study of sound under conditions such that quantum mechanical effects are relevant. For most applications, classical mechanics are sufficient to accurately describe the physics of sound. However very high frequency sounds, or sounds made at very low temperatures may be subject to quantum effects.
Quantum acoustics can also refer to attempts within the scientific community to couple superconducting qubits to acoustic waves. One particularly successful method involves coupling a superconducting qubit with a Surface Acoustic Wave (SAW) Resonator and placing these components on different substrates to achieve a higher signal-to-noise ratio as well as controlling the coupling strength of the components. This allows quantum experiments to verify that the phonons within the SAW Resonator are in quantum Fock states by using Quantum tomography. Similar attempts have been made by using bulk acoustic resonators. One consequence of these developments is that it is possible to explore the properties of atoms with a much larger size than found conventionally by modelling them using a superconducting qubit coupled with a SAW Resonator.
Most recently, quantum acoustics has been used as a term to describe the coherent state limit of lattice vibrations, in analogy to quantum optics.
See also
Superfluid
Phonon
References
External links
Handbook of Acoustics by Malcolm Crocker has a chapter on quantum acoustics.
Quantum Computer Music Foundations, Methods and Advanced Concepts by Eduardo Reck Miranda
Condensed matter physics | Quantum acoustics | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 293 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
4,761,306 | https://en.wikipedia.org/wiki/Sulfur%20tetrafluoride | Sulfur tetrafluoride is a chemical compound with the formula SF4. It is a colorless corrosive gas that releases dangerous hydrogen fluoride gas upon exposure to water or moisture. Sulfur tetrafluoride is a useful reagent for the preparation of organofluorine compounds, some of which are important in the pharmaceutical and specialty chemical industries.
Structure
Sulfur in SF4 is in the +4 oxidation state, with one lone pair of electrons. The atoms in SF4 are arranged in a see-saw shape, with the sulfur atom at the center. One of the three equatorial positions is occupied by a nonbonding lone pair of electrons. Consequently, the molecule has two distinct types of F ligands, two axial and two equatorial. The relevant bond distances are S–Fax = 164.3 pm and S–Feq = 154.2 pm. It is typical for the axial ligands in hypervalent molecules to be bonded less strongly.
The 19F NMR spectrum of SF4 reveals only one signal, which indicates that the axial and equatorial F atom positions rapidly interconvert via pseudorotation.
Synthesis and manufacture
At the laboratory scale, sulfur tetrafluoride is prepared from elemental sulfur and cobaltic fluoride
S + 4CoF3 → SF4 + 4CoF2
SF4 is industrially produced by the reaction of SCl2 and NaF with acetonitrile as a catalyst
3 SCl2 + 4 NaF → SF4 + S2Cl2 + 4 NaCl
At higher temperatures (e.g. 225–450 °C), the solvent is superfluous. Moreover, sulfur dichloride may be replaced by elemental sulfur (S) and chlorine (Cl2).
A low-temperature (e.g. 20–86 °C) alternative to the chlorinative process above uses liquid bromine (Br2) as oxidant and solvent:
S(s) + 2 Br2(l; excess) + 4KF(s) → SF4↑ + 4 KBr(brom)
Use in synthesis of organofluorine compounds
In organic synthesis, SF4 is used to convert C–OH and C=O groups into C–F and CF2 groups, respectively. The efficiency of these conversions is highly variable.
In the laboratory, the use of SF4 has been superseded by the safer and more easily handled diethylaminosulfur trifluoride, (C2H5)2NSF3 ("DAST"), which is prepared from SF4.
Other reactions
Sulfur chloride pentafluoride (SF5Cl), a useful source of the SF5 group, is prepared from SF4.
Hydrolysis of SF4 gives sulfur dioxide:
SF4 + 2 H2O → SO2 + 4 HF
This reaction proceeds via the intermediacy of thionyl fluoride, which usually does not interfere with the use of SF4 as a reagent.
When amines are treated with SF4 and a base, aminosulfur difluorides result.
Toxicity
SF4 reacts inside the lungs with moisture, forming sulfur dioxide and hydrogen fluoride, which forms highly toxic and corrosive hydrofluoric acid.
References
Sulfur fluorides
Fluorinating agents
Hypervalent molecules
"Physics",
"Chemistry"
] | 685 | [
"Molecules",
"Fluorinating agents",
"Hypervalent molecules",
"Reagents for organic chemistry",
"Matter"
] |
19,625,172 | https://en.wikipedia.org/wiki/Ocean%20surface%20topography | Ocean surface topography or sea surface topography, also called ocean dynamic topography, are highs and lows on the ocean surface, similar to the hills and valleys of Earth's land surface depicted on a topographic map.
These variations are expressed in terms of average sea surface height (SSH) relative to Earth's geoid. The main purpose of measuring ocean surface topography is to understand the large-scale ocean circulation.
Time variations
Unaveraged or instantaneous sea surface height (SSH) is most obviously affected by the tidal forces of the Moon and by the seasonal cycle of the Sun acting on Earth. Over timescales longer than a year, the patterns in SSH can be influenced by ocean circulation. Typically, SSH anomalies resulting from these forces differ from the mean by less than ± at the global scale. Other influences include changing interannual patterns of temperature, salinity, waves, tides and winds. Ocean surface topography can be measured with high accuracy and precision at regional to global scale by satellite altimetry (e.g. TOPEX/Poseidon).
Slower and larger variations are due to changes in Earth's gravitational field (geoid) due to melting ice, rearrangement of continents, formation of sea mounts and other redistribution of rock. The combination of satellite gravimetry (e.g. GRACE and GRACE-FO) with altimetry can be used to determine sea level rise and properties such as ocean heat content.
Applications
Ocean surface topography is used to map ocean currents, which move around the ocean's "hills" and "valleys" in predictable ways. A clockwise sense of rotation is found around "hills" in the northern hemisphere and "valleys" in the southern hemisphere. This is because of the Coriolis effect. Conversely, a counterclockwise sense of rotation is found around "valleys" in the northern hemisphere and "hills" in the southern hemisphere.
Ocean surface topography is also used to understand how the ocean moves heat around the globe, a critical component of Earth's climate, and for monitoring changes in global sea level.
The collected data provide long-term information about the ocean and its currents. According to NASA, these data can also be used to improve understanding of weather, climate, navigation, fisheries management, and offshore operations. The observations are used to study the ocean's tides, circulation, and the amount of heat the ocean contains, and can help predict short- and long-term changes in weather and in Earth's climate.
Measurement
The sea surface height (SSH) is calculated by altimetry satellites using the reference ellipsoid as a reference surface. The altimeter determines the distance from the satellite to the sea surface by measuring the satellite-to-surface round-trip time of a radar (microwave) pulse. The satellite's altitude above the reference ellipsoid is calculated separately, using the orbital parameters of the satellite and various positioning instruments. However, the ellipsoid is not an equipotential surface of the Earth's gravity field, so the measurements must also be referenced to a surface that represents the water flow, in this case the geoid; the transformations between geometric heights (ellipsoid) and orthometric heights (geoid) are performed using a geoidal model. The sea surface height is then the difference between the satellite's altitude relative to the reference ellipsoid and the altimeter range.
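A minimal numerical sketch of these steps is shown below: sea surface height is formed from the satellite altitude and the altimeter range, referenced to the geoid to give dynamic topography, and (as an additional, hypothetical step) converted into a geostrophic surface current from an assumed along-track SSH slope. All numbers are illustrative assumptions, not mission data.

```python
import math

# Step 1: sea surface height relative to the reference ellipsoid.
satellite_altitude_m = 1_336_000.231   # satellite height above the ellipsoid (illustrative)
altimeter_range_m = 1_335_978.412      # radar range to the sea surface (illustrative)
ssh_m = satellite_altitude_m - altimeter_range_m

# Step 2: ocean dynamic topography, referenced to the geoid.
geoid_height_m = 21.574                # geoid height above the ellipsoid here (illustrative)
dynamic_topography_m = ssh_m - geoid_height_m

# Step 3 (hypothetical extra step): geostrophic surface current from an SSH slope,
# speed = (g / f) * d(eta)/dx, with f the Coriolis parameter.
g = 9.81
latitude_deg = 35.0
f = 2.0 * 7.2921e-5 * math.sin(math.radians(latitude_deg))
ssh_slope = 0.5 / 100_000.0            # 0.5 m of height change over 100 km (illustrative)
geostrophic_speed = g / f * ssh_slope

print(f"SSH above ellipsoid:  {ssh_m:.3f} m")
print(f"Dynamic topography:   {dynamic_topography_m:.3f} m")
print(f"Geostrophic current:  {geostrophic_speed:.2f} m/s")
```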
Satellite missions
Currently there are nine different satellites measuring ocean surface topography: CryoSat-2, SARAL, Jason-3, Sentinel-3A and Sentinel-3B, CFOSat, HY-2B and HY-2C, and Sentinel-6 Michael Freilich (also called Jason-CS A). Jason-3 and Sentinel-6 Michael Freilich are currently orbiting Earth in tandem, approximately 330 kilometers apart.
Ocean surface topography can be derived from ship-going measurements of temperature and salinity at depth. However, since 1992, a series of satellite altimetry missions, beginning with TOPEX/Poseidon and continued with Jason-1, Ocean Surface Topography Mission on the Jason-2 satellite, Jason-3 and now Sentinel-6 Michael Freilich have measured sea surface height directly. By combining these measurements with gravity measurements from NASA's Grace and ESA's GOCE missions, scientists can determine sea surface topography to within a few centimeters.
Jason-1 was launched by a Boeing Delta II rocket in California in 2001 and continued measurements initially collected by TOPEX/Poseidon satellite, which orbited from 1992 up until 2006. NASA and CNES, the French space agency, are joint partners in this mission.
The main objectives of the Jason satellites are to collect data on the average ocean circulation around the globe, in order to better understand its interaction with the time-varying components of the circulation and the mechanisms involved, and to initialize ocean models. The missions monitor low-frequency ocean variability and observe seasonal cycles and inter-annual variations such as El Niño and La Niña, the North Atlantic oscillation, the Pacific decadal oscillation, and planetary waves crossing the oceans over periods of months; the precise altimetric observations allow these signals to be modeled over long periods of time. The missions also contribute to observations of mesoscale ocean variability, which affects all the oceans and is especially intense near western boundary currents, and they monitor the average sea level, an important indicator of global warming. Tide modeling is improved by observing long-period components such as coastal interactions, internal waves, and tidal energy dissipation. Finally, the satellite data support marine meteorology, the scientific study of the atmosphere over the sea.
Jason-2 was launched on June 20, 2008, by a Delta-2 rocket out of the California site in Vandenberg and terminated its mission on October 10, 2019. Jason-3 was launched on January 16, 2016 by a Falcon-9 SpaceX rocket from Vandenberg, as well as Sentinel-6 Michael Freilich, launched on November 21, 2020.
The long-term objectives of the Jason satellite series are to provide global descriptions of the seasonal and yearly changes of the circulation and heat storage in the ocean. This includes the study of short-term climatic changes such as El Niño and La Niña, measuring the global mean sea level and recording its fluctuations, and detecting the slow change of upper-ocean circulation on decadal time scales. The missions also study the transport of heat and carbon in the ocean, examine the main components that drive deep-water tides, improve wind speed and wave height measurements both in near-real time and for long-term studies, and improve knowledge of the marine geoid. For the first seven months after Jason-2 entered service it was flown in very close proximity to Jason-1; only about one minute apart, the two satellites observed the same area of the ocean. The purpose of this formation was cross-calibration, to quantify any bias between the two altimeters. The multi-month observation showed no bias between the two data sets, which were consistent with each other.
A new satellite mission called the Surface Water Ocean Topography Mission has been proposed to make the first global survey of the topography of all of Earth's surface water: the ocean, lakes and rivers. This mission aims to provide a comprehensive view of Earth's freshwater bodies from space and much more detailed measurements of the ocean surface than ever before.
See also
Dynamic topography
Eddy (fluid dynamics)
SARAL
Sea surface microlayer
References
External links
Ocean Surface Topography from Space
OSTM Instrument Description
Climatology
Physical oceanography
Geodesy
Vertical position
"Physics",
"Mathematics"
] | 1,727 | [
"Vertical position",
"Applied and interdisciplinary physics",
"Physical quantities",
"Distance",
"Applied mathematics",
"Physical oceanography",
"Geodesy"
] |
19,625,403 | https://en.wikipedia.org/wiki/Residual%20property%20%28physics%29 | In thermodynamics a residual property is defined as the difference between a real fluid property and an ideal gas property, both considered at the same density, temperature, and composition, typically expressed as
where is some thermodynamic property at given temperature, volume and mole numbers, is value of the property for an ideal gas, and is the residual property. The reference state is typically incorporated into the ideal gas contribution to the value, as
where is the value of at the reference state (commonly pure, ideal gas species at 1 bar), and is the departure of the property for an ideal gas at from this reference state.
Residual properties should not be confused with excess properties, which are defined as the deviation of a thermodynamic property from some reference system that is typically not an ideal gas system. Whereas excess properties and excess models (also known as activity coefficient models) typically concern themselves with strictly liquid-phase systems, such as melts, polymer blends or electrolytes, residual properties are intimately linked to equations of state which are commonly used to model systems in which vapour-liquid equilibria are prevalent, or systems where both gases and liquids are of interest. For some applications, activity coefficient models and equations of state are combined in what are known as "γ-φ models" (read: Gamma-Phi), referring to the symbols commonly used to denote activity coefficients and fugacities.
Significance
In the development and implementation of Equations of State, the concept of residual properties is valuable, as it allows one to separate the behaviour of a fluid that stems from non-ideality from that stemming from the properties of an ideal gas. For example, the isochoric heat capacity is given by

C_V(T, V, n) = C_V^ig(T, n) + C_V^res(T, V, n),

where the ideal gas heat capacity, C_V^ig, can be measured experimentally by measuring the heat capacity at very low pressure. After measurement it is typically represented using a polynomial fit such as the Shomate equation. The residual heat capacity is given by

C_V^res(T, V, n) = −T (∂²A^res/∂T²)_{V, n},

and the accuracy of a given equation of state in predicting or correlating the heat capacity can be assessed by regarding only the residual contribution, as the ideal contribution is independent of the equation of state.
In Equilibrium Calculations
In fluid phase equilibria (i.e. liquid-vapour or liquid-liquid equilibria), the notion of the fugacity coefficient is crucial, as it can be shown that, for a system consisting of phases α, β, γ, ..., the condition for chemical equilibrium is

x_i^α φ_i^α = x_i^β φ_i^β = x_i^γ φ_i^γ = ...

for all species i, where x_i^α denotes the mole fraction of species i in phase α, and φ_i^α is the fugacity coefficient of species i in phase α. The fugacity coefficient, being defined by

φ_i = f_i / (x_i p),

is directly related to the residual chemical potential, as

RT ln φ_i = μ_i^res(T, p, n),

thus, because μ_i = (∂A/∂n_i)_{T, V, n_j}, we can see that an accurate description of the residual Helmholtz energy, rather than the total Helmholtz energy, is the key to accurately computing the equilibrium state of a system.
Residual Entropy Scaling
The residual entropy of a fluid has some special significance. In 1976, Yasha Rosenfeld published a landmark paper, showing that the transport coefficients of pure liquids, when expressed as functions of the residual entropy, can be treated as monovariate functions, rather than as functions of two variables (i.e. temperature and pressure, or temperature and density). This discovery lead to the concept of residual entropy scaling, which has spurred a large amount of research, up until the modern day, in which various approaches for modelling transport coefficients as functions of the residual entropy have been explored. Residual entropy scaling is still very much an area of active research.
Dependence on variable set
While any real state variable X, evaluated in a real state, is independent of whether one uses (T, p, n) or (T, V, n) as the variable set, one should be aware that the residual property is in general dependent on the variable set, i.e.

X^res(T, p, n) ≠ X^res(T, V, n).

This arises from the fact that the real state is in general not a valid ideal gas state, such that the ideal part of the property will be different depending on the variable set. Take for example the chemical potential of a pure fluid in a state (T, p, V) that does not satisfy the ideal gas law, but may be a real state for some real fluid. The ideal gas chemical potential computed as a function of temperature, pressure and mole number is

μ^ig(T, p, n) = μ° + RT ln(p / p°),

while computing it as a function of concentration (c = n / V), we have

μ^ig(T, V, n) = μ° + RT ln(nRT / (V p°)),

such that

μ^ig(T, p, n) − μ^ig(T, V, n) = RT ln(pV / (nRT)) = RT ln z,

where we have used pV = znRT, and z denotes the compressibility factor. This leads to the result

μ^res(T, V, n) = μ^res(T, p, n) + RT ln z.
Practical Calculation
In practice, the most significant residual property is the residual Helmholtz energy. The reason for this is that other residual properties can be computed from the residual Helmholtz energy as various derivatives (see: Maxwell relations). We note that
(∂A^res/∂V)_{T, n} = −(p − p^ig) = −(p − nRT/V),

such that

A^res(T, V2, n) − A^res(T, V1, n) = −∫_{V1}^{V2} (p − nRT/V') dV'.

Further, because any fluid reduces to an ideal gas in the limit of infinite volume,

A^res(T, V → ∞, n) = 0.

Thus, for any Equation of State that is explicit in pressure, such as the van der Waals Equation of State, we may compute

A^res(T, V, n) = −∫_∞^V (p(T, V', n) − nRT/V') dV'.
However, in modern approaches to developing Equations of State, such as SAFT, it is found that it can be simpler to develop the equation of state by directly developing an equation for the residual Helmholtz energy, A^res(T, V, n), rather than developing an equation that is explicit in pressure.
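The sketch below applies this procedure to a pure van der Waals fluid, evaluating the integral above numerically and then forming the pure-fluid fugacity coefficient from the residual Helmholtz energy via the standard relation ln φ = A^res/(nRT) + (z − 1) − ln z. The critical constants are rough textbook values for CO2 and the state point is an arbitrary illustration; none of the numbers come from this article.

```python
import numpy as np
from scipy.integrate import quad

# Residual Helmholtz energy of a pure van der Waals fluid by direct quadrature:
#   A_res(T, V, n) = -∫_∞^V (p - nRT/V') dV'  =  +∫_V^∞ (p - nRT/V') dV'
# Rough CO2-like critical constants, used only as an illustration.

R = 8.314462618            # J/(mol K)
Tc, pc = 304.13, 7.3773e6  # critical temperature (K) and pressure (Pa)
a = 27.0 * R**2 * Tc**2 / (64.0 * pc)   # van der Waals attraction parameter
b = R * Tc / (8.0 * pc)                 # van der Waals co-volume

def p_vdw(T, V, n=1.0):
    """van der Waals pressure (Pa) of n moles in volume V (m^3)."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

def a_res(T, V, n=1.0):
    """Residual Helmholtz energy (J) at the state (T, V, n)."""
    integrand = lambda Vp: p_vdw(T, Vp, n) - n * R * T / Vp
    value, _ = quad(integrand, V, np.inf)
    return value

T, V, n = 350.0, 1.0e-3, 1.0                # 350 K, one litre, one mole (illustrative)
Ares = a_res(T, V, n)
z = p_vdw(T, V, n) * V / (n * R * T)         # compressibility factor
ln_phi = Ares / (n * R * T) + (z - 1.0) - np.log(z)   # pure-fluid fugacity coefficient
print(f"A_res = {Ares:.1f} J, z = {z:.3f}, phi = {np.exp(ln_phi):.3f}")
```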
Correlated terms
Departure function
References
J. M. Smith, H.C.Van Ness, M. M. Abbot Introduction to Chemical Engineering Thermodynamics 2000, McGraw-Hill 6TH edition
Robert Perry, Don W. Green Perry's Chemical Engineers' Handbook 2007 McGraw-Hill 8TH edition
Thermodynamic properties | Residual property (physics) | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,108 | [
"Thermodynamics stubs",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Physical chemistry stubs"
] |
19,631,836 | https://en.wikipedia.org/wiki/Drag%20reducing%20agent | Drag-reducing agents (DRA) or drag-reducing polymers (DRP) are additives in pipelines that reduce turbulence in a pipe. Usually used in petroleum pipelines, they increase the pipeline capacity by reducing turbulency and increasing laminar flow.
Description
Drag reducing agents can be broadly classified under the following four categories – Polymers, Solid-particle suspensions, Biological additives, and Surfactants. These agents are made out of high molecular weight polymers or micellar systems. The polymers help with drag reduction by decreasing turbulence in the oil lines. This allows for oil to be pumped through at lower pressures, saving energy and money. Although these drag reducing agents are mostly used in oil lines, there is research being done to see how helpful polymers could be in reducing drag in veins and arteries.
How it works
Using just a few parts per million of the drag reducer helps to reduce the turbulence inside the pipe. As the oil pushes against the inside wall of the pipe, the wall pushes back on the oil, causing swirling turbulence that creates a drag force. When the polymer is added, it interacts with the oil and the wall to help reduce the contact of the oil with the wall.
Degradation can occur on the polymers during the flow. Because of the pressure and temperature on the polymers, it is easier to break them down. Because of this, the drag reducing agent is re-injected after points like pumps and turns, where the pressure and temperature can be extra high. To safeguard against degradation at high temperature, a different class of drag reducing agents are at times used, namely, surfactants. Surfactant is a very convenient contraction of the term Surface-active agent. It connotes an organic molecule or an unformulated compound having surface-active properties. All three classes of surfactants, namely, anionic, cationic and nonionic surfactants, have been successfully tried as drag-reducing agents.
Knowing what will create the ideal drag reducer is key in this process. Ideal molecules have a high molecular weight, shear degradation resistance, are quick to dissolve in whatever is in the pipe, and have low degradation in heat, light, chemicals, and biological areas.
With drag reduction, there are many factors which play a role in how well the drag is reduced. A main factor in this is temperature. With a higher temperature, the drag reducing agent is easier to degrade. At a low temperature the drag reducing agent will tend to cluster together. This problem can be solved easier than degradation though, by adding another chemical, such as aluminum to help lower the drag reducing agent's inter-molecular attraction.
Other factors are the pipe diameter, inside roughness, and pressure. Drag is higher in smaller-diameter pipe. The rougher the inside surface of the pipe, the higher the drag, or friction; the friction factor depends on the Reynolds number of the flow and on the relative roughness of the pipe. Increasing the pressure will increase flow, but is limited by the maximum pressure rating of the pipe.
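The arithmetic behind these statements can be sketched with the Darcy–Weisbach equation. The example below uses the Blasius smooth-pipe correlation for the turbulent friction factor and simply scales it by a nominal percentage drag reduction; the pipe dimensions, fluid properties and 30% figure are illustrative assumptions, not values from this article.

```python
import math

# Darcy-Weisbach pressure drop for turbulent pipe flow, with and without a
# drag-reducing agent. All numbers are illustrative assumptions.

def pressure_drop_pa(flow_m3_s, diameter_m, length_m, density, viscosity, drag_reduction=0.0):
    area = math.pi * diameter_m ** 2 / 4.0
    velocity = flow_m3_s / area
    reynolds = density * velocity * diameter_m / viscosity
    friction = 0.3164 / reynolds ** 0.25      # Blasius correlation, smooth pipe, turbulent flow
    friction *= (1.0 - drag_reduction)        # e.g. 0.30 for a nominal 30 % drag reduction
    return friction * (length_m / diameter_m) * 0.5 * density * velocity ** 2

# Crude-oil-like fluid in a 0.5 m pipe, 50 km long, at 0.3 m^3/s (assumed values).
untreated = pressure_drop_pa(0.3, 0.5, 50_000, density=850.0, viscosity=5e-3)
treated = pressure_drop_pa(0.3, 0.5, 50_000, density=850.0, viscosity=5e-3, drag_reduction=0.30)
print(f"untreated: {untreated / 1e5:.1f} bar, with DRA: {treated / 1e5:.1f} bar")
```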
Areas of use
Drag reducing agents have been found useful in reducing turbulence in the shipbuilding industry, for fire-fighting operations, oil-well fracturing processes, in irrigation systems and in central heating devices. Drag reducers can work in a couple of different fields. The most popular are crude oil, refined products and non-potable water. Currently there are several studies with ongoing tests in rats looking to see if drag reducers can help with blood flow.
History
The earliest works that recorded a decrease in pressure drop during turbulent flow were undertaken in the thirties and concerned the transportation of paper pulp. This was, however, not explicitly referred to as a drag reduction phenomenon. B. A. Toms was the first to recognize the tremendous reduction in wall shear stress caused by the addition of small amount of linear macromolecules to a turbulent flowing fluid. This effect is known as the Toms effect. An extensive bibliography of the first 25 years of drag reduction by polymer additives literature identified over 270 references.
Drag reducers were introduced into the market in the early 1970s by Conoco Inc. (now known as LiquidPower Specialty Products Inc. (LSPI), a Berkshire Hathaway company). Its use has allowed pipeline systems to greatly increase in traditional capacity and extended the life of existing systems. The higher flow rates possible on long pipelines have also increased the potential for surge on older systems not previously designed for high velocities.
Both proprietary (such as Conoco T-83) and non-proprietary (such as poly-isobutylene) drag reduction additives have been evaluated by the U.S. Army Mobility Equipment Research and Development Center for enhancement of military petroleum pipeline systems.
References
Drag (physics)
Piping
Petroleum technology | Drag reducing agent | [
"Chemistry",
"Engineering"
] | 972 | [
"Drag (physics)",
"Building engineering",
"Chemical engineering",
"Petroleum engineering",
"Petroleum technology",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
23,649,157 | https://en.wikipedia.org/wiki/Cell%20casting | Cell casting is a method used for creating poly(methyl methacrylate) (PMMA) sheets. Liquid monomer is poured between two flat sheets of toughened glass sealed with a rubber gasket and heated for polymerization. Because the glass sheets may contain surface scratches or sag during the process, this traditional method has some disadvantages: among other problems, the PMMA sheets may contain variations in thickness and surface defects. For many applications it has since been replaced by other methods for making PMMA such as extrusion, which gives uniform surface features. However, for applications where strength is critical cell casting techniques are still employed in conjunction with stretching, which produces a stronger overall material.
"Cell Casting - A process in which a casting liquid is poured between two plates, usually glass, that have a gasket between them to form a cell to contain the casting liquid; then the resin solidifies, usually through polymerization or crosslinking." - A. Brent Strong
References
Monomers | Cell casting | [
"Chemistry",
"Materials_science"
] | 206 | [
"Monomers",
"Polymer chemistry"
] |
23,649,871 | https://en.wikipedia.org/wiki/Topological%20insulator | A topological insulator is a material whose interior behaves as an electrical insulator while its surface behaves as an electrical conductor, meaning that electrons can only move along the surface of the material.
A topological insulator is an insulator for the same reason a "trivial" (ordinary) insulator is: there exists an energy gap between the valence and conduction bands of the material. But in a topological insulator, these bands are, in an informal sense, "twisted", relative to a trivial insulator. The topological insulator cannot be continuously transformed into a trivial one without untwisting the bands, which closes the band gap and creates a conducting state. Thus, due to the continuity of the underlying field, the border of a topological insulator with a trivial insulator (including vacuum, which is topologically trivial) is forced to support a conducting state.
Since this results from a global property of the topological insulator's band structure, local (symmetry-preserving) perturbations cannot damage this surface state. This is unique to topological insulators: while ordinary insulators can also support conductive surface states, only the surface states of topological insulators have this robustness property.
This leads to a more formal definition of a topological insulator: an insulator which cannot be adiabatically transformed into an ordinary insulator without passing through an intermediate conducting state. In other words, topological insulators and trivial insulators are separate regions in the phase diagram, connected only by conducting phases. In this way, topological insulators provide an example of a state of matter not described by the Landau symmetry-breaking theory that defines ordinary states of matter.
The properties of topological insulators and their surface states are highly dependent on both the dimension of the material and its underlying symmetries, and can be classified using the so-called periodic table of topological insulators. Some combinations of dimension and symmetries forbid topological insulators completely. All topological insulators have at least U(1) symmetry from particle number conservation, and often have time-reversal symmetry from the absence of a magnetic field. In this way, topological insulators are an example of symmetry-protected topological order. So-called "topological invariants", taking values in Z or Z2, allow classification of insulators as trivial or topological, and can be computed by various methods.
The surface states of topological insulators can have exotic properties. For example, in time-reversal symmetric 3D topological insulators, surface states have their spin locked at a right-angle to their momentum (spin-momentum locking). At a given energy the only other available electronic states have different spin, so "U"-turn scattering is strongly suppressed and conduction on the surface is highly metallic.
Despite their origin in quantum mechanical systems, analogues of topological insulators can also be found in classical media. There exist photonic, magnetic, and acoustic topological insulators, among others.
Prediction
The first models of 3D topological insulators were proposed by B. A. Volkov and O. A. Pankratov in 1985, and subsequently by Pankratov, S. V. Pakhomov, and Volkov in 1987. Gapless 2D Dirac states were shown to exist at the band inversion contact in PbTe/SnTe and HgTe/CdTe heterostructures.
Existence of interface Dirac states in HgTe/CdTe was experimentally verified by Laurens W. Molenkamp's group in 2D topological insulators in 2007.
Later sets of theoretical models for the 2D topological insulator (also known as the quantum spin Hall insulators) were proposed by Charles L. Kane and Eugene J. Mele in 2005, and also by B. Andrei Bernevig and Shoucheng Zhang in 2006. The topological invariant was constructed and the importance of the time reversal symmetry was clarified in the work by Kane and Mele. Subsequently, Bernevig, Taylor L. Hughes and Zhang made a theoretical prediction that 2D topological insulator with one-dimensional (1D) helical edge states would be realized in quantum wells (very thin layers) of mercury telluride sandwiched between cadmium telluride. The transport due to 1D helical edge states was indeed observed in the experiments by Molenkamp's group in 2007.
Although the topological classification and the importance of time-reversal symmetry was pointed in the 2000s, all the necessary ingredients and physics of topological insulators were already understood in the works from the 1980s.
In 2007, it was predicted that 3D topological insulators might be found in binary compounds involving bismuth, and in particular "strong topological insulators" exist that cannot be reduced to multiple copies of the quantum spin Hall state.
Experimental realization
2D topological insulators were first realized in 2007, in a system containing HgTe quantum wells sandwiched between cadmium telluride.
The first 3D topological insulator to be realized experimentally was Bi1−xSbx. Bismuth, in its pure state, is a semimetal with a small electronic band gap. Using angle-resolved photoemission spectroscopy and many other measurements, it was observed that the Bi1−xSbx alloy exhibits an odd number of surface state (SS) crossings between any pair of Kramers points, while the bulk features massive Dirac fermions. Additionally, bulk Bi1−xSbx has been predicted to have 3D Dirac particles. This prediction is of particular interest due to the observation of charge quantum Hall fractionalization in 2D graphene and pure bismuth.
Shortly thereafter symmetry-protected surface states were also observed in pure antimony, bismuth selenide, bismuth telluride and antimony telluride using angle-resolved photoemission spectroscopy (ARPES). Many semiconductors within the large family of Heusler materials are now believed to exhibit topological surface states. In some of these materials, the Fermi level actually falls in either the conduction or valence bands due to naturally-occurring defects, and must be pushed into the bulk gap by doping or gating. The surface states of a 3D topological insulator are a new type of two-dimensional electron gas (2DEG) in which the electron's spin is locked to its linear momentum.
Fully bulk-insulating or intrinsic 3D topological insulator states exist in Bi-based materials, as demonstrated in surface transport measurements. A new Bi-based chalcogenide (Bi1.1Sb0.9Te2S) with slight Sn doping exhibits intrinsic semiconducting behavior, with the Fermi energy and Dirac point lying in the bulk gap, and its surface states were probed by charge transport experiments.
It was proposed in 2008 and 2009 that topological insulators are best understood not as surface conductors per se, but as bulk 3D magnetoelectrics with a quantized magnetoelectric effect. This can be revealed by placing topological insulators in a magnetic field. The effect can be described in language similar to that of the hypothetical axion particle of particle physics. The effect was reported by researchers at Johns Hopkins University and Rutgers University using THz spectroscopy, who showed that the Faraday rotation was quantized by the fine structure constant.
In 2012, topological Kondo insulators were identified in samarium hexaboride, which is a bulk insulator at low temperatures.
In 2014, it was shown that magnetic components, like the ones in spin-torque computer memory, can be manipulated by topological insulators. The effect is related to metal–insulator transitions (Bose–Hubbard model).
Floquet topological insulators
Topological insulators are challenging to synthesize, and limited in topological phases accessible with solid-state materials. This has motivated the search for topological phases on the systems that simulate the same principles underlying topological insulators. Discrete time quantum walks (DTQW) have been proposed for making Floquet topological insulators (FTI). This periodically driven system simulates an effective (Floquet) Hamiltonian that is topologically nontrivial. This system replicates the effective Hamiltonians from all universal classes of 1- to 3-D topological insulators. Interestingly, topological properties of Floquet topological insulators could be controlled via an external periodic drive rather than an external magnetic field. An atomic lattice empowered by distance selective Rydberg interaction could simulate different classes of FTI over a couple of hundred sites and steps in 1, 2 or 3 dimensions. The long-range interaction allows designing topologically ordered periodic boundary conditions, further enriching the realizable topological phases.
Properties and applications
Spin-momentum locking in the topological insulator allows symmetry-protected surface states to host Majorana particles if superconductivity is induced on the surface of 3D topological insulators via proximity effects. (Note that Majorana zero-mode can also appear without topological insulators.) The non-trivialness of topological insulators is encoded in the existence of a gas of helical Dirac fermions. Dirac particles which behave like massless relativistic fermions have been observed in 3D topological insulators. Note that the gapless surface states of topological insulators differ from those in the quantum Hall effect: the gapless surface states of topological insulators are symmetry-protected (i.e., not topological), while the gapless surface states in the quantum Hall effect are topological (i.e., robust against any local perturbations that can break all the symmetries). The topological invariants cannot be measured using traditional transport methods, such as spin Hall conductance, and the transport is not quantized by the invariants. An experimental method to measure topological invariants was demonstrated which provides a measure of the topological order. (Note that the term topological order has also been used to describe the topological order with emergent gauge theory discovered in 1991.) More generally (in what is known as the ten-fold way) for each spatial dimensionality, each of the ten Altland–Zirnbauer symmetry classes of random Hamiltonians labelled by the type of discrete symmetry (time-reversal symmetry, particle-hole symmetry, and chiral symmetry) has a corresponding group of topological invariants (either Z, Z2, or trivial) as described by the periodic table of topological invariants.
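As a deliberately simple, concrete example of computing such an invariant, the sketch below evaluates the winding number of the one-dimensional Su–Schrieffer–Heeger (SSH) chain, a chiral-symmetric model belonging to the same ten-fold classification; the model and its hopping parameters are illustrative assumptions and are not discussed in this article.

```python
import numpy as np

# Winding-number invariant of the 1D Su-Schrieffer-Heeger (SSH) chain.
# The off-diagonal Bloch matrix element is h(k) = t_intra + t_inter * exp(i k);
# the invariant counts how many times h(k) winds around the origin as k
# sweeps the Brillouin zone. Parameters below are illustrative assumptions.

def ssh_winding_number(t_intra, t_inter, n_k=4001):
    k = np.linspace(-np.pi, np.pi, n_k)
    h = t_intra + t_inter * np.exp(1j * k)
    phase = np.unwrap(np.angle(h))               # continuous phase of h(k)
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

print(ssh_winding_number(t_intra=0.5, t_inter=1.0))   # 1 -> topologically non-trivial phase
print(ssh_winding_number(t_intra=1.5, t_inter=1.0))   # 0 -> trivial phase
```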
The most promising applications of topological insulators are spintronic devices and dissipationless transistors for quantum computers based on the quantum Hall effect and quantum anomalous Hall effect. In addition, topological insulator materials have also found practical applications in advanced magnetoelectronic and optoelectronic devices.
Thermoelectrics
Some of the most well-known topological insulators are also thermoelectric materials, such as Bi2Te3 and its alloys with Bi2Se3 (n-type thermoelectrics) and Sb2Te3 (p-type thermoelectrics). High thermoelectric power conversion efficiency is realized in materials with low thermal conductivity, high electrical conductivity, and a high Seebeck coefficient (i.e., the incremental change in voltage due to an incremental change in temperature). Topological insulators are often composed of heavy atoms, which tend to lower the thermal conductivity and are therefore beneficial for thermoelectrics. A recent study also showed that good electrical characteristics (i.e., high electrical conductivity and Seebeck coefficient) can arise in topological insulators due to warping of the bulk band structure, which is driven by band inversion. Often, the electrical conductivity and Seebeck coefficient are conflicting properties of thermoelectrics and difficult to optimize simultaneously. Band warping, induced by band inversion in a topological insulator, can mediate the two properties by reducing the effective mass of electrons/holes and increasing the valley degeneracy (i.e., the number of electronic bands contributing to charge transport). As a result, topological insulators are generally interesting candidates for thermoelectric applications.
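The trade-off between these three quantities is conventionally summarized by the dimensionless figure of merit zT = S²σT/κ. The sketch below simply evaluates this textbook expression; the numerical inputs are hypothetical order-of-magnitude values, not measured data for any specific material.

```python
# Illustrative only: dimensionless thermoelectric figure of merit zT = S^2 * sigma * T / kappa.
S = 200e-6      # Seebeck coefficient, V/K (hypothetical)
sigma = 1.0e5   # electrical conductivity, S/m (hypothetical)
kappa = 1.5     # total thermal conductivity, W/(m K) (hypothetical)
T = 300.0       # absolute temperature, K

zT = S**2 * sigma * T / kappa
print(f"zT = {zT:.2f}")   # values around 1 are typical of good thermoelectrics
```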
Synthesis
Topological insulators can be grown using different methods such as metal-organic chemical vapor deposition (MOCVD),
physical vapor deposition (PVD), solvothermal synthesis, sonochemical technique and molecular beam epitaxy
(MBE). MBE has so far been the most common experimental technique. The growth of thin-film topological insulators is governed by weak van der Waals interactions. The weak interaction allows the thin film to be exfoliated from the bulk crystal with a clean and perfect surface. Van der Waals epitaxy (VDWE) is a growth mode governed by weak van der Waals interactions between layered materials of the same or different elements, in which the materials are stacked on top of each other. This approach allows the growth of layered topological insulators on other substrates for heterostructures and integrated circuits.
MBE growth of topological insulators
Molecular beam epitaxy (MBE) is an epitaxy method for the growth of a crystalline material on a crystalline substrate to form an ordered layer. MBE is performed in high or ultra-high vacuum; the elements are heated in separate electron-beam evaporators until they sublime. The gaseous elements then condense on the wafer, where they react with each other to form single crystals.
MBE is an appropriate technique for the growth of high-quality single-crystal films. In order to avoid a large lattice mismatch and defects at the interface, the substrate and thin film are expected to have similar lattice constants. MBE has an advantage over other methods because the synthesis is performed in high vacuum, resulting in less contamination. Additionally, lattice defects are reduced thanks to the ability to control the growth rate and the ratio of the source-material species present at the substrate interface. Furthermore, in MBE, samples can be grown layer by layer, which results in flat surfaces with smooth interfaces for engineered heterostructures. Moreover, MBE benefits from the ease of moving a topological insulator sample from the growth chamber to a characterization chamber for angle-resolved photoemission spectroscopy (ARPES) or scanning tunneling microscopy (STM) studies.
Due to the weak van der Waals bonding, which relaxes the lattice-matching condition, TI can be grown on a wide variety of substrates such as Si(111), , GaAs(111),
InP(111), CdS(0001) and .
PVD growth of topological insulators
The physical vapor deposition (PVD) technique does not suffer from the disadvantages of the exfoliation method and, at the same time, it is much simpler and cheaper than fully controlled growth by molecular-beam epitaxy. The PVD method enables a reproducible synthesis of single crystals of various layered quasi-two-dimensional materials, including topological insulators. The resulting single crystals have a well-defined crystallographic orientation; their composition, thickness, size, and surface density on the desired substrate can be controlled.
The thickness control is particularly important for 3D TIs, in which the trivial (bulk) electronic channels usually dominate the transport properties and mask the response of the topological (surface) modes. By reducing the thickness, one lowers the contribution of the trivial bulk channels to the total conduction, thus forcing the topological modes to carry the electric current.
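A rough way to see why thinning the film helps is to treat it as two parallel conduction channels: a thickness-independent surface sheet conductance and a bulk contribution proportional to thickness. The sketch below is an illustrative assumption of this kind (the conductance values are hypothetical), not a model taken from the literature cited here.

```python
# Two-channel sketch: surface channel vs. thickness-dependent bulk channel.
G_surface = 5e-4        # surface sheet conductance, S per square (hypothetical)
sigma_bulk = 1e3        # residual bulk conductivity, S/m (hypothetical)

for t_nm in [200, 100, 50, 20, 10]:
    t = t_nm * 1e-9                      # film thickness in metres
    G_bulk = sigma_bulk * t              # bulk sheet conductance, S per square
    surface_fraction = G_surface / (G_surface + G_bulk)
    print(f"t = {t_nm:4d} nm: surface carries {surface_fraction:.1%} of the current")
```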
Bismuth-based topological insulators
Thus far, the field of topological insulators has focused on bismuth- and antimony-chalcogenide-based materials such as , , or Bi1−xSbx, Bi1.1Sb0.9Te2S. The choice of chalcogenides is related to the van der Waals relaxation of the lattice-matching requirement, which otherwise restricts the number of usable materials and substrates. Bismuth chalcogenides have been studied extensively for TIs and for their applications as thermoelectric materials. The van der Waals interaction in TIs exhibits important features due to low surface energy. For instance, the surface of is usually terminated by Te due to its low surface energy.
Bismuth chalcogenides have been successfully grown on different substrates. In particular, Si has been a good substrate for successful growth. However, the use of sapphire as a substrate has not been so encouraging due to a large mismatch of about 15%. The selection of an appropriate substrate can improve the overall properties of the TI. The use of a buffer layer can reduce the lattice mismatch, hence improving the electrical properties of the TI. Films can be grown on top of various Bi2−xInxSe3 buffers. Generally, regardless of the substrate used, the resulting films have a textured surface characterized by pyramidal single-crystal domains with quintuple-layer steps. The size and relative proportion of these pyramidal domains vary with factors that include film thickness, lattice mismatch with the substrate and interfacial chemistry-dependent film nucleation. The synthesis of thin films suffers from a stoichiometry problem due to the high vapor pressures of the elements. Thus, binary tetradymites are extrinsically doped n-type or p-type. Due to the weak van der Waals bonding, graphene is one of the preferred substrates for TI growth despite the large lattice mismatch.
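The mismatch figures quoted in this section follow the usual definition f = (a_film − a_substrate)/a_substrate. The sketch below evaluates it for one common pairing; the lattice constants are approximate values included only for illustration and should be treated as assumptions rather than authoritative data.

```python
# Fractional in-plane lattice mismatch f = (a_film - a_substrate) / a_substrate.
def lattice_mismatch(a_film: float, a_substrate: float) -> float:
    """Return the fractional in-plane lattice mismatch."""
    return (a_film - a_substrate) / a_substrate

a_Bi2Se3 = 4.14   # in-plane hexagonal lattice constant, angstrom (approximate)
a_Si111 = 3.84    # Si(111) surface lattice constant, angstrom (approximate)

print(f"Bi2Se3 on Si(111): {lattice_mismatch(a_Bi2Se3, a_Si111):+.1%}")
```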
Identification
The first step of topological insulator identification takes place right after synthesis, i.e., without breaking vacuum and moving the sample to atmosphere. That can be done using angle-resolved photoemission spectroscopy (ARPES) or scanning tunneling microscopy (STM) techniques. Further measurements include structural and chemical probes such as X-ray diffraction and energy-dispersive spectroscopy, but depending on the sample quality, these may lack sufficient sensitivity. Transport measurements cannot uniquely pinpoint the topology by definition of the state.
Classification
Bloch's theorem allows a full characterization of the wave propagation properties of a material by assigning a matrix to each wave vector in the Brillouin zone.
Mathematically, this assignment creates a vector bundle. Different materials will have different wave propagation properties, and thus different vector bundles. If we consider all insulators (materials with a band gap), this creates a space of vector bundles. It is the topology of this space (modulo trivial bands) from which the "topology" in topological insulators arises.
Specifically, the number of connected components of the space indicates how many different "islands" of insulators exist amongst the metallic states. Insulators in the connected component containing the vacuum state are identified as "trivial", and all other insulators as "topological". The connected component in which an insulator lies can be identified with a number, referred to as a "topological invariant".
This space can be restricted under the presence of symmetries, changing the resulting topology. Although unitary symmetries are usually significant in quantum mechanics, they have no effect on the topology here. Instead, the three symmetries typically considered are time-reversal symmetry, particle-hole symmetry, and chiral symmetry (also called sublattice symmetry). Mathematically, these are represented as, respectively: an anti-unitary operator which commutes with the Hamiltonian; an anti-unitary operator which anti-commutes with the Hamiltonian; and a unitary operator which anti-commutes with the Hamiltonian. All combinations of the three together with each spatial dimension result in the so-called periodic table of topological insulators.
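As a concrete toy illustration of a symmetry-protected topological invariant of this kind, the sketch below (an added example, not part of the original text) numerically evaluates the winding number of the Su–Schrieffer–Heeger (SSH) chain, a one-dimensional chiral-symmetric two-band model: the invariant is 1 when the intra-cell hopping v is weaker than the inter-cell hopping w, and 0 otherwise.

```python
# Winding number of the SSH chain H(k) = (v + w*cos k) sigma_x + (w*sin k) sigma_y.
import numpy as np

def ssh_winding_number(v: float, w: float, nk: int = 2001) -> int:
    """Winding of the vector d(k) = (v + w*cos k, w*sin k) around the origin."""
    k = np.linspace(-np.pi, np.pi, nk)
    dx = v + w * np.cos(k)
    dy = w * np.sin(k)
    phase = np.unwrap(np.arctan2(dy, dx))
    return int(np.rint((phase[-1] - phase[0]) / (2 * np.pi)))

print(ssh_winding_number(v=0.5, w=1.0))  # 1: topological phase (v < w)
print(ssh_winding_number(v=1.5, w=1.0))  # 0: trivial phase (v > w)
```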
Future developments
The field of topological insulators still needs to be developed. The best bismuth chalcogenide topological insulators have about 10 meV bandgap variation due to the charge. Further development should focus both on the presence of high-symmetry electronic bands and on simply synthesized materials. One class of candidates is the half-Heusler compounds. These crystal structures can consist of a large number of elements. Band structures and energy gaps are very sensitive to the valence configuration; because of the increased likelihood of intersite exchange and disorder, they are also very sensitive to specific crystalline configurations. A nontrivial band structure that exhibits band ordering analogous to that of the known 2D and 3D TI materials was predicted in a variety of 18-electron half-Heusler compounds using first-principles calculations. These materials have not yet shown any sign of intrinsic topological insulator behavior in actual experiments.
See also
Topological order
Topological quantum computer
Topological quantum field theory
Topological quantum number
Quantum Hall effect
Quantum spin Hall effect
Periodic table of topological invariants
Bismuth selenide
Photonic topological insulator
References
Further reading
Condensed matter physics
Semiconductors | Topological insulator | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 4,310 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Phases of matter",
"Materials science",
"Materials",
"Topology",
"Space",
"Condensed matter physics",
"Geometry",
"Electronic engineering",
"Spacetime",
"Solid state engineering",
"Matter"
] |
23,650,426 | https://en.wikipedia.org/wiki/Society%20of%20Glass%20Technology | The Society of Glass Technology (SGT) is an organisation for individuals and organizations with a professional interest in glass manufacture and usage. The Society is based in the United Kingdom, with its offices in Sheffield, South Yorkshire, England, but it has a worldwide membership.
The objects of the Society "are to encourage and advance the study of the history, art, science, design, manufacture, after treatment, distribution and end use of glass of any and every kind".
The Society was founded by W. E. S. Turner in 1916.
The Society is a founder member of the International Commission on Glass and the European Society of Glass Science and Technology.
Membership grades
Member individuals interested in the work of the Society
Fellow "individual members who have met prescribed requirements of education, achievement or service to the Industry and the Society. They may use the designatory letters FSGT".
Corporate Member Company or organization
Fellow (Emeritus) The award of Fellow Emeritus is made to a person who, subsequent to their election to the Fellowship, has rendered exceptional service to the Society.
Honorary Fellow An Honorary Fellow is a person who has rendered conspicuous service to the Society or who has made noteworthy or distinguished contributions to one or more of the branches of knowledge which comprise glass technology. There are a maximum of 12 Honorary Fellows. To mark the 100th year of the Society in 2016, three additional Centenary Honorary Fellowships were awarded to Dr Richard Hulme, Margaret West and Professor Steve Feller.
Current Publications
The publication of scientific and technical works is a major activity of the Society, and currently it publishes two journals. These were formed in 2006 when the journals of the Society of Glass Technology and the Deutsche Glastechnische Gesellschaft were combined to form the European Journal of Glass Science and Technology.
Glass Technology: European Journal of Glass Science and Technology Part A (fully peer reviewed plus a news section, 6 issues per year)
Physics and Chemistry of Glasses: European Journal of Glass Science and Technology Part B (fully peer reviewed, 6 issues per year).
The Society also publishes books and conference proceedings.
Former Publications
Journal of the Society of Glass Technology (1917–1959)
From 1960 this was split into two journals:
Glass Technology (1960–2005)
Physics and Chemistry of Glasses (1960–2005)
Awards
Sir Alastair Pilkington Award
The SGT-Sir Alastair Pilkington Award is designed to encourage and recognise excellent work in glass research or innovation achieved by someone who, like Sir Alastair, has come relatively recently into the field of glass studies. This Award is not restricted to hard science or engineering – it spans all dimensions of glass studies, creativity and research; glass art as well as glass science, conservation and museum studies as well as engineering, history and design as well as molecular dynamics.
Oldfield Award
The Oldfield Award is open to UK and international students. There are cash prizes for 1st, 2nd and 3rd place. It is presented for research projects carried out by either undergraduate or taught masters students.
Paul Award
Named after Amalendu Paul (1937–1990), this prize is for the best paper presented by a new researcher at the Society's annual conference. PhD candidates can win £250 plus free student SGT membership for the year for the best presentation (judged on clarity and technical content) as voted by the Basic Science Committee.
References
Glass Technology
Glass engineering and science
Organisations based in Sheffield
Organizations established in 1916 | Society of Glass Technology | [
"Materials_science",
"Engineering"
] | 682 | [
"Glass engineering and science",
"Materials science"
] |
23,652,115 | https://en.wikipedia.org/wiki/C11H15NO2 |
The molecular formula C11H15NO2 (molar mass: 193.24 g/mol, exact mass: 193.110279) may refer to:
1,3-Benzodioxolylbutanamine
Butamben
m-Cumenyl methylcarbamate
3,4-Ethylidenedioxyamphetamine
Isoprocarb
Lobivine
MDMA (3,4-MDMA, 3,4-Methylenedioxymethamphetamine)
Methedrone
3-Methoxymethcathinone
1-Methylamino-1-(3,4-methylenedioxyphenyl)propane
2,3-Methylenedioxymethamphetamine (2,3-MDMA)
3,4-Methylenedioxyphentermine
2-Methyl-MDA
5-Methyl-MDA
6-Methyl-MDA
Tolibut | C11H15NO2 | [
"Chemistry"
] | 209 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
3,530,418 | https://en.wikipedia.org/wiki/Society%20for%20Popular%20Astronomy | The Society for Popular Astronomy (SPA) is a national astronomical society based in the United Kingdom for beginners to amateur astronomy.
History and overview
It was founded in 1953 as the Junior Astronomical Society by experienced amateur astronomers including Patrick Moore, Ernest Noon and Eric Turner to encourage beginners to the science and to promote astronomy among the general public.
The term "Junior" was used to denote its role compared to the long-established society the British Astronomical Association. The name was changed in 1994 to make clear that the society was for beginners of all ages, and for those who wanted a less technical approach. In 2007 a new Young Stargazer category of membership was introduced to cater specifically for members aged under 16.
The society's first patron was Dr J G Porter whose BBC radio broadcasts about astronomy preceded television's long-running series The Sky At Night. Since his death, the role has been held by certain Astronomers Royal. The society's president, who serves a two-year term, is usually a senior professional astronomer.
The SPA aims to show that astronomy can be fun and to promote an interest in observing the sky among its members. The SPA has a number of observing sections whose work members can participate in. These cover observations of aurorae, comets, deep sky, the Moon, meteors, occultations, the planets, the Sun and variable stars.
The society publishes a magazine, Popular Astronomy, which since 2011 has been published every two months. Previously it was a quarterly publication; it now also includes material that was carried in the now-defunct separate printed News Circulars. A members-only email newsletter provides immediate news of major discoveries as well as information and reminders about society meetings and events.
The SPA offers advisory services on choosing a telescope, electronic imaging, photography and the GCSE astronomy examination.
Observing sections
See also
List of astronomical societies
Notes
References
External links
SPA website
SPA discussion forums
1953 establishments in the United Kingdom
Amateur astronomy organizations
British astronomy organisations
Scientific organizations established in 1953
Astronomy societies | Society for Popular Astronomy | [
"Astronomy"
] | 405 | [
"Astronomy societies",
"British astronomy organisations",
"Amateur astronomy organizations",
"Astronomy organizations"
] |
3,530,536 | https://en.wikipedia.org/wiki/H-1NF | The H-1NF (or H-1 Australian Plasma Fusion Research Facility) was a research facility based on the H-1 heliac, a large stellarator device located in the ANU Research School of Physics at Canberra, Australia. It was established when the H-1 heliac was promoted to a national facility in 1996, adopting H-1NF as its facility name ("H-1" from the stellarator and "NF" for National Facility). In 2022 the H-1 heliac was disassembled before being shipped to its new home in China.
H-1 heliac stellarator
The H-1 flexible Heliac is a three field-period helical axis stellarator. Optimisation of the H-1 power supplies for low current ripple allows precise control of the ratio of secondary (helical, vertical) coil to primary (poloidal, toroidal) coil currents, resulting in a finely tunable magnetic geometry. Slight variation in the current ratio between shots (plasma discharges) in a sequence corresponds to a high resolution parameter scan through magnetic configurations (i.e.: rotational transform profile, magnetic well). The programmable control system allows for repetition rates of around 30 shots per hour, limited by data acquisition time and magnet cooling time.
Stated objectives
Provide a high-temperature plasma national facility of international standing on a scale appropriate to Australia's research budget.
Provide a focus for national and international collaborative research, make significant contributions to the global fusion research effort and increase the Australian presence in the field of plasma fusion power into the next century.
Gain a detailed understanding of the basic physics of hot plasma which is magnetically confined in the helical-axis stellarator configuration.
Develop advanced plasma measurement systems ("diagnostics"), integrating real-time processing and multi-dimensional visualization of data.
References
External links
H-1NF Homepage
H-1NF description and images, Rockwell Automation
Image of H-1NF
Stellarators
Nuclear research institutes
Plasma physics facilities | H-1NF | [
"Physics",
"Engineering"
] | 421 | [
"Nuclear research institutes",
"Nuclear organizations",
"Plasma physics facilities",
"Plasma physics"
] |
3,531,066 | https://en.wikipedia.org/wiki/Discontinuous%20linear%20map | In mathematics, linear maps form an important class of "simple" functions which preserve the algebraic structure of linear spaces and are often used as approximations to more general functions (see linear approximation). If the spaces involved are also topological spaces (that is, topological vector spaces), then it makes sense to ask whether all linear maps are continuous. It turns out that for maps defined on infinite-dimensional topological vector spaces (e.g., infinite-dimensional normed spaces), the answer is generally no: there exist discontinuous linear maps. If the domain of definition is complete, it is trickier; such maps can be proven to exist, but the proof relies on the axiom of choice and does not provide an explicit example.
A linear map from a finite-dimensional space is always continuous
Let X and Y be two normed spaces and \( f : X \to Y \) a linear map. If X is finite-dimensional, choose a basis \( (e_1, e_2, \ldots, e_n) \) in X, which may be taken to be unit vectors. Then any \( x \in X \) can be written \( x = \sum_{i=1}^n x_i e_i \), and by linearity
\[ f(x) = \sum_{i=1}^n x_i f(e_i), \]
and so by the triangle inequality,
\[ \|f(x)\| = \left\| \sum_{i=1}^n x_i f(e_i) \right\| \le \sum_{i=1}^n |x_i| \, \|f(e_i)\|. \]
Letting
\[ M = \max_{1 \le i \le n} \|f(e_i)\|, \]
and using the fact that
\[ \sum_{i=1}^n |x_i| \le C \|x\| \]
for some C > 0, which follows from the fact that any two norms on a finite-dimensional space are equivalent, one finds
\[ \|f(x)\| \le C M \|x\|. \]
Thus, \( f \) is a bounded linear operator and so is continuous. In fact, to see this, simply note that f is linear, and therefore
\[ \|f(x) - f(y)\| = \|f(x - y)\| \le K \|x - y\| \]
for some universal constant K. Thus for any \( \varepsilon > 0 \) we can choose \( \delta \le \varepsilon / K \) so that \( f(B(x, \delta)) \subseteq B(f(x), \varepsilon) \) (where \( B(x, \delta) \) and \( B(f(x), \varepsilon) \) are the normed balls around x and f(x)), which gives continuity.
If X is infinite-dimensional, this proof will fail as there is no guarantee that the supremum M exists. If Y is the zero space {0}, the only map between X and Y is the zero map which is trivially continuous. In all other cases, when X is infinite-dimensional and Y is not the zero space, one can find a discontinuous map from X to Y.
A concrete example
Examples of discontinuous linear maps are easy to construct in spaces that are not complete; on any Cauchy sequence \( (x_n) \) of linearly independent vectors which does not have a limit, there is a linear operator T such that the quantities \( \|T x_n\| / \|x_n\| \) grow without bound. In a sense, the linear operators are not continuous because the space has "holes".
For example, consider the space X of real-valued smooth functions on the interval [0, 1] with the uniform norm, that is,
\[ \|f\| = \max_{x \in [0,1]} |f(x)|. \]
The derivative-at-a-point map, given by
\[ T(f) = f'(0), \]
defined on X and with real values, is linear, but not continuous. Indeed, consider the sequence
\[ f_n(x) = \frac{\sin(n^2 x)}{n} \]
for \( n \ge 1 \). This sequence converges uniformly to the constantly zero function, but
\[ T(f_n) = f_n'(0) = n \to \infty \]
as \( n \to \infty \), instead of \( T(f_n) \to T(0) = 0 \), as would hold for a continuous map. Note that T is real-valued, and so is actually a linear functional on X (an element of the algebraic dual space \( X^* \)). The linear map \( D : X \to X \) which assigns to each function its derivative is similarly discontinuous. Note that although the derivative operator is not continuous, it is closed.
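A quick numerical check, added here purely as an illustration of the argument above, shows the divergence explicitly: the uniform norm of f_n shrinks like 1/n while T(f_n) = n grows without bound.

```python
# f_n(x) = sin(n^2 x)/n converges uniformly to 0 on [0, 1],
# yet the derivative-at-zero functional T(f) = f'(0) gives T(f_n) = n.
import numpy as np

x = np.linspace(0.0, 1.0, 10001)
for n in [1, 10, 100, 1000]:
    f_n = np.sin(n**2 * x) / n
    sup_norm = np.max(np.abs(f_n))      # approximate uniform norm, tends to 0
    T_f_n = n**2 * np.cos(0.0) / n      # exact derivative at 0: equals n
    print(f"n={n:5d}  ||f_n||_inf ~ {sup_norm:.4f}   T(f_n) = {T_f_n:.0f}")
```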
The fact that the domain is not complete here is important: discontinuous operators on complete spaces require a little more work.
A nonconstructive example
An algebraic basis for the real numbers as a vector space over the rationals is known as a Hamel basis (note that some authors use this term in a broader sense to mean an algebraic basis of any vector space). Note that any two noncommensurable numbers, say 1 and \( \pi \), are linearly independent. One may find a Hamel basis containing them, and define a map \( f : \mathbb{R} \to \mathbb{R} \) so that \( f(\pi) = 0 \), f acts as the identity on the rest of the Hamel basis, and extend to all of \( \mathbb{R} \) by linearity. Let \( \{ r_n \}_n \) be any sequence of rationals which converges to \( \pi \). Then \( \lim_n f(r_n) = \lim_n r_n = \pi \), but \( f(\pi) = 0 \). By construction, f is linear over \( \mathbb{Q} \) (not over \( \mathbb{R} \)), but not continuous. Note that f is also not measurable; an additive real function is linear if and only if it is measurable, so for every such function there is a Vitali set. The construction of f relies on the axiom of choice.
This example can be extended into a general theorem about the existence of discontinuous linear maps on any infinite-dimensional normed space (as long as the codomain is not trivial).
General existence theorem
Discontinuous linear maps can be proven to exist more generally, even if the space is complete. Let X and Y be normed spaces over the field K, where \( K = \mathbb{R} \) or \( K = \mathbb{C} \). Assume that X is infinite-dimensional and Y is not the zero space. We will find a discontinuous linear map f from X to K, which will imply the existence of a discontinuous linear map g from X to Y given by the formula \( g(x) = f(x) \, y_0 \), where \( y_0 \) is an arbitrary nonzero vector in Y.
If X is infinite-dimensional, to show the existence of a linear functional which is not continuous then amounts to constructing f which is not bounded. For that, consider a sequence \( (e_n)_{n \ge 1} \) of linearly independent vectors in X, which we normalize. Then, we define
\[ T(e_n) = n \]
for each \( n = 1, 2, \ldots \). Complete this sequence of linearly independent vectors to a vector space basis of X, and define T at the other vectors in the basis to be zero. T so defined will extend uniquely to a linear map on X, and since it is clearly not bounded, it is not continuous.
Notice that by using the fact that any set of linearly independent vectors can be completed to a basis, we implicitly used the axiom of choice, which was not needed for the concrete example in the previous section.
Role of the axiom of choice
As noted above, the axiom of choice (AC) is used in the general existence theorem of discontinuous linear maps. In fact, there are no constructive examples of discontinuous linear maps with complete domain (for example, Banach spaces). In analysis as it is usually practiced by working mathematicians, the axiom of choice is always employed (it is an axiom of ZFC set theory); thus, to the analyst, all infinite-dimensional topological vector spaces admit discontinuous linear maps.
On the other hand, in 1970 Robert M. Solovay exhibited a model of set theory in which every set of reals is measurable. This implies that there are no discontinuous linear real functions. Clearly AC does not hold in the model.
Solovay's result shows that it is not necessary to assume that all infinite-dimensional vector spaces admit discontinuous linear maps, and there are schools of analysis which adopt a more constructivist viewpoint. For example, H. G. Garnir, in searching for so-called "dream spaces" (topological vector spaces on which every linear map into a normed space is continuous), was led to adopt ZF + DC + BP (where dependent choice is a weakened form of the axiom of choice and the Baire property assumption is incompatible with full AC) as his axioms to prove the Garnir–Wright closed graph theorem, which states, among other things, that any linear map from an F-space to a TVS is continuous. Going to the extreme of constructivism, there is Ceitin's theorem, which states that every function is continuous (this is to be understood in the terminology of constructivism, according to which only representable functions are considered to be functions). Such stances are held by only a small minority of working mathematicians.
The upshot is that the existence of discontinuous linear maps depends on AC; it is consistent with set theory without AC that there are no discontinuous linear maps on complete spaces. In particular, no concrete construction such as the derivative can succeed in defining a discontinuous linear map everywhere on a complete space.
Closed operators
Many naturally occurring linear discontinuous operators are closed, a class of operators which share some of the features of continuous operators. It makes sense to ask which linear operators on a given space are closed. The closed graph theorem asserts that an everywhere-defined closed operator on a complete domain is continuous, so to obtain a discontinuous closed operator, one must permit operators which are not defined everywhere.
To be more concrete, let \( T \) be a linear map from X to Y with domain \( \mathcal{D}(T) \subseteq X \), written \( T : \mathcal{D}(T) \subseteq X \to Y \). We don't lose much if we replace X by the closure of \( \mathcal{D}(T) \). That is, in studying operators that are not everywhere-defined, one may restrict one's attention to densely defined operators without loss of generality.
If the graph \( \Gamma(T) \) of \( T \) is closed in \( X \times Y \), we call T closed. Otherwise, consider its closure \( \overline{\Gamma(T)} \) in \( X \times Y \). If \( \overline{\Gamma(T)} \) is itself the graph of some operator \( \overline{T} \), then \( T \) is called closable, and \( \overline{T} \) is called the closure of \( T \).
So the natural question to ask about linear operators that are not everywhere-defined is whether they are closable. The answer is, "not necessarily"; indeed, every infinite-dimensional normed space admits linear operators that are not closable. As in the case of discontinuous operators considered above, the proof requires the axiom of choice and so is in general nonconstructive, though again, if X is not complete, there are constructible examples.
In fact, there is even an example of a linear operator whose graph has closure all of \( X \times Y \). Such an operator is not closable. Let X be the space of polynomial functions from [0,1] to \( \mathbb{R} \) and Y the space of polynomial functions from [2,3] to \( \mathbb{R} \). They are subspaces of C([0,1]) and C([2,3]) respectively, and so normed spaces. Define an operator T which takes the polynomial function \( x \mapsto p(x) \) on [0,1] to the same function on [2,3]. As a consequence of the Stone–Weierstrass theorem, the graph of this operator is dense in \( X \times Y \), so this provides a sort of maximally discontinuous linear map (confer nowhere continuous function). Note that X is not complete here, as must be the case when there is such a constructible map.
Impact for dual spaces
The dual space of a topological vector space is the collection of continuous linear maps from the space into the underlying field. Thus the failure of some linear maps to be continuous for infinite-dimensional normed spaces implies that for these spaces, one needs to distinguish the algebraic dual space from the continuous dual space which is then a proper subset. It illustrates the fact that an extra dose of caution is needed in doing analysis on infinite-dimensional spaces as compared to finite-dimensional ones.
Beyond normed spaces
The argument for the existence of discontinuous linear maps on normed spaces can be generalized to all metrizable topological vector spaces, especially to all Fréchet spaces, but there exist infinite-dimensional locally convex topological vector spaces such that every functional is continuous. On the other hand, the Hahn–Banach theorem, which applies to all locally convex spaces, guarantees the existence of many continuous linear functionals, and so a large dual space. In fact, to every convex set, the Minkowski gauge associates a continuous linear functional. The upshot is that spaces with fewer convex sets have fewer functionals, and in the worst-case scenario, a space may have no functionals at all other than the zero functional. This is the case for the \( L^p(\mathbb{R}, \lambda) \) spaces with \( 0 < p < 1 \), from which it follows that these spaces are nonconvex. Note that here \( \lambda \) denotes the Lebesgue measure on the real line. There are other \( L^p \) spaces with \( 0 < p < 1 \) which do have nontrivial dual spaces.
Another such example is the space of real-valued measurable functions on the unit interval with the quasinorm given by
\[ \|f\| = \int_0^1 \frac{|f(x)|}{1 + |f(x)|} \, dx. \]
This non-locally convex space has a trivial dual space.
One can consider even more general spaces. For example, the existence of a discontinuous homomorphism between complete separable metric groups can also be shown nonconstructively.
See also
References
Constantin Costara, Dumitru Popa, Exercises in Functional Analysis, Springer, 2003. .
Schechter, Eric, Handbook of Analysis and its Foundations, Academic Press, 1997. .
Functional analysis
Axiom of choice
Functions and mappings | Discontinuous linear map | [
"Mathematics"
] | 2,458 | [
"Mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical axioms",
"Axiom of choice",
"Axioms of set theory",
"Mathematical relations"
] |
3,531,857 | https://en.wikipedia.org/wiki/Arsenic%20pentoxide | Arsenic pentoxide is the inorganic compound with the formula As2O5. This glassy, white, deliquescent solid is relatively unstable, consistent with the rarity of the As(V) oxidation state. More common, and far more important commercially, is arsenic(III) oxide (As2O3). All inorganic arsenic compounds are highly toxic and thus find only limited commercial applications.
Structure
The structure consists of tetrahedral {AsO4} and octahedral {AsO6} centers linked by sharing corners. The structure differs from that of the corresponding phosphorus(V) oxide; as a result, although there is still a solid solution with that oxide, it only progresses to the equimolar point, at which point phosphorus has substituted for arsenic in all of its tetrahedral sites. Likewise, arsenic pentoxide can also dissolve up to an equimolar amount of antimony pentoxide, as antimony substitutes for arsenic only in its octahedral sites.
Synthesis
Historical
Pierre Macquer found a crystallizable salt which he called ‘sel neutre arsenical’. This salt was the residue obtained after distilling nitric acid from a mixture of potassium nitrate and arsenic trioxide. Earlier, Paracelsus had heated a mixture of arsenic trioxide and potassium nitrate and applied the term ‘arsenicum fixum’ to the product. A. Libavius called the same product ‘butyrum arsenici’ (butter of arsenic), although this term was actually used for arsenic trichloride. The products that Paracelsus and Libavius obtained were all impure alkali arsenates. Scheele prepared a number of arsenates by the action of arsenic acid on the alkalies. One of the compounds he prepared was arsenic pentoxide: the water in the alkalies evaporated at 180 °C, and the arsenic pentoxide was stable below 400 °C.
Modern methods
Arsenic pentoxide can be crystallized by heating As2O3 under oxygen. This reaction is reversible:
As2O5 ⇌ As2O3 + O2
Strong oxidizing agents such as ozone, hydrogen peroxide, and nitric acid convert arsenic trioxide to the pentoxide.
Arsenic acid can be generated via routine processing of arsenic compounds including the oxidation of arsenic and arsenic-containing minerals in air. Illustrative is the roasting of orpiment, a typical arsenic sulfide ore:
2 As2S3 + 11 O2 → 2 As2O5 + 6 SO2
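As a worked example of the stoichiometry implied by this equation, the sketch below computes the theoretical As2O5 yield from an arbitrary 100 g of orpiment, using rounded atomic weights; the input mass and the rounding are illustrative assumptions rather than data from the article.

```python
# Theoretical As2O5 yield from roasting orpiment: 2 As2S3 + 11 O2 -> 2 As2O5 + 6 SO2.
M = {"As": 74.92, "S": 32.06, "O": 16.00}   # atomic weights, g/mol (rounded)
M_As2S3 = 2 * M["As"] + 3 * M["S"]          # ~246.0 g/mol
M_As2O5 = 2 * M["As"] + 5 * M["O"]          # ~229.8 g/mol

mass_orpiment = 100.0                       # g, example input
moles_As2S3 = mass_orpiment / M_As2S3
moles_As2O5 = moles_As2S3                   # 2:2 molar ratio in the equation
print(f"{moles_As2O5 * M_As2O5:.1f} g As2O5 (theoretical)")
```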
Safety
Like all inorganic arsenic compounds, the pentoxide is highly toxic. Its reduced derivative arsenite, which is an As(III) compound, is even more toxic since it has a high affinity for thiol groups of cysteine residues in proteins.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
References
External links
NIOSH Pocket Guide to Chemical Hazards
IARC Monograph – Arsenic and Arsenic Compounds
NTP Report on Carcinogens – Inorganic Arsenic Compounds
ESIS: European chemical Substances Information System
Arsenic(V) compounds
Oxides
IARC Group 1 carcinogens | Arsenic pentoxide | [
"Chemistry"
] | 701 | [
"Oxides",
"Salts"
] |
3,532,367 | https://en.wikipedia.org/wiki/Phycobiliprotein | Phycobiliproteins are water-soluble proteins present in cyanobacteria and certain algae (rhodophytes, cryptomonads, glaucocystophytes). They capture light energy, which is then passed on to chlorophylls during photosynthesis. Phycobiliproteins are formed of a complex between proteins and covalently bound phycobilins that act as chromophores (the light-capturing part). They are most important constituents of the phycobilisomes.
Major phycobiliproteins
Characteristics
Phycobiliproteins demonstrate superior fluorescence properties compared to small organic fluorophores, especially when high sensitivity or multicolor detection is required:
Broad and high absorption of light suits many light sources
Very intense emission of light: 10-20 times brighter than small organic fluorophores
Relatively large Stokes shift gives low background and allows multicolor detection.
Excitation and emission spectra do not overlap compared to conventional organic dyes.
Can be used in tandem (simultaneous use by FRET) with conventional chromophores (i.e. PE and FITC, or APC and SR101 with the same light source).
Longer fluorescence retention period.
High water solubility
Applications
Phycobiliproteins allow very high detection sensitivity and can be used in various fluorescence-based techniques such as fluorimetric microplate assays, FISH and multicolor detection.
They are under development for use in artificial photosynthesis, limited by the relatively low conversion efficiency of 4-5%.
References
Photosynthetic pigments
Cyanobacteria proteins
Algae
Bacterial proteins | Phycobiliprotein | [
"Chemistry",
"Biology"
] | 356 | [
"Algae",
"Photosynthetic pigments",
"Photosynthesis"
] |
3,532,723 | https://en.wikipedia.org/wiki/Lipoteichoic%20acid | Lipoteichoic acid (LTA) is a major constituent of the cell wall of gram-positive bacteria. These organisms have an inner (or cytoplasmic) membrane and, external to it, a thick (up to 80 nanometer) peptidoglycan layer. The structure of LTA varies between the different species of gram-positive bacteria and may contain long chains of ribitol or glycerol phosphate. LTA is anchored to the cell membrane via a diacylglycerol. It acts as regulator of autolytic wall enzymes (muramidases). It has antigenic properties being able to stimulate specific immune response.
LTA may bind to target cells non-specifically through membrane phospholipids, or specifically to CD14 and to Toll-like receptors. Binding to TLR-2 has been shown to induce NF-κB expression (a central transcription factor), elevating expression of both pro- and anti-apoptotic genes. Its activation also induces mitogen-activated protein kinase (MAPK) activation along with phosphoinositide 3-kinase activation.
Studies
LTA's molecular structure has been found to have the strongest hydrophobic bonds of an entire bacterium.
Said et al. showed that LTA causes an IL-10-dependent inhibition of CD4 T-cell expansion and function by up-regulating PD-1 levels on monocytes which leads to IL-10 production by monocytes after binding of PD-1 by PD-L.
Lipoteichoic acid (LTA) from Gram-positive bacteria exerts different immune effects depending on the bacterial source from which it is isolated. For example, LTA from Enterococcus faecalis is a virulence factor positively correlated with inflammatory damage to teeth during acute infection. On the other hand, a study reported that oral administration of Lacticaseibacillus rhamnosus GG LTA (LGG-LTA) reduces UVB-induced immunosuppression and skin tumor development in mice. In animal studies, specific bacterial LTA has been correlated with induction of arthritis, nephritis, uveitis, encephalomyelitis, meningeal inflammation, and periodontal lesions, and has also triggered cascades resulting in septic shock and multiorgan failure.
References
External links
Organic acids
Cell biology | Lipoteichoic acid | [
"Chemistry",
"Biology"
] | 497 | [
"Organic acids",
"Acids",
"Cell biology",
"Organic compounds"
] |