id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
10,349,343 | https://en.wikipedia.org/wiki/Kantorovich%20theorem | The Kantorovich theorem, or Newton–Kantorovich theorem, is a mathematical statement on the semi-local convergence of Newton's method. It was first stated by Leonid Kantorovich in 1948. It is similar in form to the Banach fixed-point theorem, although it states existence and uniqueness of a zero rather than a fixed point.
Newton's method constructs a sequence of points that under certain conditions will converge to a solution of an equation or a vector solution of a system of equations. The Kantorovich theorem gives conditions on the initial point of this sequence. If those conditions are satisfied then a solution exists close to the initial point and the sequence converges to that point.
Assumptions
Let X ⊆ ℝⁿ be an open subset and F : X → ℝⁿ a differentiable function with a Jacobian F′(x) that is locally Lipschitz continuous (for instance if F is twice differentiable). That is, it is assumed that for any x ∈ X there is an open subset U ⊆ X such that x ∈ U and there exists a constant L > 0 such that for any x, y ∈ U
‖F′(x) − F′(y)‖ ≤ L ‖x − y‖
holds. The norm on the left is the operator norm. In other words, for any vector v ∈ ℝⁿ the inequality
‖(F′(x) − F′(y)) v‖ ≤ L ‖x − y‖ ‖v‖
must hold.
Now choose any initial point x₀ ∈ X. Assume that F′(x₀) is invertible and construct the Newton step
h₀ = −F′(x₀)⁻¹ F(x₀).
The next assumption is that not only the next point x₁ = x₀ + h₀ but the entire ball B(x₁, ‖h₀‖) is contained inside the set X. Let L be the Lipschitz constant for the Jacobian over this ball (assuming it exists).
As a last preparation, construct recursively, as long as it is possible, the sequences (xₖ), (hₖ), (αₖ) according to
hₖ = −F′(xₖ)⁻¹ F(xₖ),
αₖ = L ‖F′(xₖ)⁻¹‖ ‖hₖ‖,
xₖ₊₁ = xₖ + hₖ.
Statement
Now if α₀ ≤ ½ then
a solution x* of F(x) = 0 exists inside the closed ball B̄(x₁, ‖h₀‖) and
the Newton iteration starting in x₀ converges to x* with at least linear order of convergence (a numerical sketch follows).
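A minimal numerical sketch of the iteration and the test α₀ ≤ ½ (the function, Jacobian, starting point, and the helper name newton_kantorovich below are illustrative assumptions, and the Lipschitz constant L is assumed known):

```python
import numpy as np

def newton_kantorovich(F, J, x0, L, tol=1e-12, max_iter=50):
    """Newton iteration, reporting the Kantorovich quantity alpha_0.

    F : callable mapping R^n -> R^n; J : its Jacobian; x0 : initial point;
    L : Lipschitz constant of J on the relevant ball (assumed known).
    """
    x = np.asarray(x0, dtype=float)
    h = -np.linalg.solve(J(x), F(x))          # Newton step h_0
    alpha0 = L * np.linalg.norm(np.linalg.inv(J(x)), 2) * np.linalg.norm(h)
    if alpha0 > 0.5:
        print(f"alpha_0 = {alpha0:.3f} > 1/2: hypotheses not verified")
    for _ in range(max_iter):
        x = x + h                             # x_{k+1} = x_k + h_k
        if np.linalg.norm(h) < tol:
            break
        h = -np.linalg.solve(J(x), F(x))      # next Newton step
    return x

# Illustration: F(x) = x^2 - 2 with F'(x) = 2x, so L = 2; start at x0 = 1.5.
root = newton_kantorovich(lambda x: x**2 - 2,
                          lambda x: np.diag(2 * x),
                          np.array([1.5]), L=2.0)
print(root)  # ~ [1.41421356]
```

Here α₀ ≈ 0.056 ≤ ½, so the theorem guarantees a root within ‖h₀‖ of x₁.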
A statement that is more precise but slightly more difficult to prove uses the roots t* ≤ t** of the quadratic polynomial
p(t) = (½ L ‖F′(x₀)⁻¹‖) t² − t + ‖h₀‖,
t* = 2‖h₀‖ / (1 + √(1 − 2α₀)),  t** = 2‖h₀‖ / (1 − √(1 − 2α₀)),
and their ratio
θ = t*/t** = (1 − √(1 − 2α₀)) / (1 + √(1 − 2α₀)).
Then
a solution x* exists inside the closed ball B̄(x₀, t*);
it is unique inside the bigger ball B(x₀, t**);
and the convergence to the solution of F is dominated by the convergence of the Newton iteration of the quadratic polynomial p(t) towards its smallest root t*: if t₀ = 0 and tₖ₊₁ = tₖ − p(tₖ)/p′(tₖ), then ‖xₖ₊₁ − xₖ‖ ≤ tₖ₊₁ − tₖ.
The quadratic convergence is obtained from the error estimate
‖xₙ₊₁ − x*‖ ≤ θ^(2ⁿ) ‖xₙ₊₁ − xₙ‖ ≤ (θ^(2ⁿ) / 2ⁿ) ‖h₀‖.
Corollary
In 1986, Yamamoto proved that the error bounds for Newton's method given by Döring (1969), Ostrowski (1971, 1973), Gragg–Tapia (1974), Potra–Pták (1980), Miel (1981), and Potra (1984) can be derived from the Kantorovich theorem.
Generalizations
There is a q-analog for the Kantorovich theorem. For other generalizations/variations, see Ortega & Rheinboldt (1970).
Applications
Oishi and Tanabe claimed that the Kantorovich theorem can be applied to obtain reliable solutions of linear programming.
References
Further reading
John H. Hubbard and Barbara Burke Hubbard: Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach, Matrix Editions (preview of the 3rd edition and sample material, including the Kantorovich theorem)
Functional analysis
Numerical analysis
Theorems in analysis
Optimization in vector spaces
Optimization algorithms and methods | Kantorovich theorem | [
"Mathematics"
] | 608 | [
"Theorems in mathematical analysis",
"Functions and mappings",
"Mathematical analysis",
"Functional analysis",
"Mathematical objects",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Mathematical problems",
"Mathematical theorems",
"Approximations"
] |
10,354,629 | https://en.wikipedia.org/wiki/ProSavin | ProSavin is an experimental drug believed to be of use in the treatment of Parkinson's disease. It is administered to the striatum in the brain, inducing production of dopamine.
It is manufactured by Oxford BioMedica. Results from a Phase I/II clinical trial were published in The Lancet and showed safety, but little efficacy. ProSavin was superseded by AXO-Lenti-PD (OXB-102), an optimized version of the drug.
Mechanism of action
ProSavin uses Oxford BioMedica's LentiVector delivery system to transfer three genes (aromatic L-amino acid decarboxylase, tyrosine hydroxylase, and GTP cyclohydrolase 1) to the striatum in the brain, reprogramming transduced cells to secrete dopamine.
See also
TroVax
References
Drugs acting on the nervous system
Virotherapy | ProSavin | [
"Chemistry"
] | 195 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
10,355,069 | https://en.wikipedia.org/wiki/Acyclic%20space | In mathematics, an acyclic space is a nonempty topological space X in which cycles are always boundaries, in the sense of homology theory. This implies that integral homology groups in all dimensions of X are isomorphic to the corresponding homology groups of a point.
In other words, using the idea of reduced homology, H̃ᵢ(X) = 0 for all i.
It is common to consider such a space as a nonempty space without "holes"; for example, a circle or a sphere is not acyclic but a disc or a ball is acyclic. This condition, however, is weaker than asking that every closed loop in the space bound a disc in the space; all we ask is that any closed loop (and any higher-dimensional analogue thereof) bound something like a "two-dimensional surface."
The condition of acyclicity on a space X implies, for example, for nice spaces—say, simplicial complexes—that any continuous map of X to the circle or to the higher spheres is null-homotopic.
If a space X is contractible, then it is also acyclic, by the homotopy invariance of homology. The converse is not true, in general. Nevertheless, if X is an acyclic CW complex, and if the fundamental group of X is trivial, then X is a contractible space, as follows from the Whitehead theorem and the Hurewicz theorem.
Examples
Acyclic spaces occur in topology, where they can be used to construct other, more interesting topological spaces.
For instance, if one removes a single point from a manifold M which is a homology sphere, one gets such a space. The homotopy groups of an acyclic space X do not vanish in general, because the fundamental group need not be trivial. For example, the punctured Poincaré homology sphere is an acyclic, 3-dimensional manifold which is not contractible.
This gives a repertoire of examples, since the first homology group is the abelianization of the fundamental group. With every perfect group G one can associate a (canonical, terminal) acyclic space, whose fundamental group is a central extension of the given group G.
The homotopy groups of these associated acyclic spaces are closely related to Quillen's plus construction on the classifying space BG.
Acyclic groups
An acyclic group is a group G whose classifying space BG is acyclic; in other words, all its (reduced) homology groups vanish, i.e., H̃ᵢ(BG; ℤ) = 0 for all i ≥ 1. Every acyclic group is thus a perfect group, meaning its first homology group vanishes: H₁(G; ℤ) = 0, and in fact, a superperfect group, meaning the first two homology groups vanish: H₁(G; ℤ) = H₂(G; ℤ) = 0. The converse is not true: the binary icosahedral group is superperfect (hence perfect) but not acyclic.
See also
Aspherical space
References
External links
Algebraic topology
Homology theory
Homotopy theory | Acyclic space | [
"Mathematics"
] | 629 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
10,356,246 | https://en.wikipedia.org/wiki/Standard%20atomic%20weight | The standard atomic weight of a chemical element (symbol Ar°(E) for element "E") is the weighted arithmetic mean of the relative isotopic masses of all isotopes of that element, weighted by each isotope's abundance on Earth. For example, isotope 63Cu (Ar = 62.929) constitutes 69% of the copper on Earth, the rest being 65Cu (Ar = 64.927), so
Ar°(Cu) = 0.69 × 62.929 + 0.31 × 64.927 = 63.55.
Because relative isotopic masses are dimensionless quantities, this weighted mean is also dimensionless. It can be converted into a measure of mass (with the dimension of mass) by multiplying it with the dalton, also known as the atomic mass constant.
Among various variants of the notion of atomic weight (Ar, also known as relative atomic mass) used by scientists, the standard atomic weight is the most common and practical. The standard atomic weight of each chemical element is determined and published by the Commission on Isotopic Abundances and Atomic Weights (CIAAW) of the International Union of Pure and Applied Chemistry (IUPAC) based on natural, stable, terrestrial sources of the element. The definition specifies the use of samples from many representative sources from the Earth, so that the value can widely be used as the atomic weight for substances as they are encountered in reality—for example, in pharmaceuticals and scientific research. Non-standardized atomic weights of an element are specific to sources and samples, such as the atomic weight of carbon in a particular bone from a particular archaeological site. Standard atomic weight averages such values to the range of atomic weights that a chemist might expect to derive from many random samples from Earth. This range is the rationale for the interval notation given for some standard atomic weight values.
Of the 118 known chemical elements, 80 have stable isotopes and 84 have this Earth-environment based value. Typically, such a value is, for example helium: Ar°(He) = 4.002602(2). The "(2)" indicates the uncertainty in the last digit shown, to read 4.002602 ± 0.000002. IUPAC also publishes abridged values, rounded to five significant figures. For helium, Ar, abridged°(He) = 4.0026.
For fourteen elements the samples diverge on this value, because their sample sources have had a different decay history. For example, thallium (Tl) in sedimentary rocks has a different isotopic composition than in igneous rocks and volcanic gases. For these elements, the standard atomic weight is noted as an interval: Ar°(Tl) = [204.382, 204.385]. With such an interval, for less demanding situations, IUPAC also publishes a conventional value. For thallium, Ar, conventional°(Tl) = 204.38.
Definition
The standard atomic weight is a special value of the relative atomic mass. It is defined as the "recommended values" of relative atomic masses of sources in the local environment of the Earth's crust and atmosphere as determined by the IUPAC Commission on Atomic Weights and Isotopic Abundances (CIAAW). In general, values from different sources are subject to natural variation due to a different radioactive history of sources. Thus, standard atomic weights are an expectation range of atomic weights from a range of samples or sources. By limiting the sources to terrestrial origin only, the CIAAW-determined values have less variance, and are a more precise value for relative atomic masses (atomic weights) actually found and used in worldly materials.
The CIAAW-published values are used and sometimes lawfully required in mass calculations. The values have an uncertainty (noted in brackets), or are an expectation interval (see example in illustration immediately above). This uncertainty reflects natural variability in isotopic distribution for an element, rather than uncertainty in measurement (which is much smaller with quality instruments).
Although there is an attempt to cover the range of variability on Earth with standard atomic weight figures, there are known cases of mineral samples which contain elements with atomic weights that are outliers from the standard atomic weight range.
For synthetic elements the isotope formed depends on the means of synthesis, so the concept of natural isotope abundance has no meaning. Therefore, for synthetic elements the total nucleon count of the most stable isotope (i.e., the isotope with the longest half-life) is listed in brackets, in place of the standard atomic weight. When the term "atomic weight" is used in chemistry, usually it is the more specific standard atomic weight that is implied. It is standard atomic weights that are used in periodic tables and many standard references in ordinary terrestrial chemistry.
Lithium represents a unique case where the natural abundances of the isotopes have in some cases been found to have been perturbed by human isotopic separation activities to the point of affecting the uncertainty in its standard atomic weight, even in samples obtained from natural sources, such as rivers.
Terrestrial definition
An example of why "conventional terrestrial sources" must be specified in giving standard atomic weight values is the element argon. Between locations in the Solar System, the atomic weight of argon varies as much as 10%, due to extreme variance in isotopic composition. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope. Such locations include the planets Mercury and Mars, and the moon Titan. On Earth, the ratios of the three isotopes 36Ar : 38Ar : 40Ar are approximately 5 : 1 : 1600, giving terrestrial argon a standard atomic weight of 39.948(1).
However, such is not the case in the rest of the universe. Argon produced directly, by stellar nucleosynthesis, is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. The atomic weight of argon in the Sun and most of the universe, therefore, would be only approximately 36.3.
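As a rough cross-check of these figures, the weighted mean can be recomputed from the quoted isotope number ratios; a sketch (the isotopic masses are approximate values from standard tables, not from this article):

```python
# Recompute argon's atomic weight from isotope number ratios.
masses = {"36Ar": 35.9675, "38Ar": 37.9627, "40Ar": 39.9624}  # approximate, in daltons

def atomic_weight(ratios):
    total = sum(ratios.values())
    return sum(masses[iso] * n / total for iso, n in ratios.items())

terrestrial = atomic_weight({"36Ar": 5, "38Ar": 1, "40Ar": 1600})
outer_planets = atomic_weight({"36Ar": 8400, "38Ar": 1600, "40Ar": 1})
print(f"{terrestrial:.3f}")    # ~39.95, consistent with 39.948(1)
print(f"{outer_planets:.3f}")  # ~36.3
```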
Causes of uncertainty on Earth
Famously, the published atomic weight value comes with an uncertainty. This uncertainty (and related: precision) follows from its definition, the source being "terrestrial and stable". Systematic causes for uncertainty are:
Measurement limits. As always, the physical measurement is never exact. There is always more detail to be found and read. This applies to every single, pure isotope found. For example, today the mass of the main natural fluorine isotope (fluorine-19) can be measured to an accuracy of eleven decimal places. But a still more precise measurement system could become available, producing more decimals.
Imperfect mixtures of isotopes. In the samples taken and measured, the mix (relative abundance) of those isotopes may vary. For example, copper: while in general its two isotopes make up 69.15% and 30.85% of all copper found, the natural sample being measured can have had incomplete 'stirring', and so the percentages differ. The precision is improved by measuring more samples, of course, but this cause of uncertainty remains. (Example: lead samples vary so much that the value cannot be stated more precisely than four significant figures: Ar°(Pb) = 207.2.)
Earthly sources with a different history. A source is the greater area being researched, for example 'ocean water' or 'volcanic rock' (as opposed to a 'sample': the single heap of material being investigated). Some elements have a different isotopic mix per source. For example, thallium in igneous rock has more of the lighter isotope, while in sedimentary rock it has more of the heavier isotope. There is no Earthly mean number. These elements show the interval notation: Ar°(Tl) = [204.382, 204.385]. For practical reasons, a simplified 'conventional' number is published too (for Tl: 204.38).
These three uncertainties are cumulative. The published value is a result of all of them.
Determination of relative atomic mass
Modern relative atomic masses (a term specific to a given element sample) are calculated from measured values of atomic mass (for each nuclide) and isotopic composition of a sample. Highly accurate atomic masses are available for virtually all non-radioactive nuclides, but isotopic compositions are both harder to measure to high precision and more subject to variation between samples. For this reason, the relative atomic masses of the 22 mononuclidic elements (which are the same as the isotopic masses for each of the single naturally occurring nuclides of these elements) are known to especially high accuracy.
The calculation is exemplified for silicon, whose relative atomic mass is especially important in metrology. Silicon exists in nature as a mixture of three isotopes: 28Si, 29Si and 30Si. The atomic masses of these nuclides are known to a precision of one part in 14 billion for 28Si and about one part in one billion for the others. However, the range of natural abundance for the isotopes is such that the standard abundance can only be given to about ±0.001% (see table).
The calculation is
Ar(Si) = (27.97693 × 0.922297) + (28.97649 × 0.046832) + (29.97377 × 0.030872) = 28.0854
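The same weighted mean, as a one-line check (values exactly as quoted above):

```python
# Weighted mean of silicon's isotopic masses by their standard abundances.
si = [(27.97693, 0.922297), (28.97649, 0.046832), (29.97377, 0.030872)]
ar_si = sum(mass * abundance for mass, abundance in si)
print(f"Ar(Si) = {ar_si:.4f}")  # -> 28.0854
```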
The estimation of the uncertainty is complicated, especially as the sample distribution is not necessarily symmetrical: the IUPAC standard relative atomic masses are quoted with estimated symmetrical uncertainties, and the value for silicon is 28.0855(3). The relative standard uncertainty in this value is 1 × 10⁻⁵ or 10 ppm. To further reflect this natural variability, in 2010, IUPAC made the decision to list the relative atomic masses of 10 elements as an interval rather than a fixed number.
Naming controversy
The use of the name "atomic weight" has attracted a great deal of controversy among scientists. Objectors to the name usually prefer the term "relative atomic mass" (not to be confused with atomic mass). The basic objection is that atomic weight is not a weight, that is the force exerted on an object in a gravitational field, measured in units of force such as the newton or poundal.
In reply, supporters of the term "atomic weight" point out (among other arguments) that:
the name has been in continuous use for the same quantity since it was first conceptualized in 1808;
for most of that time, atomic weights really were measured by weighing (that is by gravimetric analysis) and the name of a physical quantity should not change simply because the method of its determination has changed;
the term "relative atomic mass" should be reserved for the mass of a specific nuclide (or isotope), while "atomic weight" be used for the weighted mean of the atomic masses over all the atoms in the sample;
it is not uncommon to have misleading names of physical quantities which are retained for historical reasons, such as
electromotive force, which is not a force
resolving power, which is not a power quantity
molar concentration, which is not a molar quantity (a quantity expressed per unit amount of substance).
It could be added that atomic weight is often not truly "atomic" either, as it does not correspond to the property of any individual atom. The same argument could be made against "relative atomic mass" used in this sense.
Published values
IUPAC publishes one formal value for each stable chemical element, called the standard atomic weight. Any updates are published biennially (in odd years). In 2015, the atomic weight of ytterbium was updated. In 2017, 14 atomic weights were changed, including that of argon, which changed from a single number to an interval value.
The value published can have an uncertainty, like for neon: Ar°(Ne) = 20.1797(6), or can be an interval, like for boron: Ar°(B) = [10.806, 10.821].
Next to these 84 values, IUPAC also publishes abridged values (up to five digits per number only), and for the twelve interval values, conventional values (single number values).
Symbol Ar is a relative atomic mass, for example from a specific sample. To be specific, the standard atomic weight can be noted as Ar°(E), where (E) is the element symbol.
Abridged atomic weight
The abridged atomic weight, also published by CIAAW, is derived from the standard atomic weight by reducing the numbers to five digits (five significant figures). The name is deliberately 'abridged' rather than 'rounded'.
Interval bounds are rounded downwards for the lower bound and upwards for the upper bound, so that the more precise original interval is fully covered (see the sketch after the examples below).
Examples:
Calcium: Ar°(Ca) = 40.078(4) → Ar, abridged°(Ca) = 40.078
Helium: Ar°(He) = 4.002602(2) → Ar, abridged°(He) = 4.0026
Hydrogen: Ar°(H) = [1.00784, 1.00811] → Ar, abridged°(H) = [1.0078, 1.0082]
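A sketch of the outward-rounding rule (the five-significant-figure floor/ceiling below is one reading of the rule above; the function name is illustrative):

```python
import math

def abridge(lo, hi, sig=5):
    """Round an interval outward to `sig` significant figures so that the
    abridged interval fully covers the original one."""
    def shift(x):
        return sig - 1 - math.floor(math.log10(abs(x)))
    k_lo, k_hi = shift(lo), shift(hi)
    return (math.floor(lo * 10**k_lo) / 10**k_lo,   # lower bound rounded down
            math.ceil(hi * 10**k_hi) / 10**k_hi)    # upper bound rounded up

print(abridge(1.00784, 1.00811))  # hydrogen -> (1.0078, 1.0082)
```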
Conventional atomic weight
Fourteen chemical elements – hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, argon, bromine, thallium, and lead – have a standard atomic weight that is defined not as a single number, but as an interval. For example, hydrogen has Ar°(H) = [1.00784, 1.00811]. This notation states that the various sources on Earth have substantially different isotopic constitutions, and that the uncertainties in all of them are just covered by the two numbers. For these elements, there is not an 'Earth average' constitution, and the 'right' value is not its middle (which would be 1.007975 for hydrogen, with an uncertainty of (±0.000135) that would make it just cover the interval). However, for situations where a less precise value is acceptable, for example in trade, CIAAW has published a single-number conventional atomic weight. For hydrogen, Ar, conventional°(H) = 1.008.
A formal short atomic weight
By using the abridged value, and the conventional value for the fourteen interval values, a short IUPAC-defined value (5 digits plus uncertainty) can be given for all stable elements. In many situations, and in periodic tables, this may be sufficiently detailed.
List of atomic weights
In the periodic table
See also
International Union of Pure and Applied Chemistry (IUPAC)
Commission on Isotopic Abundances and Atomic Weights (CIAAW)
References
External links
IUPAC Commission on Isotopic Abundances and Atomic Weights
Atomic Weights of the Elements 2011
Amount of substance
Chemical properties
Stoichiometry
Periodic table | Standard atomic weight | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,895 | [
"Periodic table",
"Scalar physical quantities",
"Chemical reaction engineering",
"Stoichiometry",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Amount of substance",
"nan",
"Wikipedia categories named after physical quantities"
] |
10,356,765 | https://en.wikipedia.org/wiki/Responsiveness | Responsiveness as a concept of computer science refers to the specific ability of a system or functional unit to complete assigned tasks within a given time. For example, it would refer to the ability of an artificial intelligence system to understand and carry out its tasks in a timely fashion.
Under the Reactive principles, responsiveness is one of the fundamental criteria, along with resilience, elasticity, and being message-driven.
It is also one of the criteria under the principle of robustness. The other three are observability, recoverability, and task conformance.
Vs performance
Software that lacks decent process management can have poor responsiveness even on a fast machine. On the other hand, even slow hardware can run responsive software.
It is much more important that a system actually spend the available resources in the best way possible. For instance, it makes sense to let the mouse driver run at a very high priority to provide fluid mouse interactions. For long-running operations, such as copying, downloading or transforming big files, the most important factor is to provide good user feedback, not the raw performance of the operation, since it can quite well run in the background, using only spare processor time.
Delays
Long delays can be a major cause of user frustration, or can lead the user to believe the system is not functioning, or that a command or input gesture has been ignored. Responsiveness is therefore considered an essential usability issue for human-computer-interaction (HCI). The rationale behind the responsiveness principle is that the system should deliver results of an operation to users in a timely and organized manner.
The frustration threshold can be quite different, depending on the situation and the fact that the user interface depends on local or remote systems to show a visible response.
There are at least three user tolerance thresholds (sketched in code after this list):
0.1 seconds
under 0.1 seconds the response is perceived as instantaneous (high user satisfaction);
1.0 seconds
between 0.1 seconds and 1.0 second a slight delay is perceived, which is regarded as annoying in a local system but tolerated in a web interface that depends on a remote system for the response; this kind of delay usually does not interrupt the user's flow of thought;
10 seconds
between 1 second and 10 seconds, the user's flow of thought is interrupted (productivity is severely impacted) but the user is able to keep their attention focused on the task being performed;
over 10 seconds of waiting is regarded as unacceptable, as it usually breaks the user's attention on the task being performed.
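As an illustration only, the thresholds might be encoded in a feedback policy along these lines (the function and its messages are hypothetical, not from any UI toolkit):

```python
def feedback_for(delay_seconds: float) -> str:
    """Map an expected delay to the kind of user feedback suggested above."""
    if delay_seconds < 0.1:
        return "none needed: response is perceived as instantaneous"
    if delay_seconds < 1.0:
        return "none required: slight delay, flow of thought preserved"
    if delay_seconds < 10.0:
        return "show a busy indicator to hold attention on the task"
    return "show a progress bar, ideally with an estimate of the remaining time"
```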
Solutions to improve responsiveness
Although numerous other options may exist, the most frequently used and recommended answers to responsiveness issues are:
Optimizing the process that delivers the output by eliminating wasteful, unproductive output from the algorithm or method by which the result is produced.
A decent process management system, giving the highest priority to operations that would otherwise interrupt the user's work flow, such as typing, onscreen buttons, or moving the mouse pointer. Usually there is enough "idle time" in between, for the other operations.
Using idle time to prepare for the operations a user might do next.
Let the user do something productive while the system is busy, for instance writing information in a form or reading a manual. In a tabbed browser, for example, the user can read one page while loading another.
Deliver intermediate results before the operation is finished. For instance, a web page can already be used before all images are loaded, filling idle time that would otherwise be wasted.
If some waiting is inevitable, a progress indicator can significantly reduce frustration. For short delays, an animated icon might be sufficient. Longer delays are better covered with a progress bar, or, if possible, the system should provide an approximation of the time that an operation is going to take before starting it.
See also
Reliability, availability and serviceability
Elasticity (cloud computing)
Network resilience
Agile construction
Reactive user interface
Responsive web design
References
External links
Chapter 9. Constructing A Responsive User Interface. by David Sweet
Excerpt from the book Usability Engineering (1993) on response time
UI Responsiveness on NetBeans Wiki
Acceptable Response Times from the GNOME Human Interface Guidelines
http://www.baychi.org/calendar/20031111/
Computer architecture
Computer systems
Measurement
Systems engineering
User interface techniques | Responsiveness | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 891 | [
"Systems engineering",
"Physical quantities",
"Computer architecture",
"Computer engineering",
"Quantity",
"Measurement",
"Size",
"Computer systems",
"Computer science",
"Computers"
] |
10,358,680 | https://en.wikipedia.org/wiki/Oparin%20Medal | The Oparin/Urey Medal honours important contributions to the field of origins of life. The medal is awarded by the International Society for the Study of the Origin of Life (ISSOL). The award was originally named for Alexander Ivanovich Oparin, one of the pioneers in researching the origins of life. In 1993, the Society decided to alternate the name of the award so as to also honour the memory of Harold C. Urey, one of the first to propose the study of cosmochemistry.
List of winners
The current list of medalists is shown below:
References
Origin of life | Oparin Medal | [
"Biology"
] | 124 | [
"Biological hypotheses",
"Origin of life"
] |
10,359,432 | https://en.wikipedia.org/wiki/Missense%20mRNA | Missense mRNA is a messenger RNA bearing one or more mutated codons that yield polypeptides with an amino acid sequence different from the wild-type or naturally occurring polypeptide. Missense mRNA molecules are created when template DNA strands or the mRNA strands themselves undergo a missense mutation in which a protein coding sequence is mutated and an altered amino acid sequence is coded for.
Biogenesis
A missense mRNA arises from a missense mutation, in the event of which a DNA nucleotide base pair in the coding region of a gene is changed such that it results in the substitution of one amino acid for another. The point mutation is nonsynonymous because it alters the RNA codon in the mRNA transcript such that translation results in amino acid change. An amino acid change may not result in appreciable changes in protein structure depending on whether the amino acid change is conservative or non-conservative. This owes to the similar physicochemical properties exhibited by some amino acids.
Missense mRNAs may be detected as a result of two different types of point mutations - spontaneous mutations and induced mutations. Spontaneous mutations occur during the DNA replication process where a non-complementary nucleotide is deposited by the DNA polymerase in the extension phase. The consecutive round of replication would result in a point mutation. If the resulting mRNA codon is one that changes the amino acid, a missense mRNA would be detected. A hypergeometric distribution study involving DNA polymerase β replication errors in the APC gene revealed 282 possible substitutions that could result in missense mutations. When the APC mRNA was analyzed in the mutational spectrum, it showed 3 sites where the frequency of substitutions were high.
Induced mutations caused by mutagens can give rise to missense mutations. Nucleoside analogues such as 2-aminopurine and 5-bromouracil can insert in place of A and T respectively. Ionizing radiation like x-rays and γ-rays can deaminate cytosine to uracil.
Missense mRNAs may be applied synthetically in forward and reverse genetic screens used to interrogate the genome. Site-directed mutagenesis is a technique often employed to create knock-in and knock-out models that express missense mRNAs. For example, in knock-in studies, human orthologs are identified in model organisms to introduce missense mutations, or a human gene with a substitution mutation is integrated into the genome of the model organism. The subsequent loss-of-function or gain-of-function phenotypes are measured to model genetic diseases and discover novel drugs. While homologous recombination has been widely used to generate single-base substitutions, novel technologies that co-inject gRNA and hCas9 mRNA of the CRISPR/Cas9 system, in conjunction with single-strand oligodeoxynucleotide (ssODN) donor sequences have shown efficiency in generating point mutations in the genome.
Evolutionary implications
Non-synonymous RNA editing
Substitutions can occur at the level of both DNA and RNA. RNA editing-dependent amino acid substitutions can produce missense mRNAs, which arise through hydrolytic deaminase reactions. Two of the most prevalent deaminase reactions occur through the apolipoprotein B mRNA editing enzyme (APOBEC) and the adenosine deaminase acting on RNA (ADAR), which are responsible for the conversion of cytidine to uridine (C-to-U) and the deamination of adenosine to inosine (A-to-I), respectively. Such selective substitutions of uridine for cytidine, and inosine for adenosine, in RNA editing can produce differential isoforms of missense mRNA transcripts and confer transcriptome diversity and enhanced protein function in response to selective pressures.
See also
Nonsense mutation
Start codon
Stop codon
References
RNA
Molecular biology | Missense mRNA | [
"Chemistry",
"Biology"
] | 813 | [
"Biochemistry",
"Molecular biology"
] |
10,360,813 | https://en.wikipedia.org/wiki/Runaway%20electrons | The term runaway electrons (RE) is used to denote electrons that undergo free fall acceleration into the realm of relativistic particles. REs may be classified as thermal (lower energy) or relativistic. The study of runaway electrons is thought to be fundamental to our understanding of High-Energy Atmospheric Physics. They are also seen in tokamak fusion devices, where they can damage the reactors.
Lightning
Runaway electrons are the core element of the runaway breakdown based theory of lightning propagation. Since C.T.R. Wilson's work in 1925, research has been conducted to study the possibility of runaway electrons, cosmic ray based or otherwise, initiating the processes required to generate lightning.
Extraterrestrial Occurrence
Electron-runaway-based lightning may be occurring on the four giant planets in addition to Earth. Simulation studies predict that runaway breakdown processes occur on these gaseous planets far more easily than on Earth, as the threshold for runaway breakdown to begin is far smaller.
High Energy Plasma
The runaway electron phenomenon has been observed in high energy plasmas. They can pose a threat to machines and experiments in which these plasmas exist, including ITER. Several studies exist examining the properties of runaway electrons in these environments (tokamak), searching to better suppress the detrimental effects of these unwanted runaway electrons. Recent measurements reveal higher-than-expected impurity ion diffusion in runaway electron plateaus, possibly due to turbulence. The choice between low and high atomic number (Z) gas injections for disruption mitigation techniques requires a better understanding of the impurity ion transport, as these ions may not completely mix at impact, affecting the prevention of runaway electron wall damage in large tokamak concepts, like ITER.
Computer and Numerical Simulations
This highly complex phenomenon has proved difficult to model with traditional systems, but has been modelled in part with the world's most powerful supercomputer.
In addition, aspects of electron runaway have been simulated using the popular particle physics simulation toolkit Geant4.
Space Based Experiments
TARANIS (CNES)
ASIM (ESA)
References
Particle physics
Magnetic confinement fusion | Runaway electrons | [
"Physics"
] | 421 | [
"Particle physics"
] |
18,400,581 | https://en.wikipedia.org/wiki/Chelating%20resin | Chelating resins are a class of ion-exchange resins. They are almost always used to bind cations, and utilize chelating agents covalently attached to a polymer matrix. Chelating resins have the same bead form and polymer matrix as usual ion exchangers. Their main use is for pre-concentration of metal ions in a dilute solution. Chelating ion-exchange resins are used for brine decalcification in the chlor-alkali industry, the removal of boron from potable water, and the recovery of precious metals in solutions.
Properties and structure
Chelating resins operate similarly to ordinary ion-exchange resins.
Most chelating resins are polymers (copolymers to be precise) with reactive functional groups that chelate metal ions. The variation in chelating resins arises from the nature of the chelating agents pendant from the polymer backbone. Dowex chelating resin A-1, also known as Chelex 100, is based on iminodiacetic acid in a styrene-divinylbenzene matrix. Dowex A-1 is available commercially and is widely used to determine general properties of chelating resins, such as the rate-determining step, pH dependence, etc. Dowex A-1 is produced from chloromethylated styrene-divinylbenzene copolymer via amination with iminodiacetic acid.
This type of chelating resin has almost negligible affinity for alkali and alkaline earth metals; small quantities of resin can be used to concentrate trace metals in natural water systems or biological fluids, in which alkali and alkaline earth metal concentrations are three or four orders of magnitude greater than the trace metal concentrations.
Other functional groups bound to chelating resins are aminophosphonic acids, thiourea, and 2-picolylamine.
Application in heavy metal remediation
Soil contaminated with heavy metals including radionuclides is mitigated primarily using chelating resins.
Chelating polymers (ion-exchange resins) have been proposed for maintenance therapy of pathologies accompanied by metal accumulation, such as hereditary hemochromatosis (iron overload) or Wilson's disease (copper overload), by chelating the metal ions in the gastrointestinal tract and thus limiting their biological availability.
References
Additional resources
Yang, Dong, Xijun Chang, Yongwen Liu, and Sui Wang. "Synthesis and Efficiency of a Spherical Macroporous Epoxy-Polyamide Chelating Resin for Preconcentrating and Separating Trace Noble Metal Ions." Annali di Chimica 95.1-2 (2005): 111-14.
Zougagh, Mohammed, J. M. Cano Pavón, and A. García de Torres. "Chelating Sorbents Based on Silica Gel and Their Application in Atomic Spectrometry." Analytical and Bioanalytical Chemistry 381.6 (2005): 1103-113.
R. R. Greenberg and H. M. Kingston. "Trace Element Analysis of Natural Water Samples by Neutron Activation Analysis with Chelating Resin." Center for Analytical Chemistry, National Bureau of Standards, Washington, D.C. 20234.
Analytical chemistry
Polymers
Resins
Chelating agents | Chelating resin | [
"Physics",
"Chemistry",
"Materials_science"
] | 682 | [
"Resins",
"Unsolved problems in physics",
"Polymer chemistry",
"nan",
"Chelating agents",
"Polymers",
"Amorphous solids",
"Process chemicals"
] |
18,401,357 | https://en.wikipedia.org/wiki/Light%20clay | Light clay (also light straw clay, light clay straw, slipstraw) is a natural building material used to infill between a wooden frame in a timber framed building using a combination of clay and straw, woodchips or some other lighter material.
History
A mixture of clay and straw was used as an infill material for timber framed buildings from at least the 12th century in Germany and elsewhere in Europe. The term "light clay" or "light straw-clay" derives from the German name Leichtlehm. Renewed interest in traditional building methods developed from the 1980s, after which various natural building architects and builders started promoting the use of light clay. An appendix for light straw-clay was added to the International Residential Code beginning with the 2015 edition.
Usage
Local clay, often local subsoil, is mixed into a slurry with water and then combined with straw, wood chips or other similar material. Wood chips can vary in size from sawdust to larger chips. The ratio of clay to other ingredients can be adapted to either increase thermal mass or insulation properties. The mixture is given additional structural strength using wattles. When used externally it can be protected with a lime render or a clay render. A plaster or render yields a smooth, finished appearance.
See also
Adobe
Cob
Wattle and daub
Wychert
References
Soil-based building materials
Sustainable building
Appropriate technology
Natural materials
Sustainable products | Light clay | [
"Physics",
"Engineering"
] | 281 | [
"Sustainable building",
"Natural materials",
"Building engineering",
"Construction",
"Materials",
"Matter"
] |
18,404,411 | https://en.wikipedia.org/wiki/Commutation%20theorem%20for%20traces | In mathematics, a commutation theorem for traces explicitly identifies the commutant of a specific von Neumann algebra acting on a Hilbert space in the presence of a trace.
The first such result was proved by Francis Joseph Murray and John von Neumann in the 1930s and applies to the von Neumann algebra generated by a discrete group or by the dynamical system associated with a measurable transformation preserving a probability measure.
Another important application is in the theory of unitary representations of unimodular locally compact groups, where the theory has been applied to the regular representation and other closely related representations. In particular this framework led to an abstract version of the Plancherel theorem for unimodular locally compact groups due to Irving Segal and Forrest Stinespring and an abstract Plancherel theorem for spherical functions associated with a Gelfand pair due to Roger Godement. Their work was put in final form in the 1950s by Jacques Dixmier as part of the theory of Hilbert algebras.
It was not until the late 1960s, prompted partly by results in algebraic quantum field theory and quantum statistical mechanics due to the school of Rudolf Haag, that the more general non-tracial Tomita–Takesaki theory was developed, heralding a new era in the theory of von Neumann algebras.
Commutation theorem for finite traces
Let H be a Hilbert space and M a von Neumann algebra on H with a unit vector Ω such that
M Ω is dense in H
M ' Ω is dense in H, where M ' denotes the commutant of M
(abΩ, Ω) = (baΩ, Ω) for all a, b in M.
The vector Ω is called a cyclic-separating trace vector. It is called a trace vector because the last condition means that the matrix coefficient corresponding to Ω defines a tracial state on M. It is called cyclic since Ω generates H as a topological M-module. It is called separating because if aΩ = 0 for a in M, then aM′Ω = (0), and hence a = 0.
It follows that the map
J : aΩ ↦ a*Ω
for a in M defines a conjugate-linear isometry of H with square the identity, J2 = I. The operator J is usually called the modular conjugation operator.
It is immediately verified that JMJ and M commute on the subspace M Ω, so that
JMJ ⊆ M′.
The commutation theorem of Murray and von Neumann states that
JMJ = M′.
One of the easiest ways to see this is to introduce K, the closure of the real subspace Msa Ω, where Msa denotes the self-adjoint elements in M. It follows that
H = K ⊕ iK,
an orthogonal direct sum for the real part of the inner product. This is just the real orthogonal decomposition for the ±1 eigenspaces of J.
On the other hand for a in Msa and b in Msa, the inner product (abΩ, Ω) is real, because ab is self-adjoint. Hence K is unaltered if M is replaced by M '.
In particular Ω is a trace vector for M′ and J is unaltered if M is replaced by M′. So the opposite inclusion
M′ ⊆ JMJ
follows by reversing the roles of M and M′.
Examples
One of the simplest cases of the commutation theorem, where it can easily be seen directly, is that of a finite group Γ acting on the finite-dimensional inner product space ℓ2(Γ) by the left and right regular representations λ and ρ. These unitary representations are given by the formulas
(λ(g)f)(x) = f(g⁻¹x), (ρ(g)f)(x) = f(xg)
for f in ℓ2(Γ), and the commutation theorem implies that
λ(Γ)′′ = ρ(Γ)′, ρ(Γ)′′ = λ(Γ)′.
The operator J is given by the formula
(Jf)(g) = f(g⁻¹)*,
where * denotes complex conjugation. Exactly the same results remain true if Γ is allowed to be any countable discrete group. The von Neumann algebra λ(Γ)′′ is usually called the group von Neumann algebra of Γ.
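A quick numerical sanity check of the finite-group case, for the nonabelian group Γ = S3 (a sketch: it verifies only that every λ(g) commutes with every ρ(h), i.e. the inclusion λ(Γ)′′ ⊆ ρ(Γ)′, not the full equality of the commutation theorem):

```python
import numpy as np
from itertools import permutations

# Elements of S3 as tuples; multiplication is composition of permutations.
G = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}
mul = lambda g, h: tuple(g[h[i]] for i in range(3))   # (g*h)(i) = g(h(i))
def inv(g):
    out = [0] * 3
    for i, gi in enumerate(g):
        out[gi] = i
    return tuple(out)

n = len(G)
def lam(g):   # left regular representation: lambda(g) e_y = e_{g y}
    M = np.zeros((n, n))
    for y in G:
        M[idx[mul(g, y)], idx[y]] = 1
    return M
def rho(g):   # right regular representation: rho(g) e_y = e_{y g^-1}
    M = np.zeros((n, n))
    for y in G:
        M[idx[mul(y, inv(g))], idx[y]] = 1
    return M

assert all(np.allclose(lam(g) @ rho(h), rho(h) @ lam(g)) for g in G for h in G)
print("lambda(g) and rho(h) commute for all g, h in S3")
```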
Another important example is provided by a probability space (X, μ). The Abelian von Neumann algebra A = L∞(X, μ) acts by multiplication operators on H = L2(X, μ) and the constant function 1 is a cyclic-separating trace vector. It follows that
A′ = A,
so that A is a maximal Abelian subalgebra of B(H), the von Neumann algebra of all bounded operators on H.
The third class of examples combines the above two. Coming from ergodic theory, it was one of von Neumann's original motivations for studying von Neumann algebras. Let (X, μ) be a probability space and let Γ be a countable discrete group of measure-preserving transformations of (X, μ). The group therefore acts unitarily on the Hilbert space H = L2(X, μ) according to the formula
(Ug f)(x) = f(g⁻¹x)
for f in H, and normalises the Abelian von Neumann algebra A = L∞(X, μ). Let
H1 = H ⊗ ℓ2(Γ),
a tensor product of Hilbert spaces. The group–measure space construction or crossed product von Neumann algebra is defined to be the von Neumann algebra on H1 generated by the algebra A ⊗ I and the normalising operators Ug ⊗ λ(g). The vector Ω = 1 ⊗ δ1 is a cyclic-separating trace vector. Moreover the modular conjugation operator J and commutant M′ can be explicitly identified.
One of the most important cases of the group–measure space construction is when Γ is the group of integers Z, i.e. the case of a single invertible
measurable transformation T. Here T must preserve the probability measure μ. Semifinite traces are required to handle the case when T (or more generally Γ) only preserves an infinite equivalent measure; and the full force of the Tomita–Takesaki theory is required when there is no invariant measure in the equivalence class, even though the equivalence class of the measure is preserved by T (or Γ).
Commutation theorem for semifinite traces
Let M be a von Neumann algebra and M+ the set of positive operators in M. By definition, a semifinite trace (or sometimes just trace) on M is a functional τ from M+ into [0, ∞] such that
τ(λa + μb) = λτ(a) + μτ(b) for a, b in M+ and λ, μ ≥ 0 (linearity);
τ(uau*) = τ(a) for a in M+ and u a unitary operator in M (unitary invariance);
τ is completely additive on orthogonal families of projections in M (normality);
each projection in M is an orthogonal direct sum of projections with finite trace (semifiniteness).
If in addition τ is non-zero on every non-zero projection, then τ is called a faithful trace.
If τ is a faithful trace on M, let H = L2(M, τ) be the Hilbert space completion of the inner product space
M0 = {a ∈ M : τ(a*a) < ∞}
with respect to the inner product
(a, b) = τ(b*a).
The von Neumann algebra M acts by left multiplication on H and can be identified with its image. Let
Ja = a*
for a in M0. The operator J is again called the modular conjugation operator and extends to a conjugate-linear isometry of H satisfying J2 = I. The commutation theorem of Murray and von Neumann
JMJ = M′
is again valid in this case. This result can be proved directly by a variety of methods, but follows immediately from the result for finite traces, by repeated use of the following elementary fact:
If M1 ⊇ M2 are two von Neumann algebras such that pn M1 = pn M2 for a family of projections pn in the commutant of M1 increasing to I in the strong operator topology, then M1 = M2.
Hilbert algebras
The theory of Hilbert algebras was introduced by Godement (under the name "unitary algebras"), Segal and Dixmier to formalize the classical method of defining the trace for trace class operators starting from Hilbert–Schmidt operators. Applications in the representation theory of groups naturally lead to examples of Hilbert algebras. Every von Neumann algebra endowed with a semifinite trace has a canonical "completed" or "full" Hilbert algebra associated with it; and conversely a completed Hilbert algebra of exactly this form can be canonically associated with every Hilbert algebra. The theory of Hilbert algebras can be used to deduce the commutation theorems of Murray and von Neumann; equally well the main results on Hilbert algebras can also be deduced directly from the commutation theorems for traces. The theory of Hilbert algebras was generalised by Takesaki as a tool for proving commutation theorems for semifinite weights in Tomita–Takesaki theory; they can be dispensed with when dealing with states.
Definition
A Hilbert algebra is an algebra 𝔄 with involution x→x* and an inner product (·,·) such that
(a, b) = (b*, a*) for a, b in 𝔄;
left multiplication by a fixed a in 𝔄 is a bounded operator;
* is the adjoint, in other words (xy, z) = (y, x*z);
the linear span of all products xy is dense in 𝔄.
Examples
The Hilbert–Schmidt operators on an infinite-dimensional Hilbert space form a Hilbert algebra with inner product (a, b) = Tr (b*a).
If (X, μ) is an infinite measure space, the algebra L∞(X) ∩ L2(X) is a Hilbert algebra with the usual inner product from L2(X).
If M is a von Neumann algebra with faithful semifinite trace τ, then the *-subalgebra M0 defined above is a Hilbert algebra with inner product (a, b) = τ(b*a).
If G is a unimodular locally compact group, the convolution algebra L1(G) ∩ L2(G) is a Hilbert algebra with the usual inner product from L2(G).
If (G, K) is a Gelfand pair, the convolution algebra L1(K\G/K) ∩ L2(K\G/K) is a Hilbert algebra with the usual inner product from L2(G); here Lp(K\G/K) denotes the closed subspace of K-biinvariant functions in Lp(G).
Any dense *-subalgebra of a Hilbert algebra is also a Hilbert algebra.
Properties
Let H be the Hilbert space completion of 𝔄 with respect to the inner product and let J denote the extension of the involution to a conjugate-linear involution of H. Define a representation λ and an anti-representation ρ of 𝔄 on itself by left and right multiplication:
λ(a)x = ax, ρ(a)x = xa.
These actions extend continuously to actions on H. In this case the commutation theorem for Hilbert algebras states that
λ(𝔄)′′ = ρ(𝔄)′.
Moreover if M is the von Neumann algebra generated by the operators λ(a), then
JMJ = M′.
These results were proved independently by Godement (1954) and Segal (1953).
The proof relies on the notion of "bounded elements" in the Hilbert space completion H.
An element x in H is said to be bounded (relative to 𝔄) if the map a → xa of 𝔄 into H extends to a bounded operator on H, denoted by λ(x). In this case it is straightforward to prove that:
Jx is also a bounded element, denoted x*, and λ(x*) = λ(x)*;
a → ax is given by the bounded operator ρ(x) = Jλ(x*)J on H;
M ' is generated by the ρ(x)'s with x bounded;
λ(x) and ρ(y) commute for x, y bounded.
The commutation theorem follows immediately from the last assertion.
The space 𝔅 of all bounded elements forms a Hilbert algebra containing 𝔄 as a dense *-subalgebra. It is said to be completed or full because any element in H bounded relative to 𝔅 must actually already lie in 𝔅. The functional τ on M+ defined by
τ(x) = (a, a)
if x = λ(a)*λ(a) for a bounded element a, and ∞ otherwise, yields a faithful semifinite trace on M.
Thus:
There is a one-one correspondence between von Neumann algebras on H with faithful semifinite trace and full Hilbert algebras with Hilbert space completion H.
See also
von Neumann algebra
Affiliated operator
Tomita–Takesaki theory
Notes
References
Von Neumann algebras
Representation theory of groups
Ergodic theory
Theorems in functional analysis
Theorems in representation theory | Commutation theorem for traces | [
"Mathematics"
] | 2,618 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis",
"Ergodic theory",
"Dynamical systems"
] |
18,404,729 | https://en.wikipedia.org/wiki/Quantum%20Aspects%20of%20Life | Quantum Aspects of Life, a book published in 2008 with a foreword by Roger Penrose, explores the open question of the role of quantum mechanics at molecular scales of relevance to biology. The book contains chapters written by various world experts from a 2003 symposium and includes two debates from 2003 to 2004, giving rise to a mix of both sceptical and sympathetic viewpoints. The book addresses questions of quantum physics, biophysics, nanoscience, quantum chemistry, mathematical biology, complexity theory, and philosophy that are inspired by the 1944 seminal book What Is Life? by Erwin Schrödinger.
Contents
Foreword by Roger Penrose
Section 1: Emergence and Complexity
Chapter 1: "A Quantum Origin of Life?" by Paul C. W. Davies
Chapter 2: "Quantum Mechanics and Emergence" by Seth Lloyd
Section 2: Quantum Mechanisms in Biology
Chapter 3: "Quantum Coherence and the Search for the First Replicator" by Jim Al-Khalili and Johnjoe McFadden
Chapter 4: "Ultrafast Quantum Dynamics in Photosynthesis" by Alexandra Olaya-Castro, Francesca Fassioli Olsen, Chiu Fan Lee, and Neil F. Johnson
Chapter 5: "Modeling Quantum Decoherence in Biomolecules" by Jacques Bothma, Joel Gilmore, and Ross H. McKenzie
Section 3: The Biological Evidence
Chapter 6: "Molecular Evolution: A Role for Quantum Mechanics in the Dynamics of Molecular Machines that Read and Write DNA" by Anita Goel
Chapter 7: "Memory Depends on the Cytoskeleton, but is it Quantum?" by Andreas Mershin and Dimitri V. Nanopoulos
Chapter 8: "Quantum Metabolism and Allometric Scaling Relations in Biology" by Lloyd Demetrius
Chapter 9: "Spectroscopy of the Genetic Code" by Jim D. Bashford and Peter D. Jarvis
Chapter 10: "Towards Understanding the Origin of Genetic Languages" by Apoorva D. Patel
Section 4: Artificial Quantum Life
Chapter 11: "Can Arbitrary Quantum Systems Undergo Self-Replication?" by Arun K. Pati and Samuel L. Braunstein
Chapter 12: "A Semi-Quantum Version of the Game of Life" by Adrian P. Flitney and Derek Abbott
Chapter 13: "Evolutionary Stability in Quantum Games" by Azhar Iqbal and Taksu Cheon
Chapter 14: "Quantum Transmemetic Intelligence" by Edward W. Piotrowski and Jan Sładkowski
Section 5: The Debate
Chapter 15: "Dreams versus Reality: Plenary Debate Session on Quantum Computing" For panel: Carlton M. Caves, Daniel Lidar, Howard Brandt, Alexander R. Hamilton; Against panel: David K. Ferry, Julio Gea-Banacloche, Sergey M. Bezrukov, Laszlo B. Kish; Debate chair: Charles R. Doering; Transcript Editor: Derek Abbott.
Chapter 16: "Plenary Debate: Quantum Effects in Biology: Trivial or Not?" For panel: Paul C. W. Davies, Stuart Hameroff, Anton Zeilinger, Derek Abbott; Against panel: Jens Eisert, Howard M. Wiseman, Sergey M. Bezrukov, Hans Frauenfelder; Debate chair: Julio Gea-Banacloche; Transcript Editor: Derek Abbott.
Chapter 17: "Non-trivial Quantum Effects in Biology: A Skeptical Physicist’s View" Howard M. Wiseman and Jens Eisert
Chapter 18: "That’s Life! — The Geometry of Electron Clouds" Stuart Hameroff
See also
Quantum biology
References
External links
Book's homepage at ICP
2008 non-fiction books
Biology books
Biophysics
Mathematical and theoretical biology
Popular physics books
Quantum biology
Books by Paul Davies | Quantum Aspects of Life | [
"Physics",
"Mathematics",
"Biology"
] | 748 | [
"Applied and interdisciplinary physics",
"Mathematical and theoretical biology",
"Applied mathematics",
"Quantum mechanics",
"Biophysics",
"nan",
"Quantum biology"
] |
22,085,817 | https://en.wikipedia.org/wiki/Polyakov%20formula | In differential geometry and mathematical physics (especially string theory), the Polyakov formula expresses the conformal variation of the zeta functional determinant of a Riemannian manifold. Proposed by Alexander Markovich Polyakov, this formula arose in the study of the quantum theory of strings. The corresponding density is local, and therefore is a Riemannian curvature invariant. In particular, whereas the functional determinant itself is prohibitively difficult to work with in general, its conformal variation can be written down explicitly.
References
Conformal geometry
Spectral theory
String theory | Polyakov formula | [
"Astronomy",
"Mathematics"
] | 113 | [
"String theory",
"Astronomical hypotheses",
"Geometry",
"Geometry stubs"
] |
22,086,557 | https://en.wikipedia.org/wiki/Sleeping%20berth | A sleeping berth is a bed or sleeping accommodation on vehicles. Space constraints have contributed to certain common design elements of berths.
Beds in boats or ships
While beds on large ships are little different from those on shore, the lack of space on smaller yachts means that bunks must be fit in wherever possible. Some of these berths have specific names:
V-berth
Frequently, yachts have a bed in the extreme forward end of the hull (usually in a separate cabin called the forepeak). Because of the shape of the hull, this bed is basically triangular, though most also have a triangular notch cut out of the middle of the aft end, splitting it partially into two separate beds and making it more of a V shape, hence the name. This notch can usually be filled in with a detachable board and cushion, creating something more like a double bed (though with drastically reduced space for the feet). The term "V-berth" is not widely used in the UK; instead, the cabin as a whole (the forepeak) is usually referred to.
Settee berth
The archetypal layout for a small yacht has seats running down both sides of the cabin, with a table in the middle. At night, these seats can usually be used as beds. Because the ideal ergonomic distance between a seat-back and its front edge (back of the knee) makes for a rather narrow bed, good settee berths will have a system for moving the back of the settee out of the way; this can reveal a surprisingly wide bunk, often running right out to the hull side underneath the lockers. If they are to be used at sea, settee berths must have lee-cloths to prevent the user falling out of bed. Sometimes the settee forms part of a double bed for use in harbor, often using detachable pieces of the table and extra cushions. Such beds are not usually referred to as settee berths.
Pilot berth
A narrow berth high up in the side of the cabin, the pilot berth is usually above and behind the back of the settee and right up under the deck. Sometimes the side of this bunk is "walled in" up to the sleeper's chest; there may even be small shelves or lockers on the partition so that the bed is "behind the furniture". The pilot berth is so called because originally they were so small and uncomfortable that nobody slept in them most of the time; only the pilot would be offered it if it were necessary to spend a night aboard the yacht.
Quarter berth
This is a single bunk tucked under the cockpit, usually found in smaller boats where there is not room for a cabin in this location.
Pipe Berth
A pipe berth is a canvas cloth laced to a perimeter frame made of pipe. Easily stored due to its flat shape, the pipe berth is often suspended on ropes or fits into brackets when in use. The canvas dries more easily than a mattress.
Root Berth
A Root Berth is like a pipe berth but with the pipes on only the long sides. Root Berths easily roll up for storage. Some use heavy wooden dowels instead of pipes, again fitting into brackets when in use. Some boats provide multiple bracket options, so the canvas can be pulled tight like in a pipe berth, or left looser for a more "hammock-like" berth, helpful in heeling boats or heavy seas.
Lee cloths
Lee cloths are sheets of canvas or other fabric attached to the open side of the bunk (very few are open all round) and usually tucked under the mattress during the day or when sleeping in harbour. The lee cloth keeps the sleeping person in the bunk from falling out when the boat heels during sailing or rough weather.
Berths in trains
Long-distance trains running at night usually have sleeping compartments with sleeping berths. In the case of compartments with two berths, one is on top of the other in a double-bunk arrangement. These beds (the lower bed in a double-bunk arrangement) are usually designed in conjunction with seats which occupy the same space, and each can be folded away when the other is in use.
Sleeper trains are common, especially in Europe, India and China. Sleeper trains usually consist of single or double-berth compartments, as well as couchette, which have four or six berths (consisting of a bottom, middle and top bunk on each side of the compartment).
Open section berths
These berths are clustered in compartments, contrasting with the berths in the open sections of Pullman cars in the United States, common until the 1950s. In these cars, passengers face each other in facing seats during the day. Porters pull down the upper berth and bring the lower seats together to create the lower berth. All of these berths face the aisle running down the center of the sleeping car. Each berth has a curtain for privacy away from the aisle.
Berths in long-distance trucks
Long-haul truckers sleep in berths known as sleeper cabs contained within their trucks. The sleeper-berth's size and location is typically regulated.
See also
Couchette car
Pullman car
References
Beds
Nautical terminology
Passenger rail transport
Ship compartments | Sleeping berth | [
"Biology"
] | 1,061 | [
"Beds",
"Behavior",
"Sleep"
] |
22,090,355 | https://en.wikipedia.org/wiki/Bovista%20nigrescens | Bovista nigrescens, commonly referred to as the brown puffball or black bovist, is an edible cream white or brown puffball. Phylogenetic relationships between Bovista nigrescens and species of Lycoperdaceae were established based on ITS and LSU sequence data from north European taxa.
Description
The fruit body of Bovista nigrescens is across. The roughly spherical fruit body is slightly pointed at the bottom. Although it lacks a sterile base, the fruit body is attached to the substrate by a single mycelial cord which often breaks, leaving the fruit body free to roll about in the wind. The outer wall is white at first, but soon flakes off in large scales at maturity to expose the dark purple-brown to blackish inner wall that encloses the spore mass. These spores leave via an apical pore, which is caused by extensive splitting and cracking. The gleba is often dark purple-brown. The capillitium is highly branched with brown dendroid elements. Spores are brown and ovoid, with a diameter of 4.5–6 μm. They are thick-walled, and nearly smooth, with a central oil droplet, and a long, warted pedicel.
Habitat and distribution
Bovista nigrescens puffballs are often found in grass and pastureland. Although they are found most abundantly in late summer to autumn, they persist in old dried condition for many months. They are uncommon in most areas, but frequent in North and West Europe. They are edible when young. In addition, they are found on the ground, fields, lawns or on roadsides. Typically, they may be found at an altitude of up to .
Uses
The young specimens can be halved and cooked.
References
Agaricaceae
Fungi of Europe
Edible fungi
Puffballs
Fungi described in 1794
Fungus species | Bovista nigrescens | [
"Biology"
] | 386 | [
"Fungi",
"Fungus species"
] |
22,092,654 | https://en.wikipedia.org/wiki/Protomap%20%28neuroscience%29 | The Protomap is a primordial molecular map of the functional areas of the mammalian cerebral cortex during early embryonic development, at a stage when neural stem cells are still the dominant cell type. The protomap is a feature of the ventricular zone, which contains the principal cortical progenitor cells, known as radial glial cells. Through a process called 'cortical patterning', the protomap is patterned by a system of signaling centers in the embryo, which provide positional information and cell fate instructions. These early genetic instructions set in motion a development and maturation process that gives rise to the mature functional areas of the cortex, for example the visual, somatosensory, and motor areas.
The term protomap was coined by Pasko Rakic. The protomap hypothesis was opposed by the protocortex hypothesis, which proposes that cortical proto-areas initially have the same potential, and that regionalization in large part is controlled by external influences, such as axonal inputs from the thalamus to the cortex. However, a series of papers in the year 2000 and in 2001 provided strong evidence against the protocortex hypothesis, and the protomap hypothesis has been well accepted since then. The protomap hypothesis, together with the related radial unit hypothesis, forms our core understanding of the embryonic development of the cerebral cortex. Once the basic structure is present and cortical neurons have migrated to their final destinations, many other processes contribute to the maturation of functional cortical circuits.
See also
Radial unit hypothesis
Neural stem cell
Stem cell
Neurogenesis
Cellular differentiation
Cortical patterning
Gyrification
References
Neuroanatomy
Neuroscience | Protomap (neuroscience) | [
"Biology"
] | 352 | [
"Neuroscience"
] |
1,811,568 | https://en.wikipedia.org/wiki/Wacker%20process | The Wacker process or the Hoechst-Wacker process (named after the chemical companies of the same name) refers to the oxidation of ethylene to acetaldehyde in the presence of palladium(II) chloride and copper(II) chloride as the catalyst. This chemical reaction was one of the first homogeneous catalytic processes with organopalladium chemistry applied on an industrial scale.
History
The Wacker reaction was first reported by Smidt et al.
The development of the chemical process now known as the Wacker process began in 1956 at Wacker Chemie. At the time, many industrial compounds were produced from acetylene, derived from calcium carbide, an expensive and environmentally unfriendly technology. The construction of a new oil refinery in Cologne by Esso close to a Wacker site, combined with the realization that ethylene would be a cheaper feedstock prompted Wacker to investigate its potential uses. As part of the ensuing research effort, a reaction of ethylene and oxygen over palladium on carbon in a quest for ethylene oxide unexpectedly gave evidence for the formation of acetaldehyde (simply based on smell). More research into this ethylene to acetaldehyde conversion resulted in a 1957 patent describing a gas-phase reaction using a heterogeneous catalyst. In the meanwhile Hoechst AG joined the race and after a patent filing forced Wacker into a partnership called Aldehyd GmbH. The heterogeneous process ultimately failed due to catalyst inactivation and was replaced by the water-based homogeneous system for which a pilot plant was operational in 1958. Problems with the aggressive catalyst solution were solved by adopting titanium (newly available for industrial use) as construction material for reactors and pumps. Production plants went into operation in 1960.
Reaction mechanism
The reaction mechanism for the industrial Wacker process (olefin oxidation via palladium(II) chloride) has received significant attention for several decades. Aspects of the mechanism are still debated. A modern formulation is described below:
The initial stoichiometric reaction was first reported by Francis Clifford Phillips in his doctoral dissertation on the composition of Pennsylvanian natural gas defended in 1893. This net reaction can also be described as follows:
[PdCl4]2− + C2H4 + H2O → CH3CHO + Pd + 2 HCl + 2 Cl−
This conversion is followed by reactions that regenerate the Pd(II) catalyst:
Pd + 2 CuCl2 + 2 Cl− → [PdCl4]2− + 2 CuCl
2 CuCl + O2 + 2 HCl → 2 CuCl2 + H2O
Only the alkene and oxygen are consumed. Without copper(II) chloride as an oxidizing agent, Pd(0) metal (resulting from beta-hydride elimination of Pd(II) in the final step) would precipitate, stopping Phillips' reaction after one cycle. Air, pure oxygen, or a number of other reagents can then oxidise the resultant CuCl-chloride mixture back to CuCl2, allowing the cycle to continue.
Historical mechanistic studies
Early mechanistic studies from the 1960s elucidated several key points:
No H/D exchange effects are seen in this reaction. Experiments using C2D4 in water generate CD3CDO, and runs with C2H4 in D2O generate CH3CHO. Thus, keto-enol tautomerization is not a possible mechanistic step.
Negligible kinetic isotope effect with fully deuterated reactants (kH/kD = 1.07). Hence, it is inferred that hydride transfer is not rate-determining.
A significant competitive isotope effect with C2H2D2 (kH/kD ≈ 1.9) suggests that the rate-determining step occurs prior to the formation of acetaldehyde.
High concentrations of chloride and copper(II) chloride favor formation of a new product, chlorohydrin.
Many mechanistic studies on the Wacker process have focused on the pathway for formation of the C–O bond, the hydroxypalladation step. Henry inferred that coordinated hydroxide attacks the ethylene ligand, an internal (syn-) pathway. Later, stereochemical studies by Stille and coworkers supported an anti-addition pathway, whereby free hydroxide attacks the ethylene ligand. The conditions for Stille's experiments differ significantly from industrial process conditions. Other studies using normal industrial Wacker conditions (except with high chloride and high copper chloride concentrations) also yielded products implying that nucleophilic attack occurs by anti-addition.
Kinetic studies were conducted on isotopically substituted allyl alcohols at standard industrial conditions (with low-chloride concentrations) to probe the reaction mechanisms. Those results showed that nucleophilic attack is a slow process, while the proposed mechanisms explaining the earlier stereochemical studies assumed nucleophilic attack to be a fast process.
Subsequent stereochemical studies indicated that both pathways occur and are dependent on chloride concentrations. However, these studies too are disputed since allyl-alcohols may be sensitive to isomerization reactions, and different stereoisomers may be formed from those reactions and not from the standard Wacker process.
In summary, experimental evidence seems to support that syn-addition occurs under low-chloride reaction concentrations (< 1 mol/L, industrial process conditions), while anti-addition occurs under high-chloride (> 3 mol/L) reaction concentrations, probably due to chloride ions saturating the catalyst and inhibiting the inner-sphere mechanism. However, the exact pathway and the reason for this switching of pathways are still unknown.
Further complicating the Wacker process mechanism are questions about the role of copper chloride. Most theories assumed that copper does not play a role in the olefin oxidation mechanism. Yet, experiments by Stangl and Jira found that chlorohydrin formation was dependent on copper chloride concentration. Work by Hosokawa and coworkers yielded a crystallized product containing copper chloride, indicating that it may have a non-innocent role in olefin oxidation. Finally, an ab initio study by Comas-Vives et al. involving no copper co-catalyst found that anti-addition was the preferred pathway. This pathway was later confirmed by copper-free experiments by Anderson and Sigman. A different kinetic rate law with no proton dependence was found under copper-free conditions, indicating the possibility that even small amounts of copper co-catalysts may have non-innocent roles in this chemistry. While these works complicate the picture of the Wacker process mechanism, one should probably infer that this and related chemistry can be sensitive to reaction conditions, and multiple different reaction pathways may be in play.
Another key step in the Wacker process is the migration of the hydrogen from oxygen to chloride and formation of the C-O double bond. This step is generally thought to proceed through a so-called β-hydride elimination with a cyclic four-membered transition state:
In silico studies argue that the transition state for this reaction step is unfavorable and that an alternative reductive elimination reaction mechanism is in play. The proposed reaction steps are likely assisted by a water molecule in solution acting as a catalyst.
Industrial process
Two routes are commercialized for the production of acetaldehyde: one-stage process and two-stage.
One-stage process
Ethene and oxygen are passed co-currently in a reaction tower at about 130 °C and 400 kPa. The catalyst is an aqueous solution of PdCl2 and CuCl2. The acetaldehyde is purified by extractive distillation followed by fractional distillation. Extractive distillation with water removes the light ends, which have lower boiling points than acetaldehyde (chloromethane, chloroethane, and carbon dioxide), at the top, while water and higher-boiling byproducts, such as acetic acid, crotonaldehyde or chlorinated acetaldehydes, are withdrawn together with acetaldehyde at the bottom.
Due to the corrosive nature of the catalyst, the reactor is lined with acid-proof ceramic material and the tubing is made of titanium.
Two-stage process
In the two-stage process, reaction and oxidation are carried out separately in tubular reactors. Unlike the one-stage process, air can be used instead of oxygen. Ethylene is passed through the reactor along with the catalyst at 105–110 °C and 900–1000 kPa. Catalyst solution containing acetaldehyde is separated by flash distillation. The catalyst is oxidized in the oxidation reactor at 1000 kPa using air as the oxidizing medium. The oxidized catalyst solution is separated and sent back to the reactor. The oxygen from the air is used up completely and the exhaust air is circulated as inert gas. The acetaldehyde–water vapor mixture is preconcentrated to 60–90% acetaldehyde by utilizing the heat of reaction, and the discharged water is returned to the flash tower to maintain the catalyst concentration. A two-stage distillation of the crude acetaldehyde follows. In the first stage, low-boiling substances, such as chloromethane, chloroethane and carbon dioxide, are separated. In the second stage, water and higher-boiling by-products, such as chlorinated acetaldehydes and acetic acid, are removed and acetaldehyde is obtained in pure form overhead.
Due to the corrosive nature of the catalyst, the equipment in contact with it is lined with titanium.
In both one- and two-stage processes the acetaldehyde yield is about 95% and the production costs are virtually the same. The advantage of using dilute gases in the two-stage method is balanced by higher investment costs. Both methods yield chlorinated hydrocarbons, chlorinated acetaldehydes, and acetic acid as byproducts. Generally, the choice of method is governed by the raw material and energy situations as well as by the availability of oxygen at a reasonable price.
In general, 100 parts of ethene gives the following (a quick arithmetic check appears after the list):
95 parts acetaldehyde
1.9 parts chlorinated aldehydes
1.1 parts unconverted ethene
0.8 parts carbon dioxide
0.7 parts acetic acid
0.1 parts chloromethane
0.1 parts ethyl chloride
0.3 parts ethane, methane, crotonaldehyde
and other minor side products
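A quick arithmetic check of this balance (a sketch; the figures are exactly those listed above, and the percentages follow from them):

```python
parts = {
    "acetaldehyde": 95.0, "chlorinated aldehydes": 1.9, "unconverted ethene": 1.1,
    "carbon dioxide": 0.8, "acetic acid": 0.7, "chloromethane": 0.1,
    "ethyl chloride": 0.1, "ethane, methane, crotonaldehyde": 0.3,
}
total = sum(parts.values())
print(total)                          # 100.0 parts of ethene accounted for
for name, p in parts.items():
    print(f"{name}: {100 * p / total:.1f}%")
```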
Tsuji-Wacker oxidation
The advent of the Wacker process has spurred many investigations into the utility and applicability of the reaction to more complex terminal olefins. The Tsuji-Wacker oxidation is the palladium(II)-catalyzed transformation of such olefins into carbonyl compounds. Clement and Selwitz were the first to find that using aqueous DMF as the solvent allowed for the oxidation of 1-dodecene to 2-dodecanone, which addressed the insolubility problem of higher-order olefins in water. Fahey noted that the use of 3-methylsulfolane in place of DMF as the solvent increased the yield of the oxidation of 3,3-dimethylbut-1-ene. Two years later, Tsuji applied the Selwitz conditions to the selective oxidation of terminal olefins with multiple functional groups, and demonstrated its utility in the synthesis of complex substrates. Further development of the reaction has led to various catalytic systems addressing the selectivity of the reaction, as well as to the introduction of intermolecular and intramolecular oxidations with non-water nucleophiles.
Regioselectivity
Markovnikov addition
The Tsuji-Wacker oxidation oxidizes a terminal olefin to the corresponding methyl ketone under Wacker process conditions. Almost identical to that of the Wacker process, the proposed catalytic cycle (Figure 1) begins with complexation of PdCl2 and two chloride anions to [PdCl4]2−, which then undergoes ligand exchange of two chloride ligands for water and alkene to form the Pd(Cl2)(H2O)(alkene) complex. A water molecule then attacks the olefin regioselectively through an outer-sphere mechanism in a Markovnikov fashion, to form the more thermodynamically stable Pd(Cl2)(OH)(-CH2-CHOH-R) complex. Dissociation of a chloride ligand to give the three-coordinate palladium complex promotes β-hydride elimination; subsequent 1,2-hydride migratory insertion generates the Pd(Cl2)(OH)(-CHOHR-CH3) complex. This undergoes β-hydride elimination to release the ketone, and subsequent reductive elimination produces HCl, water, and palladium(0). Finally, palladium(0) is reoxidized to PdCl2 with two equivalents of Cu(II)Cl2, which in turn can be reoxidized by O2.
The oxidation of terminal olefins generally provides the Markovnikov ketone product; however, in cases where the substrate favors the aldehyde (discussed below), different ligands can be used to enforce Markovnikov regioselectivity. The use of sparteine as a ligand (Figure 2, A) favors nucleopalladation at the terminal carbon to minimize steric interaction between the palladium complex and the substrate. The Quinox-ligated palladium catalyst is used to favor ketone formation when the substrate contains a directing group (Figure 2, B). When such a substrate binds to Pd(Quinox)(OOtBu), the complex is coordinatively saturated, which prevents binding of the directing group and results in formation of the Markovnikov product. The efficiency of this ligand is also attributed to its electronic properties: the anionic TBHP prefers to bind trans to the oxazoline, and the olefin coordinates trans to the quinoline.
Anti-Markovnikov addition
Anti-Markovnikov addition selectivity toward the aldehyde can be achieved by exploiting the inherent stereoelectronics of the substrate. Placement of a directing group at the homo-allylic (i.e. Figure 3, A) or allylic position (i.e. Figure 3, B) relative to the terminal olefin favors the anti-Markovnikov aldehyde product, which suggests that in the catalytic cycle the directing group chelates to the palladium complex such that water attacks the anti-Markovnikov carbon to generate the more thermodynamically stable palladacycle. Anti-Markovnikov selectivity is also observed in styrenyl substrates (i.e. Figure 3, C), presumably via an η4-palladium–styrene complex after water attacks anti-Markovnikov. More examples of substrate-controlled, anti-Markovnikov Tsuji-Wacker oxidations of olefins are given in reviews by Namboothiri, Feringa, and Muzart.
Grubbs and co-workers paved the way for the anti-Markovnikov oxidation of stereoelectronically unbiased terminal olefins through the use of a palladium–nitrite system (Figure 2, D). In this system, the terminal olefin was oxidized to the aldehyde with high selectivity through a catalyst-controlled pathway. The mechanism is under investigation, but evidence suggests it proceeds through a nitrite radical that adds to the terminal carbon to generate the more thermodynamically stable secondary radical. Grubbs expanded this methodology to more complex, unbiased olefins.
Scope
Oxygen nucleophiles
The intermolecular oxidations of olefins with alcohols as the nucleophile typically generate ketals, whereas the palladium-catalyzed oxidations of olefins with carboxylic acids as the nucleophile generate vinylic or allylic carboxylates. In the case of diols, their reactions with alkenes typically generate ketals, whereas reactions of olefins bearing electron-withdrawing groups tend to form acetals.
Palladium-catalyzed intermolecular oxidations of dienes with carboxylic acids and alcohols as donors give 1,4-addition products. In the case of cyclohexadiene (Figure 4, A), Bäckvall found that the stereochemical outcome of the product depends on the concentration of LiCl. This reaction proceeds by first generating the Pd(OAc)(benzoquinone)(allyl) complex, through anti-nucleopalladation of the diene with acetate as the nucleophile. The absence of LiCl induces an inner-sphere reductive elimination, which affords the trans-acetate stereochemistry and gives the trans-1,4-adduct. The presence of LiCl displaces acetate with chloride due to its higher binding affinity, which forces an outer-sphere acetate attack anti to the palladium and affords the cis-acetate stereochemistry, giving the cis-1,4-adduct. Intramolecular oxidative cyclization: 2-(2-cyclohexenyl)phenol cyclizes to the corresponding dihydrobenzofuran (Figure 4, B); 1-cyclohexadiene-acetic acid in the presence of acetic acid cyclizes to the corresponding lactone-acetate 1,4-adduct (Figure 4, C), with cis and trans selectivity controlled by the presence of LiCl.
Nitrogen nucleophiles
The oxidative aminations of olefins are generally conducted with amides or imides; amines are thought to be protonated by the acidic medium or to bind the metal center too tightly for the catalytic chemistry to occur. These nitrogen nucleophiles are found to be competent in both intermolecular and intramolecular reactions; some examples are depicted (Figure 5, A, B).
References
Organic redox reactions
Organometallic chemistry
Homogeneous catalysis
Acetylene
Ethylene
Palladium
Name reactions | Wacker process | [
"Chemistry"
] | 3,810 | [
"Catalysis",
"Organic redox reactions",
"Organic reactions",
"Name reactions",
"Homogeneous catalysis",
"Organometallic chemistry"
] |
1,812,188 | https://en.wikipedia.org/wiki/Convective%20overshoot | Convective overshoot is a phenomenon of convection carrying material beyond an unstable region of the atmosphere into a stratified, stable region. Overshoot is caused by the momentum of the convecting material, which carries the material beyond the unstable region.
Deep, moist convection in Earth's atmosphere
One example is thermal columns extending above the top of the equilibrium level (EL) in thunderstorms: unstable air rising from (or near) the surface normally stops rising at the EL (near the tropopause) and spreads out as an anvil cloud; but in the event of a strong updraft, unstable air is carried past the EL as an overshooting top or dome. A parcel of air will stop ascending at the maximum parcel level (MPL). This overshoot is responsible for most of the turbulence experienced in the cruise phase of commercial air flights.
Stellar convection
Convective overshoot also occurs at the boundaries of convective zones in stars. An example of this is at the base of the convection zone in the solar interior. The heat of the Sun's thermonuclear fusion is carried outward by radiation in the deep interior radiation zone and by convective circulation in the outer convection zone, but cool sinking material from the surface penetrates further into the radiative zone than theory would suggest. This affects the heat transfer rate and the temperature of the solar interior which can be indirectly measured by helioseismology. The layer between the Sun's convective and radiative zone is called the tachocline.
Overshooting can have more pronounced effects on the evolution of stars that have a convective core, such as intermediate- and high-mass stars. Convective material that overshoots beyond the core mixes with the surrounding material, causing some of the surrounding material to mix into the core. As a result, the core mass at the end of the main sequence can be larger than would otherwise be expected. This leads to big differences in behaviour on the subgiant and giant branches for intermediate mass stars, and to radical changes in the evolution of massive supergiant stars.
References
Severe weather and convection
Cloud and fog physics
Solar phenomena | Convective overshoot | [
"Physics"
] | 451 | [
"Physical phenomena",
"Stellar phenomena",
"Solar phenomena"
] |
1,812,809 | https://en.wikipedia.org/wiki/Pseudorandom%20generator | In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution.
Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size.
It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory.
Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions.
Definition
Let 𝒜 be a class of functions A : {0,1}* → {0,1}*.
These functions are the statistical tests that the pseudorandom generator will try to fool, and they are usually algorithms.
Sometimes the statistical tests are also called adversaries or distinguishers. The notation {0,1}* in the codomain of the functions is the Kleene star.
A function G : {0,1}^ℓ → {0,1}^n with ℓ < n is a pseudorandom generator against 𝒜 with bias ε if, for every A in 𝒜, the statistical distance between the distributions A(G(U_ℓ)) and A(U_n) is at most ε, where U_k is the uniform distribution on {0,1}^k.
The quantity ℓ is called the seed length and the quantity n − ℓ is called the stretch of the pseudorandom generator.
A pseudorandom generator against a family of adversaries 𝒜 = {𝒜_n} with bias ε(n) is a family of pseudorandom generators {G_n}, where G_n is a pseudorandom generator against 𝒜_n with bias ε(n) and seed length ℓ(n).
In most applications, the family represents some model of computation or some set of algorithms, and one is interested in designing a pseudorandom generator with small seed length and bias, and such that the output of the generator can be computed by the same sort of algorithm.
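To make the definition concrete, the following Python sketch (a toy illustration, not a construction from the literature: the "generator" merely repeats its seed, and the tests are hypothetical choices) computes the bias of the generator against every single-bit test and against a test comparing two output bits:

```python
from itertools import product

SEED_LEN, OUT_LEN = 3, 6

def generator(seed):
    """Toy 'generator': stretch the seed by repeating it (clearly not secure)."""
    return seed + seed

def bias(test):
    """Statistical distance between test(G(U_seed)) and test(U_out) for a 0/1-valued test."""
    p_gen = sum(test(generator(s)) for s in product((0, 1), repeat=SEED_LEN)) / 2**SEED_LEN
    p_uni = sum(test(x) for x in product((0, 1), repeat=OUT_LEN)) / 2**OUT_LEN
    return abs(p_gen - p_uni)

# Each output bit of G is individually uniform, so single-bit tests see zero bias.
print(max(bias(lambda x, i=i: x[i]) for i in range(OUT_LEN)))   # 0.0
# A test comparing bit 0 with bit 3 exposes the repetition: G fails badly.
print(bias(lambda x: int(x[0] == x[SEED_LEN])))                 # 0.5
```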
In cryptography
In cryptography, the class usually consists of all circuits of size polynomial in the input and with a single bit output, and one is interested in designing pseudorandom generators that are computable by a polynomial-time algorithm and whose bias is negligible in the circuit size.
These pseudorandom generators are sometimes called cryptographically secure pseudorandom generators (CSPRGs).
It is not known if cryptographically secure pseudorandom generators exist.
Proving that they exist is difficult since their existence implies P ≠ NP, which is widely believed but a famously open problem.
The existence of cryptographically secure pseudorandom generators is widely believed. This is because it has been proven that pseudorandom generators can be constructed from any one-way function, and one-way functions are believed to exist. Pseudorandom generators are necessary for many applications in cryptography.
The pseudorandom generator theorem shows that cryptographically secure pseudorandom generators exist if and only if one-way functions exist.
Uses
Pseudorandom generators have numerous applications in cryptography. For instance, pseudorandom generators provide an efficient analog of one-time pads. It is well known that in order to encrypt a message m in a way that the cipher text provides no information on the plaintext, the key k used must be random over strings of length |m|. Perfectly secure encryption is very costly in terms of key length. Key length can be significantly reduced using a pseudorandom generator if perfect security is replaced by semantic security. Common constructions of stream ciphers are based on pseudorandom generators.
Pseudorandom generators may also be used to construct symmetric key cryptosystems, where a large number of messages can be safely encrypted under the same key. Such a construction can be based on a pseudorandom function family, which generalizes the notion of a pseudorandom generator.
In the 1980s, simulations in physics began to use pseudorandom generators to produce sequences with billions of elements, and by the late 1980s, evidence had developed that a few common generators gave incorrect results in such cases as phase transition properties of the 3D Ising model and shapes of diffusion-limited aggregates. Then in the 1990s, various idealizations of physics simulations—based on random walks, correlation functions, localization of eigenstates, etc., were used as tests of pseudorandom generators.
Testing
NIST published the SP 800-22 randomness tests to check whether a pseudorandom generator produces high-quality random bits. Yongge Wang showed that NIST testing is not enough to detect weak pseudorandom generators and developed a statistical-distance-based testing technique, LILtest.
For derandomization
A main application of pseudorandom generators lies in the derandomization of computation that relies on randomness, without corrupting the result of the computation.
Physical computers are deterministic machines, and obtaining true randomness can be a challenge.
Pseudorandom generators can be used to efficiently simulate randomized algorithms using little or no randomness.
In such applications, the class 𝒜 describes the randomized algorithm or class of randomized algorithms that one wants to simulate, and the goal is to design an "efficiently computable" pseudorandom generator against 𝒜 whose seed length is as short as possible.
If a full derandomization is desired, a completely deterministic simulation proceeds by replacing the random input to the randomized algorithm with the pseudorandom string produced by the pseudorandom generator.
The simulation does this for all possible seeds and averages the output of the various runs of the randomized algorithm in a suitable way.
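A minimal sketch of this full derandomization in Python, assuming a generator prg and a 0/1-valued randomized decision procedure randomized_alg (both placeholder names); a majority vote over all seeds plays the role of the averaging described above:

```python
from itertools import product

def derandomize(randomized_alg, prg, seed_len):
    """Deterministically simulate a randomized decision algorithm: enumerate
    every seed, feed the pseudorandom string to the algorithm in place of
    true random coins, and take a majority vote over the 2**seed_len runs.
    The simulation is only efficient when seed_len is small (e.g. logarithmic)."""
    votes = sum(randomized_alg(prg(seed))
                for seed in product((0, 1), repeat=seed_len))
    return int(2 * votes >= 2**seed_len)
```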
Constructions
For polynomial time
A fundamental question in computational complexity theory is whether all polynomial time randomized algorithms for decision problems can be deterministically simulated in polynomial time. The existence of such a simulation would imply that BPP = P. To perform such a simulation, it is sufficient to construct pseudorandom generators against the family F of all circuits of size s(n) whose inputs have length n and output a single bit, where s(n) is an arbitrary polynomial, the seed length of the pseudorandom generator is O(log n) and its bias is ⅓.
In 1991, Noam Nisan and Avi Wigderson provided a candidate pseudorandom generator with these properties. In 1997 Russell Impagliazzo and Avi Wigderson proved that the construction of Nisan and Wigderson is a pseudorandom generator assuming that there exists a decision problem that can be computed in time 2^O(n) on inputs of length n but requires circuits of size 2^Ω(n).
For logarithmic space
While unproven assumptions about circuit complexity are needed to prove that the Nisan–Wigderson generator works for time-bounded machines, it is natural to restrict the class of statistical tests further such that we need not rely on such unproven assumptions.
One class for which this has been done is the class of machines whose work space is bounded by O(log n).
Using a repeated squaring trick known as Savitch's theorem, it is easy to show that every probabilistic log-space computation can be simulated in space O(log² n).
Noam Nisan (1992) showed that this derandomization can actually be achieved with a pseudorandom generator of seed length O(log² n) that fools all log-space machines.
Nisan's generator has been used by Saks and Zhou (1999) to show that probabilistic log-space computation can be simulated deterministically in space O(log^(3/2) n).
This result was improved by William Hoza in 2021 to space O(log^(3/2) n / √(log log n)).
For linear functions
When the statistical tests consist of all multivariate linear functions over some finite field, one speaks of epsilon-biased generators.
Known constructions achieve a seed length of O(log(n/ε)), which is optimal up to constant factors.
Pseudorandom generators for linear functions often serve as a building block for more complicated pseudorandom generators.
For polynomials
Viola (2008) proves that taking the sum of several independent small-bias generators fools polynomials of degree d.
The seed length remains logarithmic in n for any constant degree d.
For constant-depth circuits
Pseudorandom generators are also known for the class of constant-depth circuits that produce a single output bit.
Limitations on probability
The pseudorandom generators used in cryptography and universal algorithmic derandomization have not been proven to exist, although their existence is widely believed. Proofs for their existence would imply proofs of lower bounds on the circuit complexity of certain explicit functions. Such circuit lower bounds cannot be proved in the framework of natural proofs assuming the existence of stronger variants of cryptographic pseudorandom generators.
References
Sanjeev Arora and Boaz Barak, Computational Complexity: A Modern Approach, Cambridge University Press (2009), .
Oded Goldreich, Computational Complexity: A Conceptual Perspective, Cambridge University Press (2008), .
Oded Goldreich, Foundations of Cryptography: Basic Tools, Cambridge University Press (2001), .
Algorithmic information theory
Pseudorandomness
Cryptography | Pseudorandom generator | [
"Mathematics",
"Engineering"
] | 1,791 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
1,813,193 | https://en.wikipedia.org/wiki/Maximum%20entropy%20probability%20distribution | In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.
Definition of entropy and differential entropy
If X is a continuous random variable with probability density p(x), then the differential entropy of X is defined as h(X) = −∫ p(x) log p(x) dx.
If X is a discrete random variable with distribution given by P(X = xk) = pk for k = 1, 2, …,
then the entropy of X is defined as H(X) = −Σk pk log pk.
The seemingly divergent term pk log pk is replaced by zero, whenever pk = 0.
This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing H(X) (or h(X)) will also maximize the more general forms.
The base of the logarithm is not important, as long as the same one is used consistently: Change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits; mathematicians and physicists often prefer the natural logarithm, resulting in a unit of "nat"s for the entropy.
However, the chosen measure is crucial, even though the typical use of the Lebesgue measure is often defended as a "natural" choice: Which measure is chosen determines the entropy and the consequent maximum entropy distribution.
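As a small numerical illustration of the discrete definition, the following Python sketch evaluates the entropy of a few distributions, using the convention above that a term with p = 0 contributes zero:

```python
import math

def entropy(probs, base=math.e):
    """Entropy of a discrete distribution; the 0*log(0) term is taken as 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5], base=2))   # 1.0 bit (fair coin)
print(entropy([0.25] * 4, base=2))   # 2.0 bits (uniform on four outcomes)
print(entropy([1.0, 0.0], base=2))   # 0.0 bits (degenerate distribution)
```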
Distributions with measured constants
Many statistical distributions of applicable interest are those for which the moments or other measurable quantities are constrained to be constants. The following theorem by Ludwig Boltzmann gives the form of the probability density under these constraints.
Continuous case
Suppose S is a closed subset of the real numbers ℝ and we choose to specify n measurable functions f1, …, fn and n numbers a1, …, an. We consider the class C of all real-valued random variables which are supported on S (i.e. whose density function is zero outside of S) and which satisfy the n moment conditions: E[fj(X)] ≥ aj for j = 1, …, n.
If there is a member in C whose density function is positive everywhere in S, and if there exists a maximal entropy distribution for C, then its probability density p has the following form: p(x) = c · exp(Σj λj fj(x)) for all x in S,
where c is a constant. The constant c and the Lagrange multipliers λ1, …, λn solve the constrained optimization problem, with the condition ∫S p(x) dx = 1 (which ensures that p integrates to unity).
Using the Karush–Kuhn–Tucker conditions, it can be shown that the optimization problem has a unique solution, because the objective function in the optimization is concave.
Note that when the moment constraints are equalities (instead of inequalities), that is, E[fj(X)] = aj for j = 1, …, n,
then the constraint condition λj ≥ 0 can be dropped, which makes the optimization over the Lagrange multipliers unconstrained.
Discrete case
Suppose S = {x1, x2, …} is a (finite or infinite) discrete subset of the reals, and that we choose to specify n functions f1, …, fn and n numbers a1, …, an. We consider the class C of all discrete random variables X which are supported on S and which satisfy the n moment conditions E[fj(X)] ≥ aj for j = 1, …, n.
If there exists a member of class C which assigns positive probability to all members of S, and if there exists a maximum entropy distribution for C, then this distribution has the following shape: P(X = xk) = c · exp(Σj λj fj(xk)) for k = 1, 2, …,
where c is a constant, and the constants c and λ1, …, λn solve the constrained optimization problem with the condition Σk P(X = xk) = 1.
Again as above, if the moment conditions are equalities (instead of inequalities), then the constraint condition is not present in the optimization.
Proof in the case of equality constraints
In the case of equality constraints, this theorem is proved with the calculus of variations and Lagrange multipliers. The constraints can be written as ∫ p(x) fj(x) dx = aj for j = 0, 1, …, n, with f0(x) = 1 and a0 = 1 (the normalization condition).
We consider the functional J(p) = −∫ p(x) ln p(x) dx + η (∫ p(x) dx − 1) + Σj λj (∫ p(x) fj(x) dx − aj),
where η and the λj are the Lagrange multipliers. The zeroth constraint ensures the second axiom of probability. The other constraints are that the measurements of the function are given constants up to order n. The entropy attains an extremum when the functional derivative is equal to zero: δJ/δp(x) = −ln p(x) − 1 + η + Σj λj fj(x) = 0.
Therefore, the extremal entropy probability distribution in this case must be of the form (x ∈ S) p(x) = e^(η − 1) exp(Σj λj fj(x)),
remembering that the multipliers are fixed by the constraints. It can be verified that this is the maximal solution by checking that the variation around this solution is always negative.
Uniqueness of the maximum
Suppose p1, p2 are distributions satisfying the expectation-constraints. Letting 0 < α < 1 and considering the mixture distribution q = α p1 + (1 − α) p2, it is clear that this distribution satisfies the expectation-constraints and furthermore has as support the union of the supports of p1 and p2. From basic facts about entropy, it holds that H(q) ≥ α H(p1) + (1 − α) H(p2). Taking limits α → 1 and α → 0 respectively, yields H(q) ≥ H(p1) and H(q) ≥ H(p2).
It follows that a distribution satisfying the expectation-constraints and maximising entropy must necessarily have full support — i.e. the distribution is almost everywhere strictly positive. It follows that the maximising distribution must be an internal point in the space of distributions satisfying the expectation-constraints, that is, it must be a local extreme. Thus it suffices to show that the local extreme is unique, in order to show both that the entropy-maximising distribution is unique (and this also shows that the local extreme is the global maximum).
Suppose p1, p2 are local extremes. Reformulating the above computations, these are characterised by their Lagrange-multiplier parameters, and similarly for their normalizing constants. We now note a series of identities: via the satisfaction of the expectation-constraints and utilising gradients / directional derivatives, one has
and similarly for Letting one obtains:
where for some Computing further one has
where the latter distribution is similar to the one above, only parameterised differently. Assuming that no non-trivial linear combination of the observables is almost everywhere (a.e.) constant (which e.g. holds if the observables are independent and not a.e. constant), it holds that the corresponding random variable has non-zero variance, unless the two parameter vectors coincide. By the above equation it is thus clear that the latter must be the case. Hence the parameters characterising the local extrema are identical, which means that the distributions themselves are identical. Thus, the local extreme is unique and by the above discussion, the maximum is unique – provided a local extreme actually exists.
Caveats
Note that not all classes of distributions contain a maximum entropy distribution. It is possible that a class contain distributions of arbitrarily large entropy (e.g. the class of all continuous distributions on R with mean 0 but arbitrary standard deviation), or that the entropies are bounded above but there is no distribution which attains the maximal entropy. It is also possible that the expected value restrictions for the class C force the probability distribution to be zero in certain subsets of S. In that case our theorem doesn't apply, but one can work around this by shrinking the set S.
Examples
Every probability distribution is trivially a maximum entropy probability distribution under the constraint that the distribution has its own entropy. To see this, rewrite the density as p(x) = exp(ln p(x)) and compare to the expression of the theorem above. By choosing ln p(x) to be the measurable function and
E[ln p(X)] = −H
to be the constant, p(x) is the maximum entropy probability distribution under the constraint
E[ln p(X)] = −H.
Nontrivial examples are distributions that are subject to multiple constraints that are different from the assignment of the entropy. These are often found by starting with the same procedure and finding that ln p(x) can be separated into parts.
A table of examples of maximum entropy distributions is given in Lisman (1972) and Park & Bera (2009).
Uniform and piecewise uniform distributions
The uniform distribution on the interval [a,b] is the maximum entropy distribution among all continuous distributions which are supported in the interval [a, b], and thus the probability density is 0 outside of the interval. This uniform density can be related to Laplace's principle of indifference, sometimes called the principle of insufficient reason. More generally, if we are given a subdivision a=a0 < a1 < ... < ak = b of the interval [a,b] and probabilities p1,...,pk that add up to one, then we can consider the class of all continuous distributions such that
The density of the maximum entropy distribution for this class is constant on each of the intervals [aj-1,aj). The uniform distribution on the finite set {x1,...,xn} (which assigns a probability of 1/n to each of these values) is the maximum entropy distribution among all discrete distributions supported on this set.
Positive and specified mean: the exponential distribution
The exponential distribution, for which the density function is f(x) = λe^(−λx) for x ≥ 0,
is the maximum entropy distribution among all continuous distributions supported in [0,∞) that have a specified mean of 1/λ.
In the case of distributions supported on [0,∞), the maximum entropy distribution depends on relationships between the first and second moments. In specific cases, it may be the exponential distribution, or may be another distribution, or may be undefinable.
Specified mean and variance: the normal distribution
The normal distribution N(μ,σ2), for which the density function is f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)),
has maximum entropy among all real-valued distributions supported on (−∞,∞) with a specified variance σ2 (a particular moment). The same is true when the mean μ and the variance σ2 is specified (the first two moments), since entropy is translation invariant on (−∞,∞). Therefore, the assumption of normality imposes the minimal prior structural constraint beyond these moments. (See the differential entropy article for a derivation.)
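As a numeric sanity check of this claim, the following sketch compares the closed-form differential entropies of a normal and a Laplace distribution matched to the same variance (the entropy formulas used are the standard ones, in nats):

```python
import math

sigma2 = 2.0                                # common variance

# Differential entropy of N(mu, sigma^2): (1/2) ln(2*pi*e*sigma^2)
h_normal = 0.5 * math.log(2 * math.pi * math.e * sigma2)

# Laplace(b) has variance 2*b^2 and entropy 1 + ln(2*b); match b accordingly.
b = math.sqrt(sigma2 / 2)
h_laplace = 1 + math.log(2 * b)

print(h_normal, h_laplace)                  # the normal entropy is the larger
```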
Discrete distributions with specified mean
Among all the discrete distributions supported on the set {x1,...,xn} with a specified mean μ, the maximum entropy distribution has the following shape: P(X = xk) = C·r^(xk) for k = 1, …, n,
where the positive constants C and r can be determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ.
For example, suppose a large number N of dice are thrown, and you are told that the sum of all the shown numbers is S. Based on this information alone, what would be a reasonable assumption for the number of dice showing 1, 2, ..., 6? This is an instance of the situation considered above, with {x1,...,x6} = {1,...,6} and μ = S/N.
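A numeric sketch of the dice example (the function name and tolerance are arbitrary choices): it solves for r, and hence C, so that the distribution C·r^k on {1, …, 6} has the prescribed mean μ = S/N:

```python
def maxent_dice(mu, tol=1e-12):
    """Maximum entropy distribution on {1,...,6} with mean mu (1 < mu < 6).
    The distribution has the form p_k = C * r**k; find r by bisection."""
    def mean(r):
        w = [r**k for k in range(1, 7)]
        return sum(k * wk for k, wk in zip(range(1, 7), w)) / sum(w)

    lo, hi = 1e-9, 1e9                 # mean(r) grows from ~1 to ~6 with r
    while hi - lo > tol * lo:
        mid = (lo * hi) ** 0.5         # geometric bisection over a wide range
        lo, hi = (mid, hi) if mean(mid) < mu else (lo, mid)
    r = (lo * hi) ** 0.5
    C = 1 / sum(r**k for k in range(1, 7))
    return [C * r**k for k in range(1, 7)]

print(maxent_dice(3.5))   # mean 3.5 gives r = 1: the uniform distribution
print(maxent_dice(4.5))   # a larger mean skews probability toward high faces
```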
Finally, among all the discrete distributions supported on the infinite set {x1, x2, ...} with mean μ, the maximum entropy distribution has the shape: P(X = xk) = C·r^(xk),
where again the constants C and r are determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ. For example, in the case that xk = k, this gives P(X = k) = (1/μ)(1 − 1/μ)^(k−1),
such that the respective maximum entropy distribution is the geometric distribution.
Circular random variables
For a continuous random variable distributed about the unit circle, the Von Mises distribution maximizes the entropy when the real and imaginary parts of the first circular moment are specified or, equivalently, the circular mean and circular variance are specified.
When the mean and variance of the angles modulo are specified, the wrapped normal distribution maximizes the entropy.
Maximizer for specified mean, variance and skew
There exists an upper bound on the entropy of continuous random variables on ℝ with a specified mean, variance, and skew. However, there is no distribution which achieves this upper bound, because the candidate density c·exp(λ1x + λ2x² + λ3x³) is unbounded when λ3 ≠ 0 (see Cover & Thomas (2006: chapter 12)).
However, the maximum entropy is ε-achievable: a distribution's entropy can be arbitrarily close to the upper bound. Start with a normal distribution of the specified mean and variance. To introduce a positive skew, perturb the normal distribution upward by a small amount at a value many standard deviations larger than the mean. The skewness, being proportional to the third moment, will be affected more than the lower-order moments.
This is a special case of the general case in which the exponential of any odd-order polynomial in x will be unbounded on ℝ. For example, c·e^(λx) will likewise be unbounded on ℝ, but when the support is limited to a bounded or semi-bounded interval the upper entropy bound may be achieved (e.g. if x lies in the interval [0,∞) and λ < 0, the exponential distribution will result).
Maximizer for specified mean and deviation risk measure
Every distribution with log-concave density is a maximal entropy distribution with specified mean μ and deviation risk measure D.
In particular, the maximal entropy distribution with specified mean μ and deviation D is:
The normal distribution, if D is the standard deviation;
The Laplace distribution, if D is the average absolute deviation;
The distribution with density of the form f(x) = c·exp(ax + b[(x − μ)₋]²), if D is the standard lower semi-deviation, where a, b, c are constants and the function [t]₋ returns only the negative values of its argument, otherwise zero.
Other examples
In the table below, each listed distribution maximizes the entropy for a particular set of functional constraints listed in the third column, and the constraint that x be included in the support of the probability density, which is listed in the fourth column.
Several listed examples (Bernoulli, geometric, exponential, Laplace, Pareto) are trivially true, because their associated constraints are equivalent to the assignment of their entropy. They are included anyway because their constraint is related to a common or easily measured quantity.
For reference, Γ(x) is the gamma function, ψ(x) is the digamma function, B(p,q) is the beta function, and γ is the Euler–Mascheroni constant.
The maximum entropy principle can be used to upper bound the entropy of statistical mixtures.
See also
Exponential family
Gibbs measure
Partition function (mathematics)
Maximal entropy random walk - maximizing entropy rate for a graph
Notes
Citations
References
F. Nielsen, R. Nock (2017), MaxEnt upper bounds for the differential entropy of univariate continuous distributions, IEEE Signal Processing Letters, 24(4), 402–406
I. J. Taneja (2001), Generalized Information Measures and Their Applications. Chapter 1
Nader Ebrahimi, Ehsan S. Soofi, Refik Soyer (2008), "Multivariate maximum entropy identification, transformation, and dependence", Journal of Multivariate Analysis 99: 1217–1231,
Entropy and information
Continuous distributions
Discrete distributions
Particle statistics
Types of probability distributions | Maximum entropy probability distribution | [
"Physics",
"Mathematics"
] | 2,870 | [
"Physical quantities",
"Particle statistics",
"Entropy and information",
"Entropy",
"Statistical mechanics",
"Dynamical systems"
] |
1,813,514 | https://en.wikipedia.org/wiki/Free-fall%20time | The free-fall time is the characteristic time that it would take a body to collapse under its own gravitational attraction, if no other forces existed to oppose the collapse. As such, it plays a fundamental role in setting the timescale for a wide variety of astrophysical processes—from star formation to helioseismology to supernovae—in which gravity plays a dominant role.
Derivation
Infall to a point source of gravity
It is relatively simple to derive the free-fall time by applying Kepler's Third Law of planetary motion to a degenerate elliptic orbit. Consider a point mass m at distance R from a point source of mass M which falls radially inward to it. (Crucially, Kepler's Third Law depends only on the semi-major axis of the orbit, and does not depend on the eccentricity). A purely radial trajectory is an example of a degenerate ellipse with an eccentricity of 1 and semi-major axis R/2. Therefore, the time it would take a body to fall inward, turn around, and return to its original position is the same as the period of a circular orbit of radius R/2, by Kepler's Third Law: T = 2π √((R/2)³/(GM)).
To see that the semi-major axis is R/2, we must examine properties of orbits as they become increasingly elliptical. Kepler's First Law states that an orbit is an ellipse with the center of mass as one focus. In the case of a very small mass falling toward a very large mass M, the center of mass is within the larger mass. The focus of an ellipse is increasingly off-center with increasing ellipticity. In the limiting case of a degenerate ellipse with an eccentricity of 1, the largest diameter of the orbit extends from the initial position of the infalling object to the point source of mass M. In other words, the ellipse becomes a line of length R. The semi-major axis is half the width of the ellipse along the long axis, which in the degenerate case becomes R/2.
If the free-falling body completed a full orbit, it would begin at distance R from the point source mass M, fall inward until it reached that point source, then return to its original position. In real systems, the point source mass isn't truly a point source and the infalling body eventually collides with some surface. Thus, it only completes half the orbit. But the orbit is symmetrical, so the free-fall time is half the period: t_ff = T/2 = π √(R³/(8GM)).
(This formula also follows from the formula for the falling time as a function of position.)
For example, the time for an object in the orbit of the Earth around the Sun, with period T = 1 year, to fall into the Sun if it were suddenly stopped in orbit, would be t_ff = T/(4√2) ≈ 0.177 years.
This is about 64.6 days.
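As a numerical check of this example (a sketch; the astronomical constants are rounded):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
R = 1.496e11        # Earth-Sun distance (1 au), m

# Half the period of the degenerate ellipse with semi-major axis R/2:
t_ff = math.pi * math.sqrt(R**3 / (8 * G * M_sun))
print(t_ff / 86400)                   # ~64.6 days

# Equivalently, the circular orbital period divided by 4*sqrt(2):
print(365.25 / (4 * math.sqrt(2)))    # ~64.6 days
```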
Infall of a spherically-symmetric distribution of mass
Now, consider a case where the mass is not a point mass, but is distributed in a spherically-symmetric distribution about the center, with an average mass density of ρ = M/V,
where the volume of a sphere is: V = (4/3) π R³.
Let us assume that the only force acting is gravity. Then, as first demonstrated by Newton, and can easily be demonstrated using the divergence theorem, the acceleration of gravity at any given distance R from the center of the sphere depends only upon the total mass contained within R. The consequence of this result is that if one imagined breaking the sphere up into a series of concentric shells, each shell would collapse only subsequent to the shells interior to it, and no shells cross during collapse. As a result, the free-fall time of a test particle at R can be expressed solely in terms of the total mass M interior to it. In terms of the average density ρ interior to R, the free-fall time is t_ff = √(3π/(32Gρ)) ≈ 66430 s / √ρ,
where the latter is in SI units.
This result is exactly the same as that from the previous section when we set M = (4π/3) ρ R³.
Applications
The free-fall time is a very useful estimate of the relevant timescale for a number of astrophysical processes. To get a sense of its application, we may write t_ff ≈ 35 min × (ρ / (1 g/cm³))^(−1/2).
Here we have estimated the numerical value for the free-fall time as roughly 35 minutes for a body of mean density 1 g/cm3.
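A sketch of the same estimate, with the mean density as the only input (the solar mean density used below, about 1.41 g/cm³, is a rounded standard value):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def free_fall_time(rho):
    """Free-fall time in seconds for a mean density rho in kg/m^3."""
    return math.sqrt(3 * math.pi / (32 * G * rho))

print(free_fall_time(1000.0) / 60)    # ~35 minutes for 1 g/cm^3
print(free_fall_time(1410.0) / 60)    # ~30 minutes for the Sun's mean density
```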
Comparison
For an object falling from infinity in a capture orbit, the time it takes from a given position to fall to the central point mass is the same as the free-fall time, except for a constant factor of 4/(3π) ≈ 0.42.
References
Galactic dynamics Binney, James; Tremaine, Scott. Princeton University Press, 1987.
Celestial mechanics
Falling | Free-fall time | [
"Physics"
] | 905 | [
"Celestial mechanics",
"Classical mechanics",
"Astrophysics"
] |
1,813,863 | https://en.wikipedia.org/wiki/Trigonelline | Trigonelline is an alkaloid with chemical formula C7H7NO2. It is a zwitterion formed by the methylation of the nitrogen atom of niacin (vitamin B3). Trigonelline is a product of niacin metabolism that is excreted in the urine of mammals.
Trigonelline occurs in many plants. It has been isolated from the Japanese radish (Raphanus sativus cv. Sakurajima Daikon), fenugreek seeds (Trigonella foenum-graecum, hence the name), garden peas, hemp seed, oats, potatoes, Stachys species, dahlia, Strophanthus species, and Dichapetalum cymosum. Trigonelline is also found in coffee. Higher levels of trigonelline are found in arabica coffee.
Holtz, Kutscher, and Theilmann have recorded its presence in a number of animals.
Chemistry
Trigonelline crystallizes as a monohydrate from alcohol in hygroscopic prisms (m.p. 130 °C or 218 °C [dry, dec.]). It is readily soluble in water or warm alcohol, less so in cold alcohol, and slightly so in chloroform or ether. The salts crystallize well, the monohydrochloride, in leaflets, sparingly soluble in dry alcohol. The picrate forms shining prisms (m.p. 198−200 °C) soluble in water but sparingly soluble in dry alcohol or ether. The alkaloid forms several aurichlorides: the normal salt, B•HCl•AuCl3, is precipitated when excess of gold chloride is added to the hydrochloride, and, after crystallization from dilute hydrochloric acid containing some gold chloride, has m.p. 198 °C. Crystallized from water or very dilute hydrochloric acid, slender needles of B4•3 HAuCl4 (m.p. 186 °C) are obtained.
When trigonelline is heated in closed tubes with barium hydroxide at 120 °C, it gives rise to methylamine, and, if treated similarly with hydrochloric acid at 260 °C, it yields chloromethane and nicotinic acid (a form of vitamin B3). Trigonelline is a methyl betaine of nicotinic acid.
References
Alkaloids
Quaternary ammonium compounds
Nicotinates
Coffee chemistry
Zwitterions | Trigonelline | [
"Physics",
"Chemistry"
] | 526 | [
"Biomolecules by chemical classification",
"Matter",
"Coffee chemistry",
"Natural products",
"Organic compounds",
"Zwitterions",
"Ions",
"Food chemistry",
"Alkaloids"
] |
1,814,410 | https://en.wikipedia.org/wiki/Coulomb%20collision | A Coulomb collision is a binary elastic collision between two charged particles interacting through their own electric field. As with any inverse-square law, the resulting trajectories of the colliding particles are hyperbolic Keplerian orbits. This type of collision is common in plasmas where the typical kinetic energy of the particles is too large to produce a significant deviation from the initial trajectories of the colliding particles, and the cumulative effect of many collisions is considered instead. The importance of Coulomb collisions was first pointed out by Lev Landau in 1936, who also derived the corresponding kinetic equation which is known as the Landau kinetic equation.
Simplified mathematical treatment for plasmas
In a plasma, a Coulomb collision rarely results in a large deflection. The cumulative effect of the many small angle collisions, however, is often larger than the effect of the few large angle collisions that occur, so it is instructive to consider the collision dynamics in the limit of small deflections.
We can consider an electron of charge −e and mass m_e passing a stationary ion of charge +Ze and much larger mass at a distance b with a speed v. The perpendicular force is Ze²/(4πε0b²) at the closest approach and the duration of the encounter is about b/v. The product of these expressions divided by the mass is the change in perpendicular velocity: Δv⊥ ≈ Ze²/(4πε0 m_e b v).
Note that the deflection angle is proportional to 1/v². Fast particles are "slippery" and thus dominate many transport processes. The efficiency of velocity-matched interactions is also the reason that fusion products tend to heat the electrons rather than (as would be desirable) the ions. If an electric field is present, the faster electrons feel less drag and become even faster in a "run-away" process.
In passing through a field of ions with density $n_i$, an electron will have many such encounters simultaneously, with various impact parameters (distance to the ion) and directions. The cumulative effect can be described as a diffusion of the perpendicular momentum. The corresponding diffusion constant is found by integrating the squares of the individual changes in momentum. The rate of collisions with impact parameter between $b$ and $b + \mathrm{d}b$ is $n_i v\, 2\pi b\, \mathrm{d}b$, so the diffusion constant is given by
$$D_\perp = \int (m_e \Delta v_\perp)^2\, n_i v\, 2\pi b\, \mathrm{d}b = \frac{n_i Z^2 e^4}{8\pi \epsilon_0^2 v} \int \frac{\mathrm{d}b}{b}.$$
Obviously the integral diverges toward both small and large impact parameters. The divergence at small impact parameters is clearly unphysical since under the assumptions used here, the final perpendicular momentum cannot take on a value higher than the initial momentum. Setting the above estimate for $\Delta v_\perp$ equal to $v$, we find the lower cut-off to the impact parameter to be about
$$b_{\min} \approx \frac{Ze^2}{4\pi\epsilon_0 m_e v^2}.$$
We can also use $\pi b_{\min}^2$ as an estimate of the cross section for large-angle collisions. Under some conditions there is a more stringent lower limit due to quantum mechanics, namely the de Broglie wavelength of the electron, $\lambda_{\mathrm{dB}} = h/(m_e v)$, where $h$ is the Planck constant.
At large impact parameters, the charge of the ion is shielded by the tendency of electrons to cluster in the neighborhood of the ion and other ions to avoid it. The upper cut-off to the impact parameter should thus be approximately equal to the Debye length:
$$b_{\max} \approx \lambda_D = \sqrt{\frac{\epsilon_0 k_B T_e}{n_e e^2}}.$$
Coulomb logarithm
The integral of $1/b$ thus yields the logarithm of the ratio of the upper and lower cut-offs. This number is known as the Coulomb logarithm and is designated by either $\ln\Lambda$ or $\lambda$. It is the factor by which small-angle collisions are more effective than large-angle collisions. The Coulomb logarithm was introduced independently by Lev Landau in 1936 and Subrahmanyan Chandrasekhar in 1943. For many plasmas of interest it takes on values on the order of 10. (For convenient formulas, see pages 34 and 35 of the NRL Plasma formulary.) The limits of the impact parameter integral are not sharp, but are uncertain by factors on the order of unity, leading to theoretical uncertainties on the order of $1/\ln\Lambda$. For this reason it is often justified to simply take the convenient choice $\ln\Lambda = 10$. The analysis here yields the scalings and orders of magnitude.
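As a rough numerical illustration of these estimates (a minimal sketch in Python; the cut-offs are the order-of-magnitude expressions above, and the example plasma parameters are assumptions), the Coulomb logarithm can be evaluated directly:

```python
import math

# SI constants
e = 1.602176634e-19        # elementary charge [C]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
m_e = 9.1093837015e-31     # electron mass [kg]
h = 6.62607015e-34         # Planck constant [J s]

def coulomb_log(n_e, T_e_eV, Z=1):
    """ln(b_max / b_min) for electron-ion collisions (order of magnitude only)."""
    kT = T_e_eV * e                                  # thermal energy [J]
    v = math.sqrt(kT / m_e)                          # typical electron speed
    b_max = math.sqrt(eps0 * kT / (n_e * e**2))      # Debye length
    b_classical = Z * e**2 / (4 * math.pi * eps0 * m_e * v**2)
    b_quantum = h / (m_e * v)                        # de Broglie wavelength
    b_min = max(b_classical, b_quantum)              # more stringent lower cut-off
    return math.log(b_max / b_min)

# e.g. a fusion-grade plasma: n_e = 1e20 m^-3, T_e = 1 keV
print(f"ln Lambda ≈ {coulomb_log(1e20, 1000):.1f}")  # ≈ 13, i.e. of order 10
```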
Mathematical treatment for plasmas accounting for all impact parameters
An N-body treatment accounting for all impact parameters can be performed by taking into account a few simple facts. The two main ones are: (i) the above change in perpendicular velocity is the lowest-order approximation in $1/b$ of a full Rutherford deflection; therefore, the above perturbative theory can also be done by using this full deflection, which makes the calculation correct down to the smallest impact parameters where this full deflection must be used. (ii) The effect of Debye shielding for large impact parameters can be accommodated by using a Debye-shielded Coulomb potential (see Debye length). This cancels the above divergence at large impact parameters. The above Coulomb logarithm turns out to be modified by a constant of order unity.
History
In the 1950s, transport due to collisions in non-magnetized plasmas was simultaneously studied by two groups at University of California, Berkeley's Radiation Laboratory. They quoted each other’s results in their respective papers. The first reference deals with the mean-field part of the interaction by using perturbation theory in electric field amplitude. Within the same approximations, a more elegant derivation of the collisional transport coefficients was provided, by using the Balescu–Lenard equation (see Sec. 8.4 of and Secs. 7.3 and 7.4 of ). The second reference uses the Rutherford picture of two-body collisions. The calculation of the first reference is correct for impact parameters much larger than the interparticle distance, while those of the second one work in the opposite case. Both calculations are extended to the full range of impact parameters by introducing each a single ad hoc cutoff, and not two as in the above simplified mathematical treatment, but the transport coefficients depend only logarithmically thereon; both results agree and yield the above expression for the diffusion constant.
See also
Rutherford scattering
References
External links
Effects of Ionization [ApJ paper] by Gordon Emslie
NRL Plasma Formulary 2013 ed.
Plasma phenomena
Scattering | Coulomb collision | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,211 | [
"Physical phenomena",
"Plasma physics",
"Plasma phenomena",
"Scattering",
"Particle physics",
"Condensed matter physics",
"Nuclear physics"
] |
1,814,652 | https://en.wikipedia.org/wiki/Frobenius%20algebra | In mathematics, especially in the fields of representation theory and module theory, a Frobenius algebra is a finite-dimensional unital associative algebra with a special kind of bilinear form which gives the algebras particularly nice duality theories. Frobenius algebras began to be studied in the 1930s by Richard Brauer and Cecil Nesbitt and were named after Georg Frobenius. Tadashi Nakayama discovered the beginnings of a rich duality theory , . Jean Dieudonné used this to characterize Frobenius algebras . Frobenius algebras were generalized to quasi-Frobenius rings, those Noetherian rings whose right regular representation is injective. In recent times, interest has been renewed in Frobenius algebras due to connections to topological quantum field theory.
Definition
A finite-dimensional, unital, associative algebra A defined over a field k is said to be a Frobenius algebra if A is equipped with a nondegenerate bilinear form $\sigma\colon A \times A \to k$ that satisfies the following equation: $\sigma(a \cdot b, c) = \sigma(a, b \cdot c)$. This bilinear form is called the Frobenius form of the algebra.
Equivalently, one may equip A with a linear functional $\lambda\colon A \to k$ such that the kernel of λ contains no nonzero left ideal of A.
A Frobenius algebra is called symmetric if σ is symmetric, or equivalently λ satisfies $\lambda(a \cdot b) = \lambda(b \cdot a)$.
There is also a different, mostly unrelated notion of the symmetric algebra of a vector space.
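To make the definition concrete, the following minimal sketch numerically checks both the defining identity and the nondegeneracy of σ for the trace form on the 2×2 matrix algebra (the first of the examples below); the random-sample check is illustrative, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(a, b):
    # Frobenius form on the matrix algebra: sigma(a, b) = tr(a b)
    return np.trace(a @ b)

a, b, c = (rng.standard_normal((2, 2)) for _ in range(3))

# The defining identity sigma(a*b, c) = sigma(a, b*c)
assert np.isclose(sigma(a @ b, c), sigma(a, b @ c))

# Nondegeneracy: the Gram matrix of sigma on the standard basis E_ij is invertible
basis = [np.outer(np.eye(2)[i], np.eye(2)[j]) for i in range(2) for j in range(2)]
gram = np.array([[sigma(x, y) for y in basis] for x in basis])
assert abs(np.linalg.det(gram)) > 1e-12
print("sigma(a, b) = tr(a b) is a Frobenius form on the 2x2 matrices")
```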
Nakayama automorphism
For a Frobenius algebra A with σ as above, the automorphism ν of A such that $\sigma(a, b) = \sigma(b, \nu(a))$ is the Nakayama automorphism associated to A and σ.
Examples
Any matrix algebra defined over a field k is a Frobenius algebra with Frobenius form σ(a,b)=tr(a·b) where tr denotes the trace.
Any finite-dimensional unital associative algebra A has a natural homomorphism to its own endomorphism ring End(A). A bilinear form can be defined on A in the sense of the previous example. If this bilinear form is nondegenerate, then it equips A with the structure of a Frobenius algebra.
Every group ring k[G] of a finite group G over a field k is a symmetric Frobenius algebra, with Frobenius form σ(a,b) given by the coefficient of the identity element in a·b.
For a field k, the four-dimensional k-algebra k[x,y]/ (x2, y2) is a Frobenius algebra. This follows from the characterization of commutative local Frobenius rings below, since this ring is a local ring with its maximal ideal generated by x and y, and unique minimal ideal generated by xy.
For a field k, the three-dimensional k-algebra A=k[x,y]/ (x, y)2 is not a Frobenius algebra. The A homomorphism from xA into A induced by x ↦ y cannot be extended to an A homomorphism from A into A, showing that the ring is not self-injective, thus not Frobenius.
Any finite-dimensional Hopf algebra, by a 1969 theorem of Larson-Sweedler on Hopf modules and integrals.
Properties
The direct product and tensor product of Frobenius algebras are Frobenius algebras.
A finite-dimensional commutative local algebra over a field is Frobenius if and only if the right regular module is injective, if and only if the algebra has a unique minimal ideal.
Commutative, local Frobenius algebras are precisely the zero-dimensional local Gorenstein rings containing their residue field and finite-dimensional over it.
Frobenius algebras are quasi-Frobenius rings, and in particular, they are left and right Artinian and left and right self-injective.
For a field k, a finite-dimensional, unital, associative algebra is Frobenius if and only if the injective right A-module Homk(A,k) is isomorphic to the right regular representation of A.
For an infinite field k, a finite-dimensional, unital, associative k-algebra is a Frobenius algebra if it has only finitely many minimal right ideals.
If F is a finite-dimensional extension field of k, then a finite-dimensional F-algebra is naturally a finite-dimensional k-algebra via restriction of scalars, and is a Frobenius F-algebra if and only if it is a Frobenius k-algebra. In other words, the Frobenius property does not depend on the field, as long as the algebra remains a finite-dimensional algebra.
Similarly, if F is a finite-dimensional extension field of k, then every k-algebra A gives rise naturally to an F algebra, F ⊗k A, and A is a Frobenius k-algebra if and only if F ⊗k A is a Frobenius F-algebra.
Amongst those finite-dimensional, unital, associative algebras whose right regular representation is injective, the Frobenius algebras A are precisely those whose simple modules M have the same dimension as their A-duals, HomA(M,A). Amongst these algebras, the A-duals of simple modules are always simple.
A finite-dimensional bi-Frobenius algebra or strict double Frobenius algebra is a k-vector-space A with two multiplication structures as unital Frobenius algebras (A, • , 1) and (A, , ): there must be multiplicative homomorphisms and of A into k with and non-degenerate, and a k-isomorphism S of A onto itself which is an anti-automorphism for both structures, such that This is the case precisely when A is a finite-dimensional Hopf algebra over k and S is its antipode. The group algebra of a finite group gives an example.
Category-theoretical definition
In category theory, the notion of Frobenius object is an abstract definition of a Frobenius algebra in a category. A Frobenius object in a monoidal category $(C, \otimes, 1)$ consists of an object A of C together with four morphisms
$$\mu\colon A \otimes A \to A, \qquad \eta\colon 1 \to A, \qquad \delta\colon A \to A \otimes A, \qquad \varepsilon\colon A \to 1$$
such that
$(A, \mu, \eta)$ is a monoid object in C,
$(A, \delta, \varepsilon)$ is a comonoid object in C,
the diagrams expressing
$$(\mathrm{id}_A \otimes \mu) \circ (\delta \otimes \mathrm{id}_A) = \delta \circ \mu = (\mu \otimes \mathrm{id}_A) \circ (\mathrm{id}_A \otimes \delta)$$
commute (for simplicity the equalities are given here in the case where the monoidal category C is strict) and are known as Frobenius conditions.
More compactly, a Frobenius algebra in C is a so-called Frobenius monoidal functor A:1 → C, where 1 is the category consisting of one object and one arrow.
A Frobenius algebra is called isometric or special if $\mu \circ \delta = \mathrm{id}_A$.
Applications
Frobenius algebras originally were studied as part of an investigation into the representation theory of finite groups, and have contributed to the study of number theory, algebraic geometry, and combinatorics. They have been used to study Hopf algebras, coding theory, and cohomology rings of compact oriented manifolds.
Topological quantum field theories
Recently, it has been seen that they play an important role in the algebraic treatment and axiomatic foundation of topological quantum field theory. A commutative Frobenius algebra determines uniquely (up to isomorphism) a (1+1)-dimensional TQFT. More precisely, the category of commutative Frobenius $K$-algebras is equivalent to the category of symmetric strong monoidal functors from $2\text{-}\mathbf{Cob}$ (the category of 2-dimensional cobordisms between 1-dimensional manifolds) to $\mathbf{Vect}_K$ (the category of vector spaces over $K$).
The correspondence between TQFTs and Frobenius algebras is given as follows:
1-dimensional manifolds are disjoint unions of circles: a TQFT associates a vector space with a circle, and the tensor product of vector spaces with a disjoint union of circles,
a TQFT associates (functorially) to each cobordism between manifolds a map between vector spaces,
the map associated with a pair of pants (a cobordism between 1 circle and 2 circles) gives a product map $V \otimes V \to V$ or a coproduct map $V \to V \otimes V$, depending on how the boundary components are grouped – which is commutative or cocommutative, and
the map associated with a disk gives a counit (trace) or unit (scalars), depending on grouping of boundary.
This relation between Frobenius algebras and (1+1)-dimensional TQFTs can be used to explain Khovanov's categorification of the Jones polynomial.
Generalizations
Frobenius extensions
Let B be a subring sharing the identity element of a unital associative ring A. This is also known as ring extension A | B. Such a ring extension is called Frobenius if
There is a linear mapping E: A → B satisfying the bimodule condition E(bac) = bE(a)c for all b,c ∈ B and a ∈ A.
There are elements in A denoted $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ such that for all a ∈ A we have:
$$\sum_{i=1}^n x_i\, E(y_i a) = a = \sum_{i=1}^n E(a x_i)\, y_i.$$
The map E is sometimes referred to as a Frobenius homomorphism and the elements as dual bases. (As an exercise it is possible to give an equivalent definition of Frobenius extension as a Frobenius algebra-coalgebra object in the category of B-B-bimodules, where the equations just given become the counit equations for the counit E.)
For example, a Frobenius algebra A over a commutative ring K, with associative nondegenerate bilinear form (-,-) and projective K-bases is a Frobenius extension A | K with E(a) = (a,1). Other examples of Frobenius extensions are pairs of group algebras associated to a subgroup of finite index, Hopf subalgebras of a semisimple Hopf algebra, Galois extensions and certain von Neumann algebra subfactors of finite index. Another source of examples of Frobenius extensions (and twisted versions) are certain subalgebra pairs of Frobenius algebras, where the subalgebra is stabilized by the symmetrizing automorphism of the overalgebra.
The details of the group ring example are the following application of elementary notions in group theory. Let G be a group and H a subgroup of finite index n in G; let g1, ..., gn be left coset representatives, so that G is a disjoint union of the cosets g1H, ..., gnH. Over any commutative base ring k define the group algebras A = k[G] and B = k[H], so B is a subalgebra of A. Define a Frobenius homomorphism E: A → B by letting E(h) = h for all h in H, and E(g) = 0 for g not in H: extend this linearly from the basis group elements to all of A, so one obtains the B-B-bimodule projection
$$E\Big(\sum_{g \in G} a_g g\Big) = \sum_{h \in H} a_h h.$$
(The orthonormality condition $E(g_i^{-1} g_j) = \delta_{ij} 1$ follows.) The dual base is given by $x_i = g_i$ and $y_i = g_i^{-1}$, since
$$\sum_{i=1}^n g_i\, E(g_i^{-1} a) = a$$
for all $a \in A$.
The other dual base equation may be derived from the observation that G is also a disjoint union of the right cosets $H g_1^{-1}, \ldots, H g_n^{-1}$.
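The dual-bases identity can be checked mechanically on a small case. The sketch below (an illustrative assumption: G = Z/4Z written additively, H = {0, 2}, coset representatives 0 and 1) verifies a = Σᵢ gᵢ E(gᵢ⁻¹a) for an arbitrary element of k[G]:

```python
# Sketch of the group-ring Frobenius extension for G = Z/4Z, H = {0, 2},
# with left coset representatives g_1 = 0, g_2 = 1 (additive notation).
G, H, reps = [0, 1, 2, 3], {0, 2}, [0, 1]

def E(a):
    """Frobenius homomorphism k[G] -> k[H]: keep coefficients of H only."""
    return {g: c for g, c in a.items() if g in H}

def translate(g, a):
    """Left multiplication by the group element g in k[G]."""
    return {(g + x) % 4: c for x, c in a.items()}

def add(a, b):
    out = dict(a)
    for g, c in b.items():
        out[g] = out.get(g, 0) + c
    return out

# dual-bases identity: a = sum_i g_i * E(g_i^{-1} * a)
a = {0: 2, 1: -1, 2: 5, 3: 7}              # an arbitrary element of k[G]
recovered = {}
for g in reps:
    recovered = add(recovered, translate(g, E(translate(-g % 4, a))))
assert {k: v for k, v in recovered.items() if v} == a
print("a = sum_i g_i E(g_i^{-1} a) holds")
```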
Also Hopf-Galois extensions are Frobenius extensions by a theorem of Kreimer and Takeuchi from 1989. A simple example of this is a finite group G acting by automorphisms on an algebra A with subalgebra of invariants:
By DeMeyer's criterion A is G-Galois over B if there are elements in A satisfying:
whence also
Then A is a Frobenius extension of B with E: A → B defined by
which satisfies
(Furthermore, an example of a separable algebra extension since is a separability element satisfying ea = ae for all a in A as well as . Also an example of a depth two subring (B in A) since
where
for each g in G and a in A.)
Frobenius extensions have a well-developed theory of induced representations investigated in papers by Kasch and Pareigis, Nakayama and Tzuzuku in the 1950s and 1960s. For example, for each B-module M, the induced module A ⊗B M (if M is a left module) and co-induced module HomB(A, M) are naturally isomorphic as A-modules (as an exercise one defines the isomorphism given E and dual bases). The endomorphism ring theorem of Kasch from 1960 states that if A | B is a Frobenius extension, then so is A → End(AB) where the mapping is given by a ↦ λa(x) and λa(x) = ax for each a,x ∈ A. Endomorphism ring theorems and converses were investigated later by Mueller, Morita, Onodera and others.
Frobenius adjunctions
As already hinted at in the previous paragraph, Frobenius extensions have an equivalent categorical formulation.
Namely, given a ring extension , the induced induction functor from the category of, say, left S-modules to the category of left R-modules has both a left and a right adjoint, called co-restriction and restriction, respectively.
The ring extension is then called Frobenius if and only if the left and the right adjoint are naturally isomorphic.
This leads to the obvious abstraction to ordinary category theory:
An adjunction $F \dashv G$ is called a Frobenius adjunction iff also $G \dashv F$.
A functor F is a Frobenius functor if it is part of a Frobenius adjunction, i.e. if it has isomorphic left and right adjoints.
See also
Bialgebra
Frobenius category
Frobenius norm
Frobenius inner product
Hopf algebra
Quasi-Frobenius Lie algebra
Dagger compact category
References
External links
Algebras
Module theory
Monoidal categories | Frobenius algebra | [
"Mathematics"
] | 2,933 | [
"Mathematical structures",
"Algebras",
"Monoidal categories",
"Fields of abstract algebra",
"Category theory",
"Module theory",
"Algebraic structures"
] |
1,814,905 | https://en.wikipedia.org/wiki/Balancing%20lake | A balancing lake (also flood basin ) is a term used in the U.K. describing a retention basin used to control flooding by temporarily storing flood waters. The term balancing pond is also used, though typically for smaller storage facilities for streams and brooks.
In open countryside, heavy rainfall soaks into the ground and is released relatively slowly into watercourses (ditches, streams, rivers). In an urban area, the extent of hard surfaces (roofs, roads) means that the rainfall is dumped immediately into the drainage system. If left unchecked, this has the potential to cause flooding downstream. The function of a balancing lake as part of a sustainable urban drainage scheme is to contain this surge and release it slowly. Failure to do this, especially in older settlements without separate storm sewers and foul sewers, can cause serious pollution as well as flooding.
Engineering
At its simplest, a balancing lake can be constructed by creating a dam across a drain or stream at a convenient valley, with a restricted diameter outlet pipe through the dam. Normal flows pass through the pipe without restriction, while heavy flows create a backup, causing water levels behind the dam to rise. Over the following few days, the level subsides. This is often enough for a small housing development.
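The behaviour of such a dam-and-outlet-pipe arrangement can be approximated by a level-pool routing calculation. The following sketch treats the outlet as a simple orifice; all numbers (lake area, pipe size, discharge coefficient, storm hydrograph) are illustrative assumptions:

```python
import numpy as np

g = 9.81
A_lake = 1.0e4            # lake plan area [m^2] (assumed vertical-sided)
Cd, A_pipe = 0.6, 0.2     # discharge coefficient and pipe area [m^2] (assumed)

def outflow(h):
    # Orifice-type outflow through the restricted outlet pipe
    return Cd * A_pipe * np.sqrt(2 * g * max(h, 0.0))

dt, T = 60.0, 24 * 3600                       # 1-minute steps over one day
t = np.arange(0, T, dt)
inflow = np.where(t < 2 * 3600, 2.0, 0.1)     # 2 h storm surge [m^3/s] (assumed)

h, peak_out = 0.0, 0.0
for I in inflow:
    h += (I - outflow(h)) * dt / A_lake       # explicit Euler mass balance
    peak_out = max(peak_out, outflow(h))

print(f"peak inflow 2.0 m^3/s attenuated to about {peak_out:.2f} m^3/s")
```

The water level rises during the storm and subsides over the following hours, so the downstream peak discharge is a fraction of the peak inflow, which is the intended behaviour of the basin.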
More advanced systems are computer-controlled such that the entire flow of a river can be diverted into a holding lake, perhaps to reduce the impact of a large scale rainstorm in the catchment on communities downriver.
For aesthetic and safety reasons, the system can be designed so that there is a permanent lake. A lake with an equivalent area of 1,000 by 1,000 metres will hold a million cubic metres of water for each metre of depth. Typically such a lake would have an outer earth bank of 1 metre, then a leisure path, then a 10 cm inner bank to the steady-state level.
Other benefits
A permanent lake can provide useful recreation facilities, such as sailing and windsurfing, or habitats for wildlife. Water sports and wildlife habitats do not mix well, though a scheme can have both in linked basins where the recreational basin fills first and the wildlife basin is only used in exceptional conditions.
A recreational use facility can have relatively steep banks (perhaps with a footpath inside the bank next to the permanent lake for aesthetics and safety). The water level can rise substantially without a significant increase in overall area.
A basin that is intended for use by wildlife and for visual amenity needs to be relatively shallow for maximum plant life. It must be designed with the assumption that it will be invoked very rarely, especially during the nesting season.
Case study: Willen Lake, Milton Keynes
Willen Lake () is one of the largest (400,000 m2) purpose-built stormwater balancing lakes in the UK. The lake is designed to take surface run-off from Milton Keynes, the largest of a number designed to do so. The lake has capacity for an additional level increase of 1.3 metres, equivalent to a once in 200 years event. Unlike most of the rest of the UK, the city has separate storm and foul sewers, so sewage pollution is not a significant problem.
Additionally, there are facilities to prevent accidental oil spills and the like from reaching the lake. As well as local storm drains, the lake's primary purpose is to intercept the river Ouzel, a tributary of the river Great Ouse. The catchment area is Oxford Clay that tends to get saturated easily, so field run-off has always been a problem.
The South Basin is designed for recreational use, mainly dinghy sailing and wind surfing, with a circumference path and banks as described above. It is linked to the North (Wildlife) Basin and can be drawn on to manage the level of the latter more finely. The North Basin has a large, undisturbed, central island. The extensive shallows support a good crop of aquatic plants and invertebrates. Very quickly, it became a key wildfowl site.
In winter, it attracts up to 2,500 wild birds, with a wide variety of migrating waders in spring and autumn. Common tern, tufted duck, ringed and little ringed plover, common redshank and northern lapwing. Canada geese have become naturalised and they are permanent residents. Both basins have deep ponds to maintain the fish population during droughts. The lake is managed as a public open space, receiving up to a million visits each year.
See also
Stormwater
Surface runoff
Urban runoff
Sustainable urban drainage systems
Retention basin
References
Further reading
Wetlands, Industry & Wildlife: A manual of principles and practices. (1994) (The Wildfowl & Wetlands Trust, UK). Chapter 15.
Hydrology
Hydraulic engineering
Stormwater management | Balancing lake | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 960 | [
"Hydrology",
"Water treatment",
"Stormwater management",
"Water pollution",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
1,816,139 | https://en.wikipedia.org/wiki/Target%20costing | Target costing is an approach to determine a product's life-cycle cost which should be sufficient to develop specified functionality and quality, while ensuring its desired profit. It involves setting a target cost by subtracting a desired profit margin from a competitive market price. A target cost is the maximum amount of cost that can be incurred on a product, however, the firm can still earn the required profit margin from that product at a particular selling price. Target costing decomposes the target cost from product level to component level. Through this decomposition, target costing spreads the competitive pressure faced by the company to product's designers and suppliers. Target costing consists of cost planning in the design phase of production as well as cost control throughout the resulting product life cycle. The cardinal rule of target costing is to never exceed the target cost. However, the focus of target costing is not to minimize costs, but to achieve a desired level of cost reduction determined by the target costing process.
Definition
Target costing is defined as "a disciplined process for determining and achieving a full-stream cost at which a proposed product with specified functionality, performance, and quality must be produced in order to generate the desired profitability at the product’s anticipated selling price over a specified period of time in the future." This definition encompasses the principal concepts: products should be based on an accurate assessment of the wants and needs of customers in different market segments, and cost targets should be what result after a sustainable profit margin is subtracted from what customers are willing to pay at the time of product introduction and afterwards.
The fundamental objective of target costing is to manage the business to be profitable in a highly competitive marketplace. In effect, target costing is a proactive cost planning, cost management, and cost reduction practice whereby costs are planned and managed out of a product and business early in the design and development cycle, rather than during the later stages of product development and production.
History
Target costing was developed independently in the USA and Japan in different time periods. It was adopted early by American companies to reduce cost and improve productivity, such as the Ford Motor Company from the 1900s and American Motors in the 1950s–1960s. Although the ideas of target costing were also applied by a number of other American companies including Boeing, Caterpillar, and Northern Telecom, few of them applied target costing as comprehensively and intensively as top Japanese companies such as Nissan, Toyota, and Nippondenso. Target costing emerged in Japan from the 1960s to the early 1970s with the particular effort of the Japanese automobile industry, including Toyota and Nissan. It did not receive global attention until the late 1980s and 1990s, when authors such as Monden (1992), Sakurai (1989), Tanaka (1993), and Cooper (1992) described the way that Japanese companies applied target costing to thrive in their business (IMA 1994). With superior implementation systems, Japanese manufacturers were more successful than American companies in developing target costing. The traditional cost-plus pricing strategy had long impeded productivity and profitability; as a new strategy, target costing replaces it by maximizing customer satisfaction at an accepted level of quality and functionality while minimizing costs.
Process of target costing
The process of target costing can be divided into three sections: the first involves market-driven target costing, which focuses on studying market conditions to identify a product's allowable cost in order to meet the company's long-term profit objective at the expected selling price; the second involves performing cost-reduction strategies, relying on the product designers' effort and creativity, to identify the product-level target cost; the third involves component-level target costing, which decomposes the production cost to the functional and component levels to transmit cost responsibility to suppliers.
Market-driven target costing
Market-driven target costing is the first section in the target costing process; it focuses on studying market conditions and determining the company's profit margin in order to identify the allowable cost of a product. Market-driven costing proceeds through five steps: establish the company's long-term sales and profit objectives; develop the mix of products; identify the target selling price for each product; identify the profit margin for each product; and calculate the allowable cost of each product.
Company's long-term sales and profit objectives are developed from an extensive analysis of relevant information relating to customers, market and products. Only realistic plans are accepted to proceed to the next step. Product mix is designed carefully to ensure that it satisfies many customers, but also does not contain too many products to confuse customers. Company may use simulation to explore the impact of overall profit objective to different product mixes and determine the most feasible product mix. Target selling price, target profit margin and allowable cost are identified for each product. Target selling price need to consider to the expected market condition at the time launching the product. Internal factors such as product's functionality and profit objective, and external factors such as company's image or expected price of competitive products will influence target selling price. Company's long-term profit plan and life-cycle cost are considered when determining target profit margin. Firms might set up target profit margin based on either actual profit margin of previous products or target profit margin of product line. Simulation for overall group profitability can help to make sure achieving group target. Subtracting target profit margin from target selling price results in allowable cost for each product. Allowable cost is the amount that can be spent on a product to ensure its profit target is met if it is sold at its target price. It is the signal about the magnitude of cost saving that team need to achieve.
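The arithmetic of the final step is straightforward; the sketch below (prices and margins are illustrative assumptions) decomposes target selling prices into allowable costs for a small product mix:

```python
# Allowable cost = target selling price - target profit margin (illustrative data)
product_mix = {
    # product: (target selling price, target profit margin as fraction of price)
    "basic":   (100.0, 0.15),
    "mid":     (150.0, 0.20),
    "premium": (220.0, 0.25),
}

for name, (price, margin) in product_mix.items():
    allowable = price * (1 - margin)   # cost ceiling signalled to the design team
    print(f"{name:8s} price {price:7.2f}  margin {margin:.0%}  allowable cost {allowable:7.2f}")
```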
Product-level target costing
Following the completion of market-driven costing, the next task of the target costing process is product-level target costing. Product-level target costing concentrates on designing products that satisfy the company's customers at the allowable cost. To achieve this goal, product-level target costing is typically divided into three steps as shown below.
The first step is to set a product-level target cost. Since the allowable cost is simply obtained from external conditions without considering the design capabilities of the company as well as the realistic cost for manufacturing, it may not be always achievable in practice. Thus, it is necessary to adjust the unachievable allowable cost to an achievable target cost that the cost increase should be reduced with great effort. The second step is to discipline this target cost process, including monitoring the relationship between the target cost and the estimated product cost at any point during the design process, applying the cardinal rule so that the total target costs at the component-level does not exceed the target cost of the product, and allowing exceptions for products violating the cardinal rule. For a product exception to the cardinal rule, two analyses are often performed after the launch of the product. One involves reviewing the design process to find out why the target cost was unachieved. The other is an immediate effort to reduce the excessive cost to ensure that the period of violation is as short as possible. Once the target cost-reduction objective is identified, the product-level target costing comes to the final step, finding ways to achieve it. Engineering methods such as value engineering (VE), design for manufacture and assembly (DFMA), and quality function deployment (QFD) are commonly adopted in this step.
Target costing and value engineering
Value engineering (VE), also known as value analysis (VA), plays a crucial role in the target costing process, particularly at the product level and the component level. Among the three aforementioned methods in achieving the target cost, VE is the most critical one because not only does it attempt to reduce costs, but also aims to improve the functionality and quality of products. There are a variety of practical VE strategies, including zero-look, first-look and second-look VE approaches, as well as teardown approaches.
Given the complexity of real-world problems, implementing the target costing process often relies on computer simulation to reproduce stochastic elements. For example, many firms use simulation to study the complex relationship between selling prices and profit margins, the impact of individual product decisions on overall group profitability, the right mix of products to enhance overall profit, or other economic modeling to overcome organizational inertia by getting the most productive reasoning. In addition, simulation helps estimate results rapidly for dynamic process changes.
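A Monte Carlo simulation of this kind can be sketched in a few lines. In the example below, unit costs and demand for each product are drawn from assumed distributions (all figures are illustrative, not taken from any cited firm), and the probability of meeting a group profit objective is estimated:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000   # Monte Carlo trials

# Illustrative product mix: (name, target price, unit-cost mean/sd, demand mean/sd)
products = [
    ("basic",   100.0,  82.0,  5.0, 10_000, 1_500),
    ("premium", 220.0, 160.0, 12.0,  3_000,   600),
]

group_profit = np.zeros(N)
for name, price, c_mu, c_sd, d_mu, d_sd in products:
    unit_cost = rng.normal(c_mu, c_sd, N)            # uncertain achieved cost
    demand = np.clip(rng.normal(d_mu, d_sd, N), 0, None)
    group_profit += (price - unit_cost) * demand

target = 300_000   # assumed group profit objective
print(f"P(group profit >= target) ≈ {np.mean(group_profit >= target):.2f}")
print(f"median group profit ≈ {np.median(group_profit):,.0f}")
```

Exploring how the answer changes as prices, margins, or the mix itself are varied is exactly the use of simulation described above.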
Factors affecting target costing
The factors influencing the target costing process is broadly categorized based on how a company's strategy for a product's quality, functionality and price change over time. However, some factors play a specific role based on what drives a company's approach to target costing.
Factors influencing market-driven costing
Intensity of competition and nature of the customer affect market-driven costing. Competitors introducing similar products has been shown to drive rival companies to expend energy on implementing target costing systems such as in the case of Toyota and Nissan or Apple and Google. The costing process is also affected by the level of customer sophistication, changing requirements and the degree to which their future requirements are known. The automotive and camera industry are prime examples for how customers affect target costing based on their exact requirements.
Factors influencing product-level costing
Product strategy and product characteristics affect product-level target costing. Characteristics of product strategy such as number of products in line, rate of redesign operations and level of innovation are shown to have an effect. Higher number of products has a direct correlation with the benefits of target costing. Frequent redesigns lead to the introduction of new products that have created better benefits to target costing. The value of historical information reduces with greater innovation, thereby, reducing the benefits of product level target costing.
The degree of complexity of the product, level of investments required and the duration of product development process make up the factors that affect the target costing process based on product characteristics. Product viability is determined by the aforementioned factors. In turn, the target costing process is also modified to suit the different degrees of complexity required.
Factors influencing component-level costing
Supplier-Base strategy is the main factor that determines component-level target costing because it is known to play a key role in the details a firm has about its supplier capabilities. There are three characteristics that make up the supplier-base strategy, including the degree of horizontal integration, power over suppliers and nature of supplier relations. Horizontal integration captures the fraction of product costs sourced externally. Cost pressures on suppliers can drive target costing if the buying power of firms is high enough. In turn, this may lead to better benefits. More cooperative supplier relations have been shown to increase mutual benefits in terms of target costs particularly at a component level.
Applications
Aside from the application of target costing in the field of manufacturing, target costing are also widely used in the following areas.
Energy
An Energy Retrofit Loan Analysis Model has been developed using a Monte Carlo (MC) method for target costing in Energy Efficient buildings and construction. MC method has been shown to be effective in determining the impact of financial uncertainties in project performance.
Target Value Design Decision Making Process (TVD-DMP) groups a set of energy efficiency methods at different optimization levels to evaluate costs and uncertainties involved in the energy efficiency process. Some major design parameters are specified using this methods including Facility Operation Schedule, Orientation, Plug load, HVAC and lighting systems.
The entire process consists of three phases: initiation, definition and alignment. Initiation stage involves developing a business case for energy efficiency using target value design (TVD) training, organization and compensation. The definition process involves defining and validating the case by tools such as values analysis and bench marking processes to determine the allowable costs. By setting targets and designing the design process to align with those targets, TVD-DMP has been shown to achieve a high level of collaboration needed for energy efficiency investments. This is done by using risk analysis tools, pull planning and rapid estimating processes.
Healthcare
Target costing and target value design have applications in building healthcare facilities including critical components such as Neonatal Intensive Care Units (NICUs). The process is influenced by unit locations, degree of comfort, number of patients per room, type of supply location and access to nature. According to National Vital Statistics Reports, 12.18% of 2009 births were premature and the cost per infant was $51,600. This led to opportunities for NICUs to implement target value design for deciding whether to build a single-family room or more open-bay NICUs. This was achieved using set-based design analysis which challenges the designer to generate multiple alternatives for the same functionality. Designs are evaluated keeping in mind the requirements of the various stakeholders in the NICU including nurses, doctors, family members and administrators. Unlike linear point-based design, set-based design narrows options to the optimal one by eliminating alternatives simultaneously defined by user constraints.
Construction
Jacomit et al. (2008) noted that about 15% of construction projects in Japan adopted target costing for their cost planning and management. In the U.S., target costing research has been carried out within the framework of lean construction as target value design (TVD) method and have been disseminated widely over construction industry in recent years. Research has proven that if being applied systematically, TVD can deliver a significant improvement in project performance with average reduction of 15% in comparison with market cost.
TVD in construction project considers the final cost of project as a design parameter, similar to the capacity and aesthetics requirements for the project. TVD requires the project team to develop a target cost from the beginning. The project team is expected not to design exceeding the target cost without the owner's approval, and must use different skills to maintain this target cost. In some cases, the cost can increase but the project team must commit to decrease and must try their best to decrease without impacting on other functions of the project.
In Scotland, guidance on the use of pain share/pain gain arrangements and target cost contracting was issued to public sector construction procurers in 2017. This guidance refers to reimbursement to contractors calculated in two stages:
an initial target cost and the percentage basis for the gain share/pain share calculations are agreed, and during the project the contractor is paid on a cost reimbursement basis
on conclusion of the project, the final target cost is compared to the actual cost. The final target cost will reflect the initial target cost and any employer changes and employer risk events which have occurred during the construction period, and changes should be recorded as they occur. If the actual cost is less than the target cost, the contractor is rewarded with a share of the "gain" according to the pre-determined percentage, and if the actual cost is greater than the target cost, the "pain" is likewise shared between the employer and the contractor.
The guidance stresses the importance of "good faith and reasonableness" in calculating the target cost, but also notes the risk which arises when target cost arrangements are used without fully understanding how they are to operate.
See also
Design-to-cost
References
External links
Management Accounting Quarterly 12 Winter 2003
Japanese Target Costing
DRM Associates Target Costing
Implementing Target Costing
Management accounting
Lean manufacturing
Design for X | Target costing | [
"Engineering"
] | 3,060 | [
"Design",
"Lean manufacturing",
"Design for X"
] |
12,674,074 | https://en.wikipedia.org/wiki/Arrhenius%20plot | In chemical kinetics, an Arrhenius plot displays the logarithm of a reaction rate constant, ordinate axis) plotted against reciprocal of the temperature abscissa). Arrhenius plots are often used to analyze the effect of temperature on the rates of chemical reactions. For a single rate-limited thermally activated process, an Arrhenius plot gives a straight line, from which the activation energy and the pre-exponential factor can both be determined.
The Arrhenius equation can be given in the form:
$$k = A e^{-E_a/(RT)} \qquad \text{or} \qquad k = A e^{-E_a/(k_B T)}$$
where:
$k$ = rate constant
$A$ = pre-exponential factor
$E_a$ = (molar) activation energy
$R$ = gas constant ($R = k_B N_A$, where $N_A$ is the Avogadro constant)
$E_a$ = activation energy (for a single reaction event)
$k_B$ = Boltzmann constant
$T$ = absolute temperature
Taking the natural logarithm of the former equation gives:
$$\ln k = \ln A - \frac{E_a}{R} \cdot \frac{1}{T}$$
When plotted in the manner described above, the value of the y-intercept (at $1/T = 0$) will correspond to $\ln A$, and the slope of the line will be equal to $-E_a/R$. The values of y-intercept and slope can be determined from the experimental points using simple linear regression with a spreadsheet.
The pre-exponential factor, , is an empirical constant of proportionality which has been estimated by various theories which take into account factors such as the frequency of collision between reacting particles, their relative orientation, and the entropy of activation.
The expression $e^{-E_a/(RT)}$ represents the fraction of the molecules present in a gas which have energies equal to or in excess of the activation energy at a particular temperature. In almost all practical cases, $E_a \gg RT$, so that this fraction is very small and increases rapidly with $T$. In consequence, the reaction rate constant $k$ increases rapidly with temperature $T$, as shown in the direct plot of $k$ against $T$. (Mathematically, at very high temperatures so that $E_a \ll RT$, $k$ would level off and approach $A$ as a limit, but this case does not occur under practical conditions.)
Worked example
Considering as an example the decomposition of nitrogen dioxide into nitrogen monoxide and molecular oxygen: 2 NO2 → 2 NO + O2
Based on the red "line of best fit" plotted in the graph given above:
Points read from graph: $\ln k = 4.1$ at $1/T = 0.0015\ \mathrm{K}^{-1}$, and $\ln k = 2.2$ at $1/T = 0.00165\ \mathrm{K}^{-1}$
Slope of red line = (4.1 − 2.2) / (0.0015 − 0.00165) = −12,667
Intercept [y-value at x = 0] of red line = 4.1 + (0.0015 × 12667) = 23.1
Inserting these values into the form above:
$$\ln k = 23.1 - 12\,667 \cdot \frac{1}{T}$$
yields:
$$k = e^{23.1}\, e^{-12\,667/T} = 1.08 \times 10^{10}\, e^{-12\,667/T}$$
as shown in the plot at the right.
for:
k in 10−4 cm3 mol−1 s−1
T in K
Substituting for the quotient in the exponent of $e$:
$$-\frac{E_a}{R} = -12\,667\ \mathrm{K}$$
where the approximate value for R is 8.31446 J K−1 mol−1
The activation energy of this reaction from these data is then:
$$E_a = R \times 12\,667\ \mathrm{K} = 105\,300\ \mathrm{J\ mol^{-1}} = 105.3\ \mathrm{kJ\ mol^{-1}}.$$
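The worked example can be reproduced programmatically; the sketch below fits the straight line through the two points read from the graph and recovers the same activation energy and pre-exponential factor:

```python
import numpy as np

R = 8.31446  # gas constant [J K^-1 mol^-1]

# The two points read from the Arrhenius plot above: (1/T [K^-1], ln k)
inv_T = np.array([0.00150, 0.00165])
ln_k = np.array([4.1, 2.2])

slope, intercept = np.polyfit(inv_T, ln_k, 1)   # least-squares straight line
Ea = -slope * R           # activation energy [J/mol], since slope = -Ea/R
A = np.exp(intercept)     # pre-exponential factor, since intercept = ln A

print(f"slope = {slope:.0f} K, ln A = {intercept:.1f}")
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol, A ≈ {A:.2e}")   # ≈ 105.3 kJ/mol, 1.08e10
```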
See also
Arrhenius equation
Eyring equation
Polymer degradation
References
Chemical kinetics
Plots (graphics) | Arrhenius plot | [
"Chemistry"
] | 661 | [
"Chemical reaction engineering",
"Chemical kinetics"
] |
12,677,528 | https://en.wikipedia.org/wiki/Liouville%27s%20equation | For Liouville's equation in dynamical systems, see Liouville's theorem (Hamiltonian).
For Liouville's equation in quantum mechanics, see Von Neumann equation.
For Liouville's equation in Euclidean space, see Liouville–Bratu–Gelfand equation.
In differential geometry, Liouville's equation, named after Joseph Liouville, is the nonlinear partial differential equation satisfied by the conformal factor $f$ of a metric $f^2(\mathrm{d}x^2 + \mathrm{d}y^2)$ on a surface of constant Gaussian curvature $K$:
$$\Delta_0 \log f = -K f^2,$$
where $\Delta_0$ is the flat Laplace operator
$$\Delta_0 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = 4 \frac{\partial}{\partial z} \frac{\partial}{\partial \bar z}.$$
Liouville's equation appears in the study of isothermal coordinates in differential geometry: the independent variables $x, y$ are the coordinates, while $f$ can be described as the conformal factor with respect to the flat metric. Occasionally it is the square $f^2$ that is referred to as the conformal factor, instead of $f$ itself.
Liouville's equation was also taken as an example by David Hilbert in the formulation of his nineteenth problem.
Other common forms of Liouville's equation
By using the change of variables $u = \log f$, another commonly found form of Liouville's equation is obtained:
$$\Delta_0 u = -K e^{2u}.$$
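As a symbolic sanity check of this form (a minimal sketch using SymPy), the conformal factor f = 2/(1 + x² + y²) of the unit sphere obtained from stereographic projection, for which K = 1, satisfies the equation:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = 2 / (1 + x**2 + y**2)    # conformal factor of the unit sphere (K = 1)
u = sp.log(f)

# flat Laplace operator applied to u = log f
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)

K = 1
# e^{2u} = f^2, so check Delta0 u + K f^2 == 0
assert sp.simplify(laplacian + K * f**2) == 0
print("Delta0 u = -K exp(2u) holds for the sphere's conformal factor")
```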
Two other forms of the equation, commonly found in the literature, are obtained by using the slight variant $u = 2\log f$ of the previous change of variables and Wirtinger calculus:
$$\Delta_0 u = -2K e^{u}, \qquad \frac{\partial^2 u}{\partial z\, \partial \bar z} = -\frac{K}{2} e^{u}.$$
Note that it is exactly in the first one of the preceding two forms that Liouville's equation was cited by David Hilbert in the formulation of his nineteenth problem.
A formulation using the Laplace–Beltrami operator
In a more invariant fashion, the equation can be written in terms of the intrinsic Laplace–Beltrami operator
$$\Delta_{\mathrm{LB}} = \frac{1}{f^2} \Delta_0$$
as follows:
$$\Delta_{\mathrm{LB}} \log f = -K.$$
Properties
Relation to Gauss–Codazzi equations
Liouville's equation is equivalent to the Gauss–Codazzi equations for minimal immersions into the 3-space, when the metric is written in isothermal coordinates such that the Hopf differential is .
General solution of the equation
In a simply connected domain $\Omega$, the general solution of Liouville's equation can be found by using Wirtinger calculus. For $K > 0$ its form is given by
$$f(z) = \frac{2}{\sqrt{K}} \cdot \frac{\left|\frac{\mathrm{d}w}{\mathrm{d}z}\right|}{1 + |w(z)|^2},$$
where $w$ is any meromorphic function such that
$\frac{\mathrm{d}w}{\mathrm{d}z}(z) \neq 0$ for every $z \in \Omega$.
$w$ has at most simple poles in $\Omega$.
Application
Liouville's equation can be used to prove the following classification results for surfaces:
A surface in the Euclidean 3-space with metric $f^2(\mathrm{d}x^2 + \mathrm{d}y^2)$, and with constant scalar curvature $K$, is locally isometric to:
the sphere if $K > 0$;
the Euclidean plane if $K = 0$;
the Lobachevskian plane if $K < 0$.
See also
Liouville field theory, a two-dimensional conformal field theory whose classical equation of motion is a generalization of Liouville's equation
Notes
Citations
Works cited
Hilbert, David, "Mathematische Probleme", translated into English by Mary Frances Winston Newson as "Mathematical Problems".
Differential equations
Differential geometry | Liouville's equation | [
"Mathematics"
] | 556 | [
"Mathematical objects",
"Differential equations",
"Equations"
] |
17,218,394 | https://en.wikipedia.org/wiki/Sulston%20score | The Sulston score is an equation used in DNA mapping to numerically assess the likelihood that a given "fingerprint" similarity between two DNA clones is merely a result of chance. Used as such, it is a test of statistical significance. That is, low values imply that similarity is significant, suggesting that two DNA clones overlap one another and that the given similarity is not just a chance event. The name is an eponym that refers to John Sulston by virtue of his being the lead author of the paper that first proposed the equation's use.
The overlap problem in mapping
Each clone in a DNA mapping project has a "fingerprint", i.e. a set of DNA fragment lengths inferred from (1) enzymatically digesting the clone, (2) separating these fragments on a gel, and (3) estimating their lengths based on gel location. For each pairwise clone comparison, one can establish how many lengths from each set match-up. Cases having at least 1 match indicate that the clones might overlap because matches may represent the same DNA. However, the underlying sequences for each match are not known. Consequently, two fragments whose lengths match may still represent different sequences. In other words, matches do not conclusively indicate overlaps. The problem is instead one of using matches to probabilistically classify overlap status.
Mathematical scores in overlap assessment
Biologists have used a variety of means (often in combination) to discern clone overlaps in DNA mapping projects. While many are biological, i.e. looking for shared markers, others are basically mathematical, usually adopting probabilistic and/or statistical approaches.
Sulston score exposition
The Sulston score is rooted in the concepts of Bernoulli and binomial processes, as follows. Consider two clones, $a$ and $b$, having $m$ and $n$ measured fragment lengths, respectively, where $m \ge n$. That is, clone $a$ has at least as many fragments as clone $b$, but usually more. The Sulston score is the probability that at least $h$ fragment lengths on clone $b$ will be matched by any combination of lengths on $a$. Intuitively, we see that, at most, there can be $n$ matches. Thus, for a given comparison between two clones, one can measure the statistical significance of a match of $h$ fragments, i.e. how likely it is that this match occurred simply as a result of random chance. Very low values would indicate a significant match that is highly unlikely to have arisen by pure chance, while higher values would suggest that the given match could be just a coincidence.
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of the Sulston Score
|-
|One of the basic assumptions is that fragments are uniformly distributed on a gel, i.e. a fragment has an equal likelihood of appearing anywhere on the gel. Since gel position is an indicator of fragment length, this assumption is equivalent to presuming that the fragment lengths are uniformly distributed. The measured location of any fragment , has an associated error tolerance of , so that its true location is only known to lie within the segment .
In what follows, let us refer to individual fragment lengths simply as lengths. Consider a specific length $j$ on clone $b$ and a specific length $i$ on clone $a$. These two lengths are arbitrarily selected from their respective sets. We assume that the gel location of fragment $j$ has been determined and we want
the probability of the event that the location of fragment $i$ will match that of $j$. Geometrically, $i$ will be declared to match if it falls inside the window of size $2t$ around $j$. Since fragment $i$ could occur anywhere in the gel of length $L$, we have the match probability $2t/L$. The probability that $i$ does not match is simply the complement, i.e. $1 - 2t/L$, since it must either match or not match.
Now, let us expand this to compute the probability that no length on clone $a$ matches the single particular length $j$ on clone $b$. This is simply the intersection of all individual trials in which no match occurs. This can be restated verbally as: length 1 on clone $a$ does not match length $j$ on clone $b$ and length 2 does not match length $j$ and length 3 does not match, etc. Since each of these trials is assumed to be independent, the probability is simply
$$\left(1 - \frac{2t}{L}\right)^m.$$
Of course, the actual event of interest is the complement: i.e. there is not "no matches". In other words, the probability of one or more matches is
$$p = 1 - \left(1 - \frac{2t}{L}\right)^m.$$
Formally, $p$ is the probability that at least one band on clone $a$ matches band $j$ on clone $b$.
This event is taken as a Bernoulli trial having a "success" (matching) probability of $p$ for band $j$. However, we want to describe the process over all the bands on clone $b$. Since $p$ is constant, the number of matches is distributed binomially. Given $h$ observed matches, the Sulston score $S$ is simply the probability of obtaining at least $h$ matches by chance according to
$$S = \sum_{j=h}^{n} \binom{n}{j}\, p^j (1-p)^{n-j},$$
where $\binom{n}{j}$ are binomial coefficients.
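A direct implementation of the score as derived above is straightforward; in the sketch below the function arguments follow the derivation's notation, and the example band counts, tolerance, and gel length are illustrative assumptions:

```python
from math import comb

def sulston_score(m, n, h, t, L):
    """P(at least h of the n bands on the smaller clone match by chance).

    m : number of fragment lengths on the larger clone
    n : number of fragment lengths on the smaller clone (n <= m)
    h : observed number of matches
    t : gel measurement tolerance
    L : gel length
    """
    # P(a particular band on the smaller clone is matched by at least one of m bands)
    p = 1 - (1 - 2 * t / L) ** m
    # binomial tail: at least h successes out of n trials
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(h, n + 1))

# e.g. clones with 30 and 25 bands sharing 20 matches (illustrative numbers)
print(f"S = {sulston_score(30, 25, 20, t=0.001, L=1.0):.3e}")
```

The tiny value printed for this example is the sense in which low scores flag significant, probably real overlaps.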
Mathematical refinement
In a 2005 paper, Michael Wendl gave an example showing that the assumption of independent trials is not valid. So, although the traditional Sulston score does indeed represent a probability distribution, it is not actually the distribution characteristic of the fingerprint problem. Wendl went on to give the general solution for this problem in terms of the Bell polynomials, showing the traditional score overpredicts P-values by orders of magnitude. (P-values are very small in this problem, so we are talking, for example, about probabilities on the order of 10−14 versus 10−12, the latter Sulston value being 2 orders of magnitude too high.) This solution provides a basis for determining when a problem has sufficient information content to be treated by the probabilistic approach and is also a general solution to the birthday problem of 2 types.
A disadvantage of the exact solution is that its evaluation is computationally intensive and, in fact, is not feasible for comparing large clones. Some fast approximations for this problem have been proposed.
References
See also
FPC: a widely used fingerprint mapping program that utilizes the Sulston Score
Bioinformatics
Mathematical and theoretical biology | Sulston score | [
"Mathematics",
"Engineering",
"Biology"
] | 1,241 | [
"Bioinformatics",
"Applied mathematics",
"Biological engineering",
"Mathematical and theoretical biology"
] |
20,592,740 | https://en.wikipedia.org/wiki/DO-160 | DO-160, Environmental Conditions and Test Procedures for Airborne Equipment is a standard for the environmental testing of avionics hardware. It is published by the Radio Technical Commission for Aeronautics (RTCA) and supersedes DO-138.
Outline of contents
Introduction
The DO-160 document was first published on February 28, 1975 to specify test conditions for the design of avionics electronic hardware in airborne systems. Since then the standard has undergone subsequent revisions up through Revision G.
Purpose
This document outlines a set of minimal standard environmental test conditions (categories) and corresponding test procedures for airborne equipment for the entire spectrum of aircraft from light general aviation aircraft and helicopters through the jumbo jets and supersonic transport categories of aircraft. The purpose of these tests is to provide a controlled (laboratory) means of assuring the performance characteristics of airborne equipment in environmental conditions similar of those which may be encountered in airborne operation of the equipment.
The standard environmental test conditions and test procedures contained within the standard, may be used in conjunction with applicable equipment performance standards, as a minimum specification under environmental conditions, which can ensure an adequate degree of confidence in performance during use aboard an air vehicle.
The standard includes sections covering the various environmental conditions and their corresponding test procedures.
The user of the standard must also decide, independently of the standard, how much additional test margin to allow for uncertainty of test conditions and measurement in each test.
Version history
RTCA/DO-160, RTCA, INC., February 28, 1975
RTCA/DO-160 A, RTCA, INC., January 25, 1980
RTCA/DO-160 B, RTCA, INC., July 20, 1984
RTCA/DO-160 C, RTCA, INC., December 4, 1989
RTCA/DO-160 C, Change 1, RTCA, INC., September 27, 1990
RTCA/DO-160 C, Change 2, RTCA, INC., June 19, 1992
RTCA/DO-160 C, Change 3, RTCA, INC., May 13, 1993
RTCA/DO-160 D, RTCA, INC., July 29, 1997
RTCA/DO-160 D Change 1, RTCA, INC., December 14, 2000
RTCA/DO-160 D Change 2, RTCA, INC., June 12, 2001
RTCA/DO-160 D Change 3, RTCA, INC., December 5, 2002
RTCA/DO-160 E, RTCA, INC., December 9, 2004
RTCA/DO-160 F, RTCA, INC., December 6, 2007
RTCA/DO-160 G, RTCA, INC., December 8, 2010
RTCA/DO-160 G Change 1, RTCA, INC., December 16, 2014
Resources
FAR Part 23/25 §1301/§1309
FAR Part 27/29
AC 23/25.1309
RTCA DO-160
Bibliography
Aircraft Systems: Mechanical, Electrical and Avionics Subsystems Integration (Aerospace Series (PEP)) (Jun 3, 2008) by Ian Moir and Allan Seabridge
RTCA List of Available Documents, RTCA Inc., https://web.archive.org/web/20130512172348/http://www.rtca.org/Files/ListofAvailableDocsMarch2013.pdf (March 2013)
Avionics: Development and Implementation (Electrical Engineering Handbook) by Cary R. Spitzer (Hardcover - Dec 15, 2006)
Avionics Navigation Systems (April 1997) by Myron Kayton and Walter R. Fried
http://www.rvs.uni-bielefeld.de/publications/Incidents/DOCS/Research/Rvs/Article/EMI.html
The European Organization for Civil Aviation Equipment EUROCAE ED-14
Certification in Europe
Replace FAA with EASA, JAA or CAA
Replace FAR with JAR
Replace AC with AMJ
See also
Hazard analysis
Reliability (semiconductor)
MIL-STD-810
ARP4754
ARP4761
RTCA/DO-254
RTCA/DO-178C
External links
Electronic design
RTCA standards
Avionics
Environmental testing
Aviation standards | DO-160 | [
"Technology",
"Engineering"
] | 855 | [
"Reliability engineering",
"Electronic design",
"Avionics",
"Electronic engineering",
"Aircraft instruments",
"Environmental testing",
"Design"
] |
20,593,510 | https://en.wikipedia.org/wiki/CP2K | CP2K is a freely available (GPL) quantum chemistry and solid state physics program package, written in Fortran 2008, to perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. It provides a general framework for different methods: density functional theory (DFT) using a mixed Gaussian and plane waves approach (GPW) via LDA, GGA, MP2, or RPA levels of theory, classical pair and many-body potentials, semi-empirical (AM1, PM3, MNDO, MNDOd, PM6) and tight-binding Hamiltonians, as well as Quantum Mechanics/Molecular Mechanics (QM/MM) hybrid schemes relying on the Gaussian Expansion of the Electrostatic Potential (GEEP). The Gaussian and Augmented Plane Waves method (GAPW) as an extension of the GPW method allows for all-electron calculations. CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, and transition state optimization using NEB or dimer method.
CP2K provides editor plugins for Vim and Emacs syntax highlighting, along with other tools for input generation and output processing.
The latest version 2024.2 was released on 6 August 2024.
See also
Car–Parrinello molecular dynamics
Computational chemistry
Molecular dynamics
Monte Carlo algorithm
Energy minimization
Quantum chemistry
Quantum chemistry computer programs
Ab initio quantum chemistry methods
Møller–Plesset perturbation theory
Hartree–Fock method
Random phase approximation
Density functional theory
Harris functional
Tight binding
Semi-empirical quantum chemistry method
Key Papers
External links
Official CP2K Website
Users' Forum
1st CP2K Tutorial: Enabling the power of imagination in MD Simulations
2nd CP2K Tutorial: Enabling the power of imagination in MD Simulations
CP2K User Tutorial: "Computational Spectroscopy"
Ascalaph, a 3rd party graphical shell for CP2K and other quantum chemistry software
References
Density functional theory software
Computational chemistry software
Molecular dynamics software
Monte Carlo software
Molecular modelling software
Monte Carlo molecular modelling software
Monte Carlo particle physics software
Chemistry software for Linux
Free chemistry software
Free physics software
Physics software
Scientific simulation software
Simulation software
Free science software
Science software for Linux
Science software
Free software programmed in Fortran | CP2K | [
"Physics",
"Chemistry"
] | 482 | [
"Molecular dynamics software",
"Molecular modelling software",
"Free chemistry software",
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Molecular modelling",
"Computational chemistry",
"Molecular dynamics",
"Density functional theory software",
"Chemistry so... |
20,593,858 | https://en.wikipedia.org/wiki/True%20vapor%20pressure | True vapor pressure (TVP) is a common measure of the volatility of petroleum distillate fuels. It is defined as the
equilibrium partial pressure exerted by a volatile organic liquid as a function of temperature as determined by the test method ASTM D 2879.
The true vapor pressure (TVP) at 100 °F differs slightly from the Reid vapor pressure (RVP) (per definition also at 100 °F), as it excludes dissolved fixed gases such as air. Conversions between the two can be found in AP 42, Fifth Edition, Volume I Chapter 7: Liquid Storage Tanks (p 7.1-54 and onwards)
References
External links
ASTM D2879 - 97(2007) Standard Test Method for Vapor Pressure-Temperature Relationship and Initial Decomposition Temperature of Liquids by Isoteniscope
USA's Environmental Protection Agency (EPA) publication AP-42, Compilation of Air Pollutant Emissions. Chapter 7
Chemical properties
Physical chemistry
Engineering thermodynamics
Natural gas
Oil refining
Petroleum production | True vapor pressure | [
"Physics",
"Chemistry",
"Engineering"
] | 206 | [
"Applied and interdisciplinary physics",
"Engineering thermodynamics",
"Petroleum technology",
"Thermodynamics",
"Oil refining",
"nan",
"Mechanical engineering",
"Physical chemistry",
"Physical chemistry stubs"
] |
20,595,692 | https://en.wikipedia.org/wiki/Sales%20process%20engineering | Sales process engineering is the systematic design of sales processes done in order to make sales more effective and efficient.
It can be applied in functions including sales, marketing, and customer service.
History
As early as 1900–1915, advocates of scientific management, such as Frederick Winslow Taylor and Harlow Stafford Person, recognized that their ideas could be applied not only to manual labour and skilled trades but also to management, professions, and sales. Person promoted an early form of sales process engineering. At the time, postwar senses of the terms sales process engineering and sales engineering did not yet exist; Person called his efforts "sales engineering".
Corporations in the 1920s through the 1960s sought to apply analysis and synthesis to improve business functions. After the publication of the paper "If Japan Can... Why Can't We?", the 1980s and 1990s saw the emergence of a variety of approaches, such as business process reengineering, Total Quality Management, Six Sigma, and Lean Manufacturing.
James Cortada was one of IBM's management consultants on market-driven quality. His book TQM for Sales and Marketing Management was the first attempt to explain the theory of TQM in a sales and marketing context. Todd Youngblood, another ex-IBMer, in his book The Dolphin and the Cow (2004) emphasized "three core principles": continuous improvement of the sales process, metrics to quantitatively judge the rate and degree of improvement, and a well-defined sales process. Meanwhile, another executive from IBM, Daniel Stowell, had participated in IBM's project known as the "Alternate Channels Marketing Test." The idea was to incorporate direct response marketing techniques to accomplish the job of direct salespeople, and the initiative was quite successful.
Paul Selden's "Sales Process Engineering, A Personal Workshop" was a further attempt to demonstrate the applicability of the theory and tools of quality management to the sales function.
Rationale
The sales decision process is a formalized sales process companies use to manage the decision process behind a sale. SDP “is a defined series of steps you follow as you guide prospects from initial contact to purchase.”
Reasons for having a well-thought-out sales process include seller and buyer risk management, standardized customer interaction during sales, and scalable revenue generation. Approaching the subject from a "process" point of view offers an opportunity to use design and improvement tools from other disciplines and process-oriented industries.
See also
Sales management
Theory of constraints
References
Bibliography
Customer experience
Process engineering
Sales
Personal selling
Promotion and marketing communications | Sales process engineering | [
"Engineering"
] | 512 | [
"Process engineering",
"Mechanical engineering by discipline"
] |
20,596,619 | https://en.wikipedia.org/wiki/Cell%20and%20Tissue%20Research | Cell and Tissue Research presents regular articles and reviews in the areas of molecular, cell, stem cell biology and tissue engineering. In particular, the journal provides a forum for publishing data that analyze the supracellular, integrative actions of gene products and their impact on the formation of tissue structure and function. Articles emphasize structure–function relationships as revealed by recombinant molecular technologies. The coordinating editor of the journal is Klaus Unsicker.
Subjects covered in journal
Areas of research frequently published in Cell and Tissue Research include: neurobiology, neuroendocrinology, endocrinology, reproductive biology, skeletal and immune systems, and development.
Editors
The coordinating editor of the journal is Klaus Unsicker, of the University of Heidelberg. Section editors are K. Unsicker, neurobiology/sense organs/endocrinology; M. Furutani-Seiki, development/growth/regeneration; W.W. Franke, molecular/cell biology; Andreas Oksche and Horst-Werner Korf, neuroendocrinology; T. Pihlajaniemi, extracellular matrix; D. Furst, muscle; Joseph Bonventre, kidney and related subjects; P. Sutovsky, reproductive biology; B. Singh, immunology/hematology; and V. Hartenstein, invertebrates.
See also
Autophagy (journal)
Cell Biology International
Cell Cycle (journal)
References
External links
Cell & Tissue Research
Springer Science+Business Media
SpringerLink.com
English-language journals
Molecular and cellular biology journals
Academic journals established in 1924 | Cell and Tissue Research | [
"Chemistry"
] | 326 | [
"Molecular and cellular biology journals",
"Molecular biology"
] |
20,596,852 | https://en.wikipedia.org/wiki/KFUPM%20Program%20of%20Industrial%20and%20Systems%20Engineering | The Industrial & Systems Engineering Program offers a Bachelor of Science degree in industrial engineering at the King Fahd University of Petroleum & Minerals (KFUPM) in the Kingdom of Saudi Arabia. With a total of 133 credit hours, the program covers the major areas of industrial engineering, such as operations research, production planning, inventory control, methods engineering, quality control, facility location, manufacturing, and facility layout.
History
The Industrial & Systems Engineering (ISE) program in the Systems Engineering Department was first introduced in 1984 and was revised in 1996 following recommendations made by the Accreditation Board for Engineering and Technology (ABET) after its first visit in 1993. The 1996 revision reduced the number of credit hours of the Bachelor of Science (B.Sc.) degree from 141 to 133. The program received an ABET accreditation extension in 2010.
Program courses
The ISE program has a total of 50 credit hours of required ISE courses, covering the following topics:
Introduction to I&SE
Probability & Statistics
Regression for Industrial Engineering
Linear Control Systems
Numerical Methods
Operations Research I
Statistical Quality Control
Principles of Industrial Costing
Engineering Economics
Manufacturing Technology
Work and Process Improvement
Fundamental of Database Systems
Seminar
Industrial Engineering Design
Production Systems
Stochastic Systems Simulation
Operations Research II
Facility Layout and Location
Senior Design
External links
Codes of courses and description
Department website
Industrial engineering | KFUPM Program of Industrial and Systems Engineering | [
"Engineering"
] | 268 | [
"Industrial engineering"
] |
20,598,041 | https://en.wikipedia.org/wiki/Largest%20organisms | This article lists the largest organisms for various types of life and mostly considers extant species, which found on Earth can be determined according to various aspects of an organism's size, such as: mass, volume, area, length, height, or even genome size. Some organisms group together to form a superorganism (such as ants or bees), but such are not classed as single large organisms. The Great Barrier Reef is the world's largest structure composed of living entities, stretching but contains many organisms of many types of species.
When considering singular entities, the largest organisms are clonal colonies which can spread over large areas. Pando, a clonal colony of the quaking aspen tree, is widely considered to be the largest such organism by mass. Even if such colonies are excluded, trees retain their dominance of this listing, with the giant sequoia being the most massive tree. In 2006, a huge clonal colony of the seagrass Posidonia oceanica was discovered south of the island of Ibiza. At across, and estimated at 100,000 years old, it may be one of the largest and oldest clonal colonies on Earth.
Among animals, the largest species are all marine mammals, specifically whales. The blue whale is believed to be the largest animal to have ever lived. The living land animal classification is also dominated by mammals, with the African bush elephant being the largest of these.
Plants
The largest single-stem tree by wood volume and mass is the giant sequoia (Sequoiadendron giganteum), native to Sierra Nevada and California; it typically grows to a height of and in diameter.
The largest organism in the world, according to mass, is the aspen tree whose colonies of clones can grow up to in size. The largest such colony is Pando, in the Fishlake National Forest in Utah.
A form of flowering plant that far exceeds Pando as the largest organism on Earth in area and potentially also mass, is the giant marine plant, Posidonia australis, living in Shark Bay, Australia. Its length is about and it covers an area of . It is also among the oldest known clonal plants.
Another giant marine plant of the genus Posidonia, Posidonia oceanica discovered in the Mediterranean near the Balearic Islands, Spain may be the oldest living organism in the world, with an estimated age of 100,000 years.
The largest individual flower in the world is Rafflesia arnoldii, while the flowering plant with the largest unbranched inflorescence in the world is Amorphophallus titanum. Both are native to Sumatra in Indonesia.
Green algae
Green algae are photosynthetic unicellular and multicellular green plants that are related to land plants. The thallus of the unicellular mermaid's wineglass, Acetabularia, can grow to several inches (perhaps 0.1 to 0.2 m) in length. The fronds of the similarly unicellular, and invasive Caulerpa taxifolia can grow up to a foot (0.3 m) long.
Animals
Fungi
The largest living fungus may be a honey fungus of the species Armillaria ostoyae.
A mushroom of this type in the Malheur National Forest in the Blue Mountains of eastern Oregon, U.S. was found to be the largest fungal colony in the world, spanning a vast area. This organism is estimated to be 2,400 years old. The fungus was written about in the April 2003 issue of the Canadian Journal of Forest Research. If this colony is considered a single organism, then it is the largest known organism in the world by area, and rivals the aspen grove "Pando" as the known organism with the highest living biomass. It is not known, however, whether it is a single organism with all parts of the mycelium connected. The Oregon "humongous fungus" possibly weighs as much as 35,000 tons, which would make it the world's most massive living organism.
A spatial genetic analysis estimated that a specimen of Armillaria ostoyae growing in northern Michigan, United States weighs 440 tons (4 × 10⁵ kg).
In Armillaria ostoyae, each individual mushroom (the fruiting body, similar to a flower on a plant) has only a stipe, and a pileus up to across. There are many other fungi which produce a larger individual size mushroom. The largest known fruiting body of a fungus is a specimen of Phellinus ellipsoideus (formerly Fomitiporia ellipsoidea) found on Hainan Island. The fruiting body masses up to .
Until P. ellipsoideus replaced it, the largest individual fruit body came from Rigidoporus ulmarius. R. ulmarius can grow up to , tall, across, and has a circumference of up to .
Lichen
Umbilicaria mammulata is among the largest lichens in the world. The thallus of U. mammulata is usually in diameter, but specimens have been known to reach in the Smoky Mountains of Tennessee.
The longest lichen is Usnea longissima, which may grow to exceed in length.
Protists
(Note: the group Protista is not used in current taxonomy.)
Amoebozoans (Amoebozoa)
Among the organisms that are not multicellular, the largest are the slime molds, such as Physarum polycephalum, some of which can reach a diameter over . These organisms are unicellular, but they are multinucleate.
Euglenozoans (Euglenozoa)
Some euglenophytes, such as certain species of Euglena, reach lengths of 400 μm.
Rhizarians (Rhizaria)
The largest species traditionally considered protozoa are giant amoeboids like foraminiferans. One such species, the xenophyophore Syringammina fragilissima, can attain a size of .
Alveolates (Alveolata)
The largest ciliates, such as Spirostomum, can attain a length over .
Stramenopiles (Stramenopila)
The largest stramenopiles are giant kelp from the northwestern Pacific. The floating stem of Macrocystis pyrifera can grow to a height of over .
Macrocystis also qualifies as the largest brown alga, the largest chromist, and the largest protist generally.
Bacteria
The largest known species of bacterium is named Thiomargarita magnifica, which grows to in length, making it visible to the naked eye and also about five thousand times the size of more typical bacteria. BBC News described it as possessing the "size and shape of a human eyelash." Science published a new paper on the bacterium on June 23, 2022. According to a study coauthored by Jean-Marie Volland, a marine biologist and scientist at California's Laboratory for Research in Complex Systems, and an affiliate at the US Department of Energy Joint Genome Institute, T. magnifica can grow up to 2 centimeters long.
Cyanobacteria
One of the largest "blue green algae" is Lyngbya, whose filamentous cells can be 50 μm wide.
Viruses
The largest virus on record is Megaklothovirus horridgei, with a length of 4 micrometres, comparable to the typical size of a bacterium and large enough to be seen in light microscopes. It was discovered in 2018 (having been mistaken for bristles beforehand) on an arrow worm of the genus Spadella. Prior to this discovery, the largest known viruses belonged to the peculiar genus Pandoravirus, whose members measure approximately 1 micrometre and whose genomes contain 1,900,000 to 2,500,000 base pairs of DNA.
Pandoravirus specifically infects amoebas, whereas Megaklothovirus infects Spadella arrow worms.
See also
Charismatic megafauna
Deep-sea gigantism
Genome size
Island gigantism
Largest body part
Largest prehistoric animals
List of longest-living organisms
List of heaviest land mammals
List of world records held by plants
List of largest inflorescences
Lists of organisms by population
List of longest vines
Megafauna
Smallest organisms
Superorganism
References
Notes
Citations
External links
10 of the largest living things on the planet Melissa Breyer. TreeHugger April 28, 2015
Articles containing video clips | Largest organisms | [
"Biology"
] | 1,765 | [
"Largest organisms",
"Organism size"
] |
20,598,535 | https://en.wikipedia.org/wiki/AIDS%20Vaccine%20Advocacy%20Coalition | AVAC is a New York City-based international non-profit community- and consumer-based organization working to accelerate ethical development and delivery of AIDS vaccines and other HIV prevention options to populations throughout the world. Founded in 1995, AVAC uses public education, policy analysis, advocacy and Community Mobilization to accelerate a comprehensive response to the epidemic.
AVAC's goal is to involve affected populations in work to promote the ethical introduction and distribution of life-saving HIV/AIDS technologies such as vaccines and microbicides. AVAC works to provide independent analysis, policy advocacy, public education and mobilisation to enhance AIDS vaccine research and development.
In 2023, AVAC's country director in Malawi, Ulanda Mtamba, was recognised as one of the BBC 100 Women.
Partners
Funders include The Bill and Melinda Gates Foundation, the Ford Foundation, the International AIDS Vaccine Initiative, Until There's a Cure Foundation, Broadway Cares/Equity Fights AIDS, the Gill Foundation, and the Overbrook Foundation.
References
Vaccination in the United States
Non-profit organizations based in New York City
HIV vaccine research
Organizations established in 1995
HIV/AIDS organizations in the United States
Vaccination-related organizations | AIDS Vaccine Advocacy Coalition | [
"Chemistry"
] | 238 | [
"HIV vaccine research",
"Drug discovery"
] |
20,598,932 | https://en.wikipedia.org/wiki/Hilbert%20space | In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that induces a distance function for which the space is a complete metric space. A Hilbert space is a special case of a Banach space.
Hilbert spaces were studied beginning in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory (which forms the mathematical underpinning of thermodynamics). John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean vector spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions.
Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a linear subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to an orthonormal basis, in analogy with Cartesian coordinates in classical geometry. When this basis is countably infinite, it allows identifying the Hilbert space with the space of the infinite sequences that are square-summable. The latter space is often in the older literature referred to as the Hilbert space.
Definition and illustration
Motivating example: Euclidean vector space
One of the most familiar examples of a Hilbert space is the Euclidean vector space consisting of three-dimensional vectors, denoted by \mathbb{R}^3, and equipped with the dot product. The dot product takes two vectors x and y, and produces a real number x \cdot y. If x and y are represented in Cartesian coordinates, then the dot product is defined by
\[ (x_1, x_2, x_3) \cdot (y_1, y_2, y_3) = x_1 y_1 + x_2 y_2 + x_3 y_3 . \]
The dot product satisfies the properties
It is symmetric in x and y: x \cdot y = y \cdot x.
It is linear in its first argument: (a u + b v) \cdot y = a (u \cdot y) + b (v \cdot y) for any scalars a, b, and vectors u, v, and y.
It is positive definite: for all vectors x, x \cdot x \ge 0, with equality if and only if x = 0.
An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as a (real) inner product. A vector space equipped with such an inner product is known as a (real) inner product space. Every finite-dimensional inner product space is also a Hilbert space. The basic feature of the dot product that connects it with Euclidean geometry is that it is related to both the length (or norm) of a vector, denoted \|x\|, and to the angle \theta between two vectors x and y by means of the formula
\[ x \cdot y = \|x\| \, \|y\| \, \cos\theta . \]
Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist. A mathematical series
\[ \sum_{n=0}^{\infty} x_n \]
consisting of vectors in \mathbb{R}^3 is absolutely convergent provided that the sum of the lengths converges as an ordinary series of real numbers:
\[ \sum_{k=0}^{\infty} \|x_k\| < \infty . \]
Just as with a series of scalars, a series of vectors that converges absolutely also converges to some limit vector L in the Euclidean space, in the sense that
\[ \Bigl\| L - \sum_{k=0}^{N} x_k \Bigr\| \to 0 \quad \text{as } N \to \infty . \]
This property expresses the completeness of Euclidean space: that a series that converges absolutely also converges in the ordinary sense.
Hilbert spaces are often taken over the complex numbers. The complex plane, denoted by \mathbb{C}, is equipped with a notion of magnitude, the complex modulus |z|, which is defined as the square root of the product of z with its complex conjugate:
\[ |z|^2 = z \overline{z} . \]
If z = x + iy is a decomposition of z into its real and imaginary parts, then the modulus is the usual Euclidean two-dimensional length:
\[ |z| = \sqrt{x^2 + y^2} . \]
The inner product of a pair of complex numbers z and w is the product of z with the complex conjugate of w:
\[ \langle z, w \rangle = z \overline{w} . \]
This is complex-valued. The real part of \langle z, w \rangle gives the usual two-dimensional Euclidean dot product.
A second example is the space \mathbb{C}^2 whose elements are pairs of complex numbers z = (z_1, z_2). Then the inner product of z with another such vector w = (w_1, w_2) is given by
\[ \langle z, w \rangle = z_1 \overline{w_1} + z_2 \overline{w_2} . \]
The real part of \langle z, w \rangle is then the four-dimensional Euclidean dot product. This inner product is Hermitian symmetric, which means that the result of interchanging z and w is the complex conjugate:
\[ \langle w, z \rangle = \overline{\langle z, w \rangle} . \]
Definition
A Hilbert space is a real or complex inner product space that is also a complete metric space with respect to the distance function induced by the inner product.
To say that a complex vector space H is a complex inner product space means that there is an inner product \langle x, y \rangle associating a complex number to each pair of elements x, y of H that satisfies the following properties:
The inner product is conjugate symmetric; that is, the inner product of a pair of elements is equal to the complex conjugate of the inner product of the swapped elements: \langle y, x \rangle = \overline{\langle x, y \rangle}. Importantly, this implies that \langle x, x \rangle is a real number.
The inner product is linear in its first argument. For all complex numbers a and b,
\[ \langle a x_1 + b x_2, y \rangle = a \langle x_1, y \rangle + b \langle x_2, y \rangle . \]
The inner product of an element with itself is positive definite:
\[ \langle x, x \rangle > 0 \quad \text{if } x \neq 0, \qquad \langle x, x \rangle = 0 \quad \text{if } x = 0 . \]
It follows from properties 1 and 2 that a complex inner product is antilinear, also called conjugate linear, in its second argument, meaning that
\[ \langle x, a y_1 + b y_2 \rangle = \bar{a} \langle x, y_1 \rangle + \bar{b} \langle x, y_2 \rangle . \]
A real inner product space is defined in the same way, except that H is a real vector space and the inner product takes real values. Such an inner product will be a bilinear map and (H, H, \langle \cdot, \cdot \rangle) will form a dual system.
The norm \|x\| is the real-valued function
\[ \|x\| = \sqrt{\langle x, x \rangle} , \]
and the distance d between two points x, y in H is defined in terms of the norm by
\[ d(x, y) = \|x - y\| . \]
That this function is a distance function means firstly that it is symmetric in x and y, secondly that the distance between x and itself is zero, and otherwise the distance between x and y must be positive, and lastly that the triangle inequality holds, meaning that the length of one leg of a triangle xyz cannot exceed the sum of the lengths of the other two legs:
\[ d(x, z) \le d(x, y) + d(y, z) . \]
This last property is ultimately a consequence of the more fundamental Cauchy–Schwarz inequality, which asserts
\[ |\langle x, y \rangle| \le \|x\| \, \|y\| \]
with equality if and only if x and y are linearly dependent.
With a distance function defined in this way, any inner product space is a metric space, and sometimes is known as a . Any pre-Hilbert space that is additionally also a complete space is a Hilbert space.
The completeness of H is expressed using a form of the Cauchy criterion for sequences in H: a pre-Hilbert space H is complete if every Cauchy sequence converges with respect to this norm to an element in the space. Completeness can be characterized by the following equivalent condition: if a series of vectors
\[ \sum_{k=0}^{\infty} u_k \]
converges absolutely in the sense that
\[ \sum_{k=0}^{\infty} \|u_k\| < \infty , \]
then the series converges in H, in the sense that the partial sums converge to an element of H.
As a complete normed space, Hilbert spaces are by definition also Banach spaces. As such they are topological vector spaces, in which topological notions like the openness and closedness of subsets are well defined. Of special importance is the notion of a closed linear subspace of a Hilbert space that, with the inner product induced by restriction, is also complete (being a closed set in a complete metric space) and therefore a Hilbert space in its own right.
Second example: sequence spaces
The sequence space l^2 consists of all infinite sequences z = (z_1, z_2, ...) of complex numbers such that the following series converges:
\[ \sum_{n=1}^{\infty} |z_n|^2 < \infty . \]
The inner product on l^2 is defined by
\[ \langle z, w \rangle = \sum_{n=1}^{\infty} z_n \overline{w_n} , \]
with the latter series converging as a consequence of the Cauchy–Schwarz inequality and the convergence of the previous series.
Completeness of the space holds provided that whenever a series of elements from converges absolutely (in norm), then it converges to an element of . The proof is basic in mathematical analysis, and permits mathematical series of elements of the space to be manipulated with the same ease as series of complex numbers (or vectors in a finite-dimensional Euclidean space).
History
Prior to the development of Hilbert spaces, other generalizations of Euclidean spaces were known to mathematicians and physicists. In particular, the idea of an abstract linear space (vector space) had gained some traction towards the end of the 19th century: this is a space whose elements can be added together and multiplied by scalars (such as real or complex numbers) without necessarily identifying these elements with "geometric" vectors, such as position and momentum vectors in physical systems. Other objects studied by mathematicians at the turn of the 20th century, in particular spaces of sequences (including series) and spaces of functions, can naturally be thought of as linear spaces. Functions, for instance, can be added together or multiplied by constant scalars, and these operations obey the algebraic laws satisfied by addition and scalar multiplication of spatial vectors.
In the first decade of the 20th century, parallel developments led to the introduction of Hilbert spaces. The first of these was the observation, which arose during David Hilbert and Erhard Schmidt's study of integral equations, that two square-integrable real-valued functions and on an interval have an inner product
that has many of the familiar properties of the Euclidean dot product. In particular, the idea of an orthogonal family of functions has meaning. Schmidt exploited the similarity of this inner product with the usual dot product to prove an analog of the spectral decomposition for an operator of the form
where is a continuous function symmetric in and . The resulting eigenfunction expansion expresses the function as a series of the form
where the functions are orthogonal in the sense that for all . The individual terms in this series are sometimes referred to as elementary product solutions. However, there are eigenfunction expansions that fail to converge in a suitable sense to a square-integrable function: the missing ingredient, which ensures convergence, is completeness.
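A plausible reconstruction of the stripped formulas in this paragraph, assuming the conventional symbols f, g for the square-integrable functions, K for the continuous symmetric kernel, and \varphi_n for the orthogonal eigenfunctions:
\[ \langle f, g \rangle = \int_a^b f(x) g(x) \, dx, \qquad [A f](x) = \int_a^b K(x, y) f(y) \, dy , \]
with the eigenfunction expansion taking the form
\[ f(x) = \sum_n \lambda_n \varphi_n(x), \qquad \langle \varphi_n, \varphi_m \rangle = 0 \ \text{for all } n \neq m . \]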
The second development was the Lebesgue integral, an alternative to the Riemann integral introduced by Henri Lebesgue in 1904. The Lebesgue integral made it possible to integrate a much broader class of functions. In 1907, Frigyes Riesz and Ernst Sigismund Fischer independently proved that the space of square Lebesgue-integrable functions is a complete metric space. As a consequence of the interplay between geometry and completeness, the 19th century results of Joseph Fourier, Friedrich Bessel and Marc-Antoine Parseval on trigonometric series easily carried over to these more general spaces, resulting in a geometrical and analytical apparatus now usually known as the Riesz–Fischer theorem.
Further basic results were proved in the early 20th century. For example, the Riesz representation theorem was independently established by Maurice Fréchet and Frigyes Riesz in 1907. John von Neumann coined the term abstract Hilbert space in his work on unbounded Hermitian operators. Although other mathematicians such as Hermann Weyl and Norbert Wiener had already studied particular Hilbert spaces in great detail, often from a physically motivated point of view, von Neumann gave the first complete and axiomatic treatment of them. Von Neumann later used them in his seminal work on the foundations of quantum mechanics, and in his continued work with Eugene Wigner. The name "Hilbert space" was soon adopted by others, for example by Hermann Weyl in his book on quantum mechanics and the theory of groups.
The significance of the concept of a Hilbert space was underlined with the realization that it offers one of the best mathematical formulations of quantum mechanics. In short, the states of a quantum mechanical system are vectors in a certain Hilbert space, the observables are hermitian operators on that space, the symmetries of the system are unitary operators, and measurements are orthogonal projections. The relation between quantum mechanical symmetries and unitary operators provided an impetus for the development of the unitary representation theory of groups, initiated in the 1928 work of Hermann Weyl. On the other hand, in the early 1930s it became clear that classical mechanics can be described in terms of Hilbert space (Koopman–von Neumann classical mechanics) and that certain properties of classical dynamical systems can be analyzed using Hilbert space techniques in the framework of ergodic theory.
The algebra of observables in quantum mechanics is naturally an algebra of operators defined on a Hilbert space, according to Werner Heisenberg's matrix mechanics formulation of quantum theory. Von Neumann began investigating operator algebras in the 1930s, as rings of operators on a Hilbert space. The kind of algebras studied by von Neumann and his contemporaries are now known as von Neumann algebras. In the 1940s, Israel Gelfand, Mark Naimark and Irving Segal gave a definition of a kind of operator algebras called C*-algebras that on the one hand made no reference to an underlying Hilbert space, and on the other extrapolated many of the useful features of the operator algebras that had previously been studied. The spectral theorem for self-adjoint operators in particular that underlies much of the existing Hilbert space theory was generalized to C*-algebras. These techniques are now basic in abstract harmonic analysis and representation theory.
Examples
Lebesgue spaces
Lebesgue spaces are function spaces associated to measure spaces (X, M, \mu), where X is a set, M is a \sigma-algebra of subsets of X, and \mu is a countably additive measure on M. Let L^2(X, \mu) be the space of those complex-valued measurable functions on X for which the Lebesgue integral of the square of the absolute value of the function is finite, i.e., for a function f in L^2(X, \mu),
\[ \int_X |f|^2 \, d\mu < \infty , \]
and where functions are identified if and only if they differ only on a set of measure zero.
The inner product of functions f and g in L^2(X, \mu) is then defined as
\[ \langle f, g \rangle = \int_X f(t) \overline{g(t)} \, d\mu(t) , \]
or
\[ \langle f, g \rangle = \int_X \overline{f(t)} g(t) \, d\mu(t) , \]
where the second form (conjugation of the first element) is commonly found in the theoretical physics literature. For f and g in L^2, the integral exists because of the Cauchy–Schwarz inequality, and defines an inner product on the space. Equipped with this inner product, L^2 is in fact complete. The Lebesgue integral is essential to ensure completeness: on domains of real numbers, for instance, not enough functions are Riemann integrable.
The Lebesgue spaces appear in many natural settings. The spaces L^2(\mathbb{R}) and L^2([0, 1]) of square-integrable functions with respect to the Lebesgue measure on the real line and unit interval, respectively, are natural domains on which to define the Fourier transform and Fourier series. In other situations, the measure may be something other than the ordinary Lebesgue measure on the real line. For instance, if w is any positive measurable function, the space of all measurable functions f on the interval [0, 1] satisfying
\[ \int_0^1 |f(t)|^2 w(t) \, dt < \infty \]
is called the weighted L^2 space L^2_w([0, 1]), and w is called the weight function. The inner product is defined by
\[ \langle f, g \rangle = \int_0^1 f(t) \overline{g(t)} \, w(t) \, dt . \]
The weighted space L^2_w([0, 1]) is identical with the Hilbert space L^2([0, 1], \mu) where the measure \mu of a Lebesgue-measurable set A is defined by
\[ \mu(A) = \int_A w(t) \, dt . \]
Weighted spaces like this are frequently used to study orthogonal polynomials, because different families of orthogonal polynomials are orthogonal with respect to different weighting functions.
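To make the role of the weight function concrete, here is a small numerical sketch (not part of the source; it assumes NumPy is available) checking that distinct Legendre polynomials, the classical family orthogonal with respect to the weight w(t) = 1 on [-1, 1], have a vanishing weighted inner product:

import numpy as np
from numpy.polynomial import legendre as leg

# Gauss-Legendre quadrature with 20 nodes integrates polynomials of degree
# <= 39 exactly, so it evaluates <P_m, P_n> = integral of P_m * P_n exactly.
nodes, weights = leg.leggauss(20)

def inner(m, n):
    pm = leg.legval(nodes, [0] * m + [1])   # coefficient vector selecting P_m
    pn = leg.legval(nodes, [0] * n + [1])
    return float(np.sum(weights * pm * pn))

print(inner(2, 3))   # ~ 0: distinct Legendre polynomials are orthogonal
print(inner(3, 3))   # ~ 2/(2*3+1) = 0.2857...: the squared norm of P_3

Other weight functions single out other families; for example, the Chebyshev polynomials are orthogonal for the weight w(t) = (1 - t^2)^{-1/2} on the same interval.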
Sobolev spaces
Sobolev spaces, denoted by or , are Hilbert spaces. These are a special kind of function space in which differentiation may be performed, but that (unlike other Banach spaces such as the Hölder spaces) support the structure of an inner product. Because differentiation is permitted, Sobolev spaces are a convenient setting for the theory of partial differential equations. They also form the basis of the theory of direct methods in the calculus of variations.
For a non-negative integer and , the Sobolev space contains functions whose weak derivatives of order up to are also . The inner product in is
where the dot indicates the dot product in the Euclidean space of partial derivatives of each order. Sobolev spaces can also be defined when is not an integer.
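A hedged reconstruction of the Sobolev inner product described above, for integer order s = k, writing D^\alpha for the weak partial derivative with multi-index \alpha (the symbols are assumptions, since the display was stripped):
\[ \langle f, g \rangle_{H^k(\Omega)} = \sum_{|\alpha| \le k} \int_\Omega D^\alpha f(x) \cdot D^\alpha g(x) \, dx . \]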
Sobolev spaces are also studied from the point of view of spectral theory, relying more specifically on the Hilbert space structure. If \Omega is a suitable domain, then one can define the Sobolev space H^s(\Omega) as the space of Bessel potentials; roughly,
\[ H^s(\Omega) = \{ f \in L^2(\Omega) : (1 + \Delta)^{s/2} f \in L^2(\Omega) \} . \]
Here \Delta is the Laplacian and (1 + \Delta)^{s/2} is understood in terms of the spectral mapping theorem. Apart from providing a workable definition of Sobolev spaces for non-integer s, this definition also has particularly desirable properties under the Fourier transform that make it ideal for the study of pseudodifferential operators. Using these methods on a compact Riemannian manifold, one can obtain for instance the Hodge decomposition, which is the basis of Hodge theory.
Spaces of holomorphic functions
Hardy spaces
The Hardy spaces are function spaces, arising in complex analysis and harmonic analysis, whose elements are certain holomorphic functions in a complex domain. Let U denote the unit disc in the complex plane. Then the Hardy space H^2(U) is defined as the space of holomorphic functions f on U such that the means
\[ M_r(f) = \frac{1}{2\pi} \int_0^{2\pi} |f(r e^{i\theta})|^2 \, d\theta \]
remain bounded for r < 1. The norm on this Hardy space is defined by
\[ \|f\|_2 = \lim_{r \to 1} \sqrt{M_r(f)} . \]
Hardy spaces in the disc are related to Fourier series. A function f is in H^2(U) if and only if
\[ f(z) = \sum_{n=0}^{\infty} a_n z^n \quad \text{with} \quad \sum_{n=0}^{\infty} |a_n|^2 < \infty . \]
Thus H^2 consists of those functions that are L^2 on the circle, and whose negative-frequency Fourier coefficients vanish.
Bergman spaces
The Bergman spaces are another family of Hilbert spaces of holomorphic functions. Let be a bounded open set in the complex plane (or a higher-dimensional complex space) and let be the space of holomorphic functions in that are also in in the sense that
where the integral is taken with respect to the Lebesgue measure in . Clearly is a subspace of ; in fact, it is a closed subspace, and so a Hilbert space in its own right. This is a consequence of the estimate, valid on compact subsets of , that
which in turn follows from Cauchy's integral formula. Thus convergence of a sequence of holomorphic functions in implies also compact convergence, and so the limit function is also holomorphic. Another consequence of this inequality is that the linear functional that evaluates a function at a point of is actually continuous on . The Riesz representation theorem implies that the evaluation functional can be represented as an element of . Thus, for every , there is a function such that
for all . The integrand
is known as the Bergman kernel of . This integral kernel satisfies a reproducing property
A Bergman space is an example of a reproducing kernel Hilbert space, which is a Hilbert space of functions along with a kernel that verifies a reproducing property analogous to this one. The Hardy space also admits a reproducing kernel, known as the Szegő kernel. Reproducing kernels are common in other areas of mathematics as well. For instance, in harmonic analysis the Poisson kernel is a reproducing kernel for the Hilbert space of square-integrable harmonic functions in the unit ball. That the latter is a Hilbert space at all is a consequence of the mean value theorem for harmonic functions.
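A hedged reconstruction of the formulas this subsection refers to, assuming D for the domain, \eta_z for the representing element of the evaluation functional, and K for the Bergman kernel (all notation assumed, since the displays were stripped):
\[ \int_D |f(z)|^2 \, d\mu(z) < \infty, \qquad \sup_{z \in K'} |f(z)| \le C_{K'} \, \|f\|_2 \quad \text{for compact } K' \subset D , \]
\[ f(z) = \langle f, \eta_z \rangle, \qquad K(\zeta, z) = \overline{\eta_z(\zeta)}, \qquad f(z) = \int_D f(\zeta) \, K(\zeta, z) \, d\mu(\zeta) . \]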
Applications
Many of the applications of Hilbert spaces exploit the fact that Hilbert spaces support generalizations of simple geometric concepts like projection and change of basis from their usual finite dimensional setting. In particular, the spectral theory of continuous self-adjoint linear operators on a Hilbert space generalizes the usual spectral decomposition of a matrix, and this often plays a major role in applications of the theory to other areas of mathematics and physics.
Sturm–Liouville theory
In the theory of ordinary differential equations, spectral methods on a suitable Hilbert space are used to study the behavior of eigenvalues and eigenfunctions of differential equations. For example, the Sturm–Liouville problem arises in the study of the harmonics of waves in a violin string or a drum, and is a central problem in ordinary differential equations. The problem is a differential equation of the form
for an unknown function on an interval , satisfying general homogeneous Robin boundary conditions
The functions , , and are given in advance, and the problem is to find the function and constants for which the equation has a solution. The problem only has solutions for certain values of , called eigenvalues of the system, and this is a consequence of the spectral theorem for compact operators applied to the integral operator defined by the Green's function for the system. Furthermore, another consequence of this general result is that the eigenvalues of the system can be arranged in an increasing sequence tending to infinity.
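A hedged reconstruction of the boundary value problem this subsection describes, in the standard Sturm–Liouville form on an interval [a, b] (the symbols p, q, w, y, \lambda, \alpha, \beta are assumptions):
\[ -\frac{d}{dx} \left[ p(x) \frac{dy}{dx} \right] + q(x) \, y = \lambda \, w(x) \, y , \]
with homogeneous Robin boundary conditions
\[ \alpha \, y(a) + \alpha' \, y'(a) = 0, \qquad \beta \, y(b) + \beta' \, y'(b) = 0 . \]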
Partial differential equations
Hilbert spaces form a basic tool in the study of partial differential equations. For many classes of partial differential equations, such as linear elliptic equations, it is possible to consider a generalized solution (known as a weak solution) by enlarging the class of functions. Many weak formulations involve the class of Sobolev functions, which is a Hilbert space. A suitable weak formulation reduces to a geometrical problem, the analytic problem of finding a solution or, often what is more important, showing that a solution exists and is unique for given boundary data. For linear elliptic equations, one geometrical result that ensures unique solvability for a large class of problems is the Lax–Milgram theorem. This strategy forms the rudiment of the Galerkin method (a finite element method) for numerical solution of partial differential equations.
A typical example is the Poisson equation with Dirichlet boundary conditions in a bounded domain in . The weak formulation consists of finding a function such that, for all continuously differentiable functions in vanishing on the boundary:
This can be recast in terms of the Hilbert space consisting of functions such that , along with its weak partial derivatives, are square integrable on , and vanish on the boundary. The question then reduces to finding in this space such that for all in this space
where is a continuous bilinear form, and is a continuous linear functional, given respectively by
Since the Poisson equation is elliptic, it follows from Poincaré's inequality that the bilinear form is coercive. The Lax–Milgram theorem then ensures the existence and uniqueness of solutions of this equation.
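A reconstruction sketch of the weak formulation described above, writing the boundary value problem as -\Delta u = g in \Omega with u = 0 on \partial\Omega (the data symbol g is an assumption):
\[ \int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega g \, v \, dx \quad \text{for all } v \in H_0^1(\Omega) , \]
so that the continuous bilinear form and continuous linear functional of the Lax–Milgram theorem are
\[ b(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx, \qquad \ell(v) = \int_\Omega g \, v \, dx . \]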
Hilbert spaces allow for many elliptic partial differential equations to be formulated in a similar way, and the Lax–Milgram theorem is then a basic tool in their analysis. With suitable modifications, similar techniques can be applied to parabolic partial differential equations and certain hyperbolic partial differential equations.
Ergodic theory
The field of ergodic theory is the study of the long-term behavior of chaotic dynamical systems. The prototypical case of a field that ergodic theory applies to is thermodynamics, in which—though the microscopic state of a system is extremely complicated (it is impossible to understand the ensemble of individual collisions between particles of matter)—the average behavior over sufficiently long time intervals is tractable. The laws of thermodynamics are assertions about such average behavior. In particular, one formulation of the zeroth law of thermodynamics asserts that over sufficiently long timescales, the only functionally independent measurement that one can make of a thermodynamic system in equilibrium is its total energy, in the form of temperature.
An ergodic dynamical system is one for which, apart from the energy—measured by the Hamiltonian—there are no other functionally independent conserved quantities on the phase space. More explicitly, suppose that the energy E is fixed, and let X_E be the subset of the phase space consisting of all states of energy E (an energy surface), and let T_t denote the evolution operator on the phase space. The dynamical system is ergodic if every invariant measurable function on X_E is constant almost everywhere. An invariant function f is one for which
\[ f(T_t w) = f(w) \]
for all w on X_E and all time t. Liouville's theorem implies that there exists a measure \mu on the energy surface that is invariant under the time translation. As a result, time translation is a unitary transformation of the Hilbert space L^2(X_E, \mu) consisting of square-integrable functions on the energy surface X_E with respect to the inner product
\[ \langle f, g \rangle_{L^2(X_E, \mu)} = \int_{X_E} f \bar{g} \, d\mu . \]
The von Neumann mean ergodic theorem states the following:
If U_t is a (strongly continuous) one-parameter semigroup of unitary operators on a Hilbert space H, and P is the orthogonal projection onto the space of common fixed points of U_t, \{ x \in H \mid U_t x = x, \ \forall t > 0 \}, then
\[ P x = \lim_{T \to \infty} \frac{1}{T} \int_0^T U_t x \, dt . \]
For an ergodic system, the fixed set of the time evolution consists only of the constant functions, so the ergodic theorem implies the following: for any function f \in L^2(X_E, \mu),
\[ \underset{T \to \infty}{L^2\text{-}\lim} \; \frac{1}{T} \int_0^T f(T_t w) \, dt = \int_{X_E} f(y) \, d\mu(y) . \]
That is, the long time average of an observable is equal to its expectation value over an energy surface.
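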
Fourier analysis
One of the basic goals of Fourier analysis is to decompose a function into a (possibly infinite) linear combination of given basis functions: the associated Fourier series. The classical Fourier series associated to a function f defined on the interval [0, 1] is a series of the form
\[ \sum_{n=-\infty}^{\infty} a_n e^{2\pi i n \theta} , \]
where
\[ a_n = \int_0^1 f(\theta) \, e^{-2\pi i n \theta} \, d\theta . \]
The example of adding up the first few terms in a Fourier series for a sawtooth function is shown in the figure. The basis functions are sine waves with wavelengths \lambda/n (for integer n) shorter than the wavelength \lambda of the sawtooth itself (except for n = 1, the fundamental wave).
A significant problem in classical Fourier series asks in what sense the Fourier series converges, if at all, to the function f. Hilbert space methods provide one possible answer to this question. The functions e_n(\theta) = e^{2\pi i n \theta} form an orthogonal basis of the Hilbert space L^2([0, 1]). Consequently, any square-integrable function can be expressed as a series
\[ f(\theta) = \sum_n a_n e_n(\theta), \qquad a_n = \langle f, e_n \rangle , \]
and, moreover, this series converges in the Hilbert space sense (that is, in the L^2 mean).
The problem can also be studied from the abstract point of view: every Hilbert space has an orthonormal basis, and every element of the Hilbert space can be written in a unique way as a sum of multiples of these basis elements. The coefficients appearing on these basis elements are sometimes known abstractly as the Fourier coefficients of the element of the space. The abstraction is especially useful when it is more natural to use different basis functions for a space such as . In many circumstances, it is desirable not to decompose a function into trigonometric functions, but rather into orthogonal polynomials or wavelets for instance, and in higher dimensions into spherical harmonics.
For instance, if e_1, ..., e_n are any orthonormal basis functions of L^2[0, 1], then a given function f in L^2[0, 1] can be approximated as a finite linear combination
\[ f_n(x) = a_1 e_1(x) + a_2 e_2(x) + \cdots + a_n e_n(x) . \]
The coefficients a_j are selected to make the magnitude of the difference \|f - f_n\|^2 as small as possible. Geometrically, the best approximation is the orthogonal projection of f onto the subspace consisting of all linear combinations of the e_j, and can be calculated by
\[ a_j = \int_0^1 \overline{e_j(x)} \, f(x) \, dx . \]
That this formula minimizes the difference \|f - f_n\|^2 is a consequence of Bessel's inequality and Parseval's formula.
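The projection recipe above is easy to check numerically. The following sketch (not from the source; it assumes NumPy and uses the real trigonometric system rather than the complex exponentials) computes the best approximation of a sawtooth from the coefficients \langle f, e \rangle / \langle e, e \rangle:

import numpy as np

# Best L^2 approximation of a sawtooth by the orthogonal trigonometric
# system {1, cos(n t), sin(n t)}: each coefficient is <f, e>/<e, e>,
# the orthogonal-projection formula from the text.
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
dt = t[1] - t[0]
f = t / np.pi - 1.0                      # sawtooth with values in [-1, 1)

def inner(u, v):
    # Rectangle-rule approximation of the L^2([0, 2*pi]) inner product,
    # which is very accurate for periodic integrands on a uniform grid.
    return np.sum(u * v) * dt

basis = [np.ones_like(t)]
for n in range(1, 6):
    basis += [np.cos(n * t), np.sin(n * t)]

approx = sum(inner(f, e) / inner(e, e) * e for e in basis)

# The mean-square error decreases as more basis functions are added,
# i.e. the partial sums converge to f in the L^2 sense.
print("mean-square error:", inner(f - approx, f - approx))

Doubling the number of basis functions shrinks the printed mean-square error, illustrating convergence in the mean.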
In various applications to physical problems, a function can be decomposed into physically meaningful eigenfunctions of a differential operator (typically the Laplace operator): this forms the foundation for the spectral study of functions, in reference to the spectrum of the differential operator. A concrete physical application involves the problem of hearing the shape of a drum: given the fundamental modes of vibration that a drumhead is capable of producing, can one infer the shape of the drum itself? The mathematical formulation of this question involves the Dirichlet eigenvalues of the Laplace equation in the plane, that represent the fundamental modes of vibration in direct analogy with the integers that represent the fundamental modes of vibration of the violin string.
Spectral theory also underlies certain aspects of the Fourier transform of a function. Whereas Fourier analysis decomposes a function defined on a compact set into the discrete spectrum of the Laplacian (which corresponds to the vibrations of a violin string or drum), the Fourier transform of a function is the decomposition of a function defined on all of Euclidean space into its components in the continuous spectrum of the Laplacian. The Fourier transformation is also geometrical, in a sense made precise by the Plancherel theorem, that asserts that it is an isometry of one Hilbert space (the "time domain") with another (the "frequency domain"). This isometry property of the Fourier transformation is a recurring theme in abstract harmonic analysis (since it reflects the conservation of energy for the continuous Fourier Transform), as evidenced for instance by the Plancherel theorem for spherical functions occurring in noncommutative harmonic analysis.
Quantum mechanics
In the mathematically rigorous formulation of quantum mechanics, developed by John von Neumann, the possible states (more precisely, the pure states) of a quantum mechanical system are represented by unit vectors (called state vectors) residing in a complex separable Hilbert space, known as the state space, well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the position and momentum states for a single non-relativistic spin zero particle is the space of all square-integrable functions, while the states for the spin of a single proton are unit elements of the two-dimensional complex Hilbert space of spinors. Each observable is represented by a self-adjoint linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate.
The inner product between two state vectors is a complex number known as a probability amplitude. During an ideal measurement of a quantum mechanical system, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitudes between the initial and final states. The possible results of a measurement are the eigenvalues of the operator—which explains the choice of self-adjoint operators, for all the eigenvalues must be real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator.
For a general system, states are typically not pure, but instead are represented as statistical mixtures of pure states, or mixed states, given by density matrices: self-adjoint operators of trace one on a Hilbert space. Moreover, for general quantum mechanical systems, the effects of a single measurement can influence other parts of a system in a manner that is described instead by a positive operator valued measure. Thus the structure both of the states and observables in the general theory is considerably more complicated than the idealization for pure states.
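As a toy illustration of this bookkeeping (an invented example, not from the source, assuming NumPy), the following sketch computes measurement probabilities for a spin observable as squared absolute values of the amplitudes of the state in the observable's eigenbasis:

import numpy as np

# Pauli-Z observable on the two-dimensional spin Hilbert space C^2.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# A normalized state vector (unit vector in C^2, defined up to phase).
psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)

# Self-adjoint operators have a spectral decomposition with real eigenvalues.
eigvals, eigvecs = np.linalg.eigh(Z)

# Born rule: the probability of outcome lambda_k is |<e_k, psi>|^2.
amps = eigvecs.conj().T @ psi
probs = np.abs(amps) ** 2
for lam, p in zip(eigvals, probs):
    print(f"outcome {lam:+.0f} with probability {p:.2f}")   # 0.50 each

# The expectation value <psi, Z psi> equals the sum of lambda_k * p_k.
print(np.real(psi.conj() @ Z @ psi), np.sum(eigvals * probs))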
Probability theory
In probability theory, Hilbert spaces also have diverse applications. Here a fundamental Hilbert space is the space of random variables on a given probability space, having class L^2 (finite first and second moments). A common operation in statistics is that of centering a random variable X by subtracting its expectation. Thus if X is a random variable, then X - E(X) is its centering. In the Hilbert space view, this is the orthogonal projection of X onto the kernel of the expectation operator, which is a continuous linear functional on the Hilbert space (in fact, the inner product with the constant random variable 1), and so this kernel is a closed subspace.
The conditional expectation has a natural interpretation in the Hilbert space. Suppose that a probability space (\Omega, F, P) is given, where F is a sigma algebra on the set \Omega, and P is a probability measure on the measure space. If B is a sigma subalgebra of F, then the conditional expectation E[X | B] is the orthogonal projection of X onto the subspace of L^2(\Omega, F, P) consisting of the B-measurable functions. If the random variable X in L^2(\Omega, F, P) is independent of the sigma algebra B, then its conditional expectation, i.e., its projection onto the B-measurable functions, is constant. Equivalently, the projection of its centering is zero.
In particular, if two random variables X and Y (in L^2(\Omega)) are independent, then the centered random variables X - E(X) and Y - E(Y) are orthogonal. (This means that the two variables have zero covariance: they are uncorrelated.) In that case, the Pythagorean theorem in the kernel of the expectation operator implies that the variances of X and Y satisfy the identity:
\[ \operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) , \]
sometimes called the Pythagorean theorem of statistics, which is of importance in linear regression. As one author puts it, "the analysis of variance may be viewed as the decomposition of the squared length of a vector into the sum of the squared lengths of several vectors, using the Pythagorean Theorem."
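A quick numerical sanity check of this identity (an invented example assuming NumPy): for independently sampled random variables, the variance of the sum matches the sum of the variances up to sampling error.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1_000_000)   # independent of y
y = rng.exponential(scale=0.5, size=1_000_000)

# Centered variables are orthogonal in L^2(P) when X and Y are independent,
# so the squared norms (variances) add, up to sampling error.
print(np.var(x + y))          # ~ Var(X) + Var(Y)
print(np.var(x) + np.var(y))  # ~ 1.5**2 + 0.5**2 = 2.5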
The theory of martingales can be formulated in Hilbert spaces. A martingale in a Hilbert space is a sequence x_1, x_2, ... of elements of a Hilbert space such that, for each n, x_n is the orthogonal projection of x_{n+1} onto the linear hull of x_1, ..., x_n. If the x_n are random variables, this reproduces the usual definition of a (discrete) martingale: the expectation of x_{n+1}, conditioned on x_1, ..., x_n, is equal to x_n.
Hilbert spaces are also used throughout the foundations of the Itô calculus. To any square-integrable martingale, it is possible to associate a Hilbert norm on the space of equivalence classes of progressively measurable processes with respect to the martingale (using the quadratic variation of the martingale as the measure). The Itô integral can be constructed by first defining it for simple processes, and then exploiting their density in the Hilbert space. A noteworthy result is then the Itô isometry, which attests that for any martingale M having quadratic variation measure , and any progressively measurable process H:
whenever the expectation on the right-hand side is finite.
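A hedged reconstruction of the Itô isometry stated above, writing \langle M \rangle for the quadratic variation measure of the martingale M (notation assumed, since the display was stripped):
\[ \operatorname{E} \left[ \left( \int_0^t H_s \, dM_s \right)^2 \right] = \operatorname{E} \left[ \int_0^t H_s^2 \, d\langle M \rangle_s \right] . \]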
A deeper application of Hilbert spaces that is especially important in the theory of Gaussian processes is an attempt, due to Leonard Gross and others, to make sense of certain formal integrals over infinite dimensional spaces like the Feynman path integral from quantum field theory. The problem with integrals like this is that there is no infinite-dimensional Lebesgue measure. The notion of an abstract Wiener space allows one to construct a measure on a Banach space B that contains a Hilbert space H, called the Cameron–Martin space, as a dense subset, out of a finitely additive cylinder set measure on H. The resulting measure on B is countably additive and invariant under translation by elements of H, and this provides a mathematically rigorous way of thinking of the Wiener measure as a Gaussian measure on an appropriate Sobolev space.
Color perception
Any true physical color can be represented by a combination of pure spectral colors. As physical colors can be composed of any number of spectral colors, the space of physical colors may aptly be represented by a Hilbert space over spectral colors. Humans have three types of cone cells for color perception, so the perceivable colors can be represented by 3-dimensional Euclidean space. The many-to-one linear mapping from the Hilbert space of physical colors to the Euclidean space of human perceivable colors explains why many distinct physical colors may be perceived by humans to be identical (e.g., pure yellow light versus a mix of red and green light, see Metamerism).
Properties
Pythagorean identity
Two vectors u and v in a Hilbert space H are orthogonal when \langle u, v \rangle = 0. The notation for this is u \perp v. More generally, when S is a subset in H, the notation u \perp S means that u is orthogonal to every element from S.
When u and v are orthogonal, one has
\[ \|u + v\|^2 = \|u\|^2 + \|v\|^2 . \]
By induction on n, this is extended to any family u_1, ..., u_n of n orthogonal vectors:
\[ \|u_1 + \cdots + u_n\|^2 = \|u_1\|^2 + \cdots + \|u_n\|^2 . \]
Whereas the Pythagorean identity as stated is valid in any inner product space, completeness is required for the extension of the Pythagorean identity to series. A series \sum u_k of orthogonal vectors converges in H if and only if the series of squares of norms converges, and
\[ \Bigl\| \sum_{k=0}^{\infty} u_k \Bigr\|^2 = \sum_{k=0}^{\infty} \|u_k\|^2 . \]
Furthermore, the sum of a series of orthogonal vectors is independent of the order in which it is taken.
Parallelogram identity and polarization
By definition, every Hilbert space is also a Banach space. Furthermore, in every Hilbert space the following parallelogram identity holds:
\[ \|u + v\|^2 + \|u - v\|^2 = 2 \bigl( \|u\|^2 + \|v\|^2 \bigr) . \]
Conversely, every Banach space in which the parallelogram identity holds is a Hilbert space, and the inner product is uniquely determined by the norm by the polarization identity. For real Hilbert spaces, the polarization identity is
\[ \langle u, v \rangle = \tfrac{1}{4} \bigl( \|u + v\|^2 - \|u - v\|^2 \bigr) . \]
For complex Hilbert spaces, it is
\[ \langle u, v \rangle = \tfrac{1}{4} \bigl( \|u + v\|^2 - \|u - v\|^2 + i \|u + i v\|^2 - i \|u - i v\|^2 \bigr) . \]
The parallelogram law implies that any Hilbert space is a uniformly convex Banach space.
Best approximation
This subsection employs the Hilbert projection theorem. If C is a non-empty closed convex subset of a Hilbert space H and x a point in H, there exists a unique point y \in C that minimizes the distance between x and points in C:
\[ \|x - y\| = \operatorname{dist}(x, C) = \min \{ \|x - z\| : z \in C \} . \]
This is equivalent to saying that there is a point with minimal norm in the translated convex set D = C - x. The proof consists in showing that every minimizing sequence in D is Cauchy (using the parallelogram identity) hence converges (using completeness) to a point in D that has minimal norm. More generally, this holds in any uniformly convex Banach space.
When this result is applied to a closed subspace F of H, it can be shown that the point y \in F closest to x is characterized by
\[ y \in F \quad \text{and} \quad x - y \perp F . \]
This point y is the orthogonal projection of x onto F, and the mapping P_F : x \mapsto y is linear. This result is especially significant in applied mathematics, especially numerical analysis, where it forms the basis of least squares methods.
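In finite dimensions the projection theorem is exactly the least squares method, as the following sketch shows (an invented example assuming NumPy): the closest point to x in the column span of a matrix A leaves a residual orthogonal to that subspace.

import numpy as np

# Orthogonal projection of x onto the column span of A in R^50: the closest
# point is A @ c where c solves the least-squares problem min ||A c - x||.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))      # spans a 3-dimensional subspace F of R^50
x = rng.normal(size=50)

c, *_ = np.linalg.lstsq(A, x, rcond=None)
p = A @ c                          # orthogonal projection P_F(x)

# The residual x - p is orthogonal to F, the characterization in the text.
print(np.allclose(A.T @ (x - p), 0.0, atol=1e-10))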
In particular, when F is not equal to H, one can find a nonzero vector v orthogonal to F (select x \notin F and v = x - y). A very useful criterion is obtained by applying this observation to the closed subspace F generated by a subset S of H.
A subset S of H spans a dense vector subspace if (and only if) the vector 0 is the sole vector v \in H orthogonal to S.
Duality
The dual space H^* is the space of all continuous linear functions from the space H into the base field. It carries a natural norm, defined by
\[ \|\varphi\| = \sup \{ |\varphi(x)| : x \in H, \ \|x\| = 1 \} . \]
This norm satisfies the parallelogram law, and so the dual space is also an inner product space where this inner product can be defined in terms of this dual norm by using the polarization identity. The dual space is also complete so it is a Hilbert space in its own right.
If e = (e_i)_{i \in I} is a complete orthonormal basis for H, then the inner product on the dual space of any two functionals f, g \in H^* is
\[ \langle f, g \rangle_{H^*} = \sum_{i \in I} f(e_i) \, \overline{g(e_i)} , \]
where all but countably many of the terms in this series are zero.
The Riesz representation theorem affords a convenient description of the dual space. To every element u of H, there is a unique element \varphi_u of H^*, defined by
\[ \varphi_u(x) = \langle x, u \rangle , \]
where moreover \|\varphi_u\| = \|u\|.
The Riesz representation theorem states that the map from H to H^* defined by u \mapsto \varphi_u is surjective, which makes this map an isometric antilinear isomorphism. So to every element \varphi of the dual H^* there exists one and only one u_\varphi in H such that
\[ \varphi(x) = \langle x, u_\varphi \rangle \]
for all x \in H. The inner product on the dual space H^* satisfies
\[ \langle \varphi, \psi \rangle_{H^*} = \langle u_\psi, u_\varphi \rangle . \]
The reversal of order on the right-hand side restores linearity in \varphi from the antilinearity of u_\varphi. In the real case, the antilinear isomorphism from H to its dual is actually an isomorphism, and so real Hilbert spaces are naturally isomorphic to their own duals.
The representing vector is obtained in the following way. When , the kernel is a closed vector subspace of , not equal to , hence there exists a nonzero vector orthogonal to . The vector is a suitable scalar multiple of . The requirement that yields
This correspondence $\varphi \leftrightarrow u_\varphi$ is exploited by the bra–ket notation popular in physics. It is common in physics to assume that the inner product, denoted by $\langle x | y \rangle$, is linear on the right,
$$\langle x | y \rangle = \langle y, x \rangle.$$
The result $\langle x | y \rangle$ can be seen as the action of the linear functional $\langle x |$ (the bra) on the vector $| y \rangle$ (the ket).
The Riesz representation theorem relies fundamentally not just on the presence of an inner product, but also on the completeness of the space. In fact, the theorem implies that the topological dual of any inner product space can be identified with its completion. An immediate consequence of the Riesz representation theorem is also that a Hilbert space is reflexive, meaning that the natural map from into its double dual space is an isomorphism.
Weakly convergent sequences
In a Hilbert space $H$, a sequence $\{x_n\}$ is weakly convergent to a vector $x \in H$ when
$$\lim_{n} \langle x_n, v \rangle = \langle x, v \rangle$$
for every $v \in H$.
For example, any orthonormal sequence converges weakly to 0, as a consequence of Bessel's inequality. Every weakly convergent sequence is bounded, by the uniform boundedness principle.
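This weak convergence can be illustrated in a finite truncation of $\ell^2$: against a fixed square-summable vector $x$, the inner products $\langle e_n, x \rangle$ with the standard basis vectors are simply the coordinates of $x$ and tend to zero, even though each $e_n$ has norm 1 (a sketch with made-up values):

```python
import numpy as np

# Work in a truncation of l^2: x has square-summable, decaying coordinates.
n_max = 50
x = np.array([1.0 / (k + 1) for k in range(n_max)])   # x_k = 1/(k+1)

# <e_n, x> is just the n-th coordinate of x, and it tends to 0:
print(x[0], x[10], x[40])   # -> 1.0, 0.0909..., 0.0243...

# Yet ||e_n|| = 1 for every n, so (e_n) does not converge to 0 in norm.
print(np.linalg.norm(np.eye(n_max)[7]))   # 1.0
```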
Conversely, every bounded sequence in a Hilbert space admits weakly convergent subsequences (Alaoglu's theorem). This fact may be used to prove minimization results for continuous convex functionals, in the same way that the Bolzano–Weierstrass theorem is used for continuous functions on . Among several variants, one simple statement is as follows:
If $f : H \to \mathbb{R}$ is a convex continuous function such that $f(x)$ tends to $+\infty$ when $\|x\|$ tends to $\infty$, then $f$ admits a minimum at some point $x_0 \in H$.
This fact (and its various generalizations) are fundamental for direct methods in the calculus of variations. Minimization results for convex functionals are also a direct consequence of the slightly more abstract fact that closed bounded convex subsets in a Hilbert space are weakly compact, since is reflexive. The existence of weakly convergent subsequences is a special case of the Eberlein–Šmulian theorem.
Banach space properties
Any general property of Banach spaces continues to hold for Hilbert spaces. The open mapping theorem states that a continuous surjective linear transformation from one Banach space to another is an open mapping meaning that it sends open sets to open sets. A corollary is the bounded inverse theorem, that a continuous and bijective linear function from one Banach space to another is an isomorphism (that is, a continuous linear map whose inverse is also continuous). This theorem is considerably simpler to prove in the case of Hilbert spaces than in general Banach spaces. The open mapping theorem is equivalent to the closed graph theorem, which asserts that a linear function from one Banach space to another is continuous if and only if its graph is a closed set. In the case of Hilbert spaces, this is basic in the study of unbounded operators (see Closed operator).
The (geometrical) Hahn–Banach theorem asserts that a closed convex set can be separated from any point outside it by means of a hyperplane of the Hilbert space. This is an immediate consequence of the best approximation property: if is the element of a closed convex set closest to , then the separating hyperplane is the plane perpendicular to the segment passing through its midpoint.
Operators on Hilbert spaces
Bounded operators
The continuous linear operators $A : H_1 \to H_2$ from a Hilbert space $H_1$ to a second Hilbert space $H_2$ are bounded in the sense that they map bounded sets to bounded sets. Conversely, if an operator is bounded, then it is continuous. The space of such bounded linear operators has a norm, the operator norm given by
$$\|A\| = \sup\{\|Ax\| : \|x\| \le 1\}.$$
The sum and the composite of two bounded linear operators is again bounded and linear. For $y$ in $H_2$, the map that sends $x \in H_1$ to $\langle Ax, y \rangle$ is linear and continuous, and according to the Riesz representation theorem can therefore be represented in the form
$$\langle x, A^* y \rangle = \langle Ax, y \rangle$$
for some vector $A^* y$ in $H_1$. This defines another bounded linear operator $A^* : H_2 \to H_1$, the adjoint of $A$. The adjoint satisfies $A^{**} = A$. When the Riesz representation theorem is used to identify each Hilbert space with its continuous dual space, the adjoint of $A$ can be shown to be identical to the transpose ${}^t A : H_2^* \to H_1^*$ of $A$, which by definition sends $\psi \in H_2^*$ to the functional
$$\psi \circ A \in H_1^*.$$
The set of all bounded linear operators on (meaning operators ), together with the addition and composition operations, the norm and the adjoint operation, is a C*-algebra, which is a type of operator algebra.
An element $A$ of $B(H)$ is called 'self-adjoint' or 'Hermitian' if $A = A^*$. If $A$ is Hermitian and $\langle Ax, x \rangle \ge 0$ for every $x$, then $A$ is called 'nonnegative', written $A \ge 0$; if equality holds only when $x = 0$, then $A$ is called 'positive'. The set of self-adjoint operators admits a partial order, in which $A \ge B$ if $A - B \ge 0$. If $A$ has the form $B^* B$ for some $B$, then $A$ is nonnegative; if $B$ is invertible, then $A$ is positive. A converse is also true in the sense that, for a non-negative operator $A$, there exists a unique non-negative square root $B$ such that
$$A = B^2.$$
In a sense made precise by the spectral theorem, self-adjoint operators can usefully be thought of as operators that are "real". An element $T$ of $B(H)$ is called normal if $T^* T = T T^*$. Normal operators decompose into the sum of a self-adjoint operator and an imaginary multiple of a self-adjoint operator,
$$T = \frac{T + T^*}{2} + i\,\frac{T - T^*}{2i},$$
that commute with each other. Normal operators can also usefully be thought of in terms of their real and imaginary parts.
An element $U$ of $B(H)$ is called unitary if $U$ is invertible and its inverse is given by $U^*$. This can also be expressed by requiring that $U$ be onto and $\langle Ux, Uy \rangle = \langle x, y \rangle$ for all $x, y \in H$. The unitary operators form a group under composition, which is the isometry group of $H$.
An element $T$ of $B(H)$ is compact if it sends bounded sets to relatively compact sets. Equivalently, a bounded operator $T$ is compact if, for any bounded sequence $\{x_k\}$, the sequence $\{T x_k\}$ has a convergent subsequence. Many integral operators are compact, and in fact define a special class of operators known as Hilbert–Schmidt operators that are especially important in the study of integral equations. Fredholm operators differ from a compact operator by a multiple of the identity, and are equivalently characterized as operators with a finite dimensional kernel and cokernel. The index of a Fredholm operator $T$ is defined by
$$\operatorname{index} T = \dim \ker T - \dim \operatorname{coker} T.$$
The index is homotopy invariant, and plays a deep role in differential geometry via the Atiyah–Singer index theorem.
Unbounded operators
Unbounded operators are also tractable in Hilbert spaces, and have important applications to quantum mechanics. An unbounded operator $T$ on a Hilbert space $H$ is defined as a linear operator whose domain $D(T)$ is a linear subspace of $H$. Often the domain $D(T)$ is a dense subspace of $H$, in which case $T$ is known as a densely defined operator.
The adjoint of a densely defined unbounded operator is defined in essentially the same manner as for bounded operators. Self-adjoint unbounded operators play the role of the observables in the mathematical formulation of quantum mechanics. Examples of self-adjoint unbounded operators on the Hilbert space $L^2(\mathbb{R})$ are:
A suitable extension of the differential operator
$$(A f)(x) = -i \frac{df}{dx}(x),$$
where $i$ is the imaginary unit and $f$ is a differentiable function of compact support.
The multiplication-by-$x$ operator:
$$(B f)(x) = x f(x).$$
These correspond to the momentum and position observables, respectively. Neither $A$ nor $B$ is defined on all of $H$, since in the case of $A$ the derivative need not exist, and in the case of $B$ the product function need not be square integrable. In both cases, the set of possible arguments forms a dense subspace of $L^2(\mathbb{R})$.
Constructions
Direct sums
Two Hilbert spaces $H_1$ and $H_2$ can be combined into another Hilbert space, called the (orthogonal) direct sum, and denoted
$$H_1 \oplus H_2,$$
consisting of the set of all ordered pairs $(x_1, x_2)$ where $x_i \in H_i$, $i = 1, 2$, and inner product defined by
$$\langle (x_1, x_2), (y_1, y_2) \rangle_{H_1 \oplus H_2} = \langle x_1, y_1 \rangle_{H_1} + \langle x_2, y_2 \rangle_{H_2}.$$
More generally, if $H_i$ is a family of Hilbert spaces indexed by $i \in I$, then the direct sum of the $H_i$, denoted
$$\bigoplus_{i \in I} H_i,$$
consists of the set of all indexed families
$$x = (x_i)_{i \in I}$$
in the Cartesian product of the $H_i$ such that
$$\sum_{i \in I} \|x_i\|^2 < \infty.$$
The inner product is defined by
$$\langle x, y \rangle = \sum_{i \in I} \langle x_i, y_i \rangle.$$
Each of the $H_i$ is included as a closed subspace in the direct sum of all of the $H_i$. Moreover, the $H_i$ are pairwise orthogonal. Conversely, if there is a system of closed subspaces, $V_i$, $i \in I$, in a Hilbert space $H$, that are pairwise orthogonal and whose union is dense in $H$, then $H$ is canonically isomorphic to the direct sum of $V_i$. In this case, $H$ is called the internal direct sum of the $V_i$. A direct sum (internal or external) is also equipped with a family of orthogonal projections $E_i$ onto the $i$th direct summand $H_i$. These projections are bounded, self-adjoint, idempotent operators that satisfy the orthogonality condition
$$E_i E_j = 0, \quad i \ne j.$$
The spectral theorem for compact self-adjoint operators on a Hilbert space states that splits into an orthogonal direct sum of the eigenspaces of an operator, and also gives an explicit decomposition of the operator as a sum of projections onto the eigenspaces. The direct sum of Hilbert spaces also appears in quantum mechanics as the Fock space of a system containing a variable number of particles, where each Hilbert space in the direct sum corresponds to an additional degree of freedom for the quantum mechanical system. In representation theory, the Peter–Weyl theorem guarantees that any unitary representation of a compact group on a Hilbert space splits as the direct sum of finite-dimensional representations.
Tensor products
If $x_1, y_1 \in H_1$ and $x_2, y_2 \in H_2$, then one defines an inner product on the (ordinary) tensor product as follows. On simple tensors, let
$$\langle x_1 \otimes x_2, \, y_1 \otimes y_2 \rangle = \langle x_1, y_1 \rangle \, \langle x_2, y_2 \rangle.$$
This formula then extends by sesquilinearity to an inner product on $H_1 \otimes H_2$. The Hilbertian tensor product of $H_1$ and $H_2$, sometimes denoted by $H_1 \widehat{\otimes} H_2$, is the Hilbert space obtained by completing $H_1 \otimes H_2$ for the metric associated to this inner product.
An example is provided by the Hilbert space $L^2([0, 1])$. The Hilbertian tensor product of two copies of $L^2([0, 1])$ is isometrically and linearly isomorphic to the space $L^2([0, 1]^2)$ of square-integrable functions on the square $[0, 1]^2$. This isomorphism sends a simple tensor $f_1 \otimes f_2$ to the function
$$(s, t) \mapsto f_1(s) \, f_2(t)$$
on the square.
This example is typical in the following sense. Associated to every simple tensor product $x_1 \otimes x_2$ is the rank one operator from $H_1^*$ to $H_2$ that maps a given $x^* \in H_1^*$ as
$$x^* \mapsto x^*(x_1) \, x_2.$$
This mapping defined on simple tensors extends to a linear identification between $H_1 \otimes H_2$ and the space of finite rank operators from $H_1^*$ to $H_2$. This extends to a linear isometry of the Hilbertian tensor product $H_1 \widehat{\otimes} H_2$ with the Hilbert space of Hilbert–Schmidt operators from $H_1^*$ to $H_2$.
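In finite dimensions, where a real Hilbert space is identified with its dual, this identification of simple tensors with rank-one operators is just the outer product. A small NumPy sketch (illustrative vectors, finite-dimensional real case only):

```python
import numpy as np

# Identify a simple tensor x1 (x) x2 with the rank-one operator
# y |-> <y, x2> x1, which in coordinates is an outer-product matrix.
x1 = np.array([1.0, 2.0])        # element of H1 = R^2
x2 = np.array([3.0, 0.0, 4.0])   # element of H2 = R^3

T = np.outer(x1, x2)             # maps R^3 -> R^2, rank one
y = np.array([1.0, 1.0, 1.0])
print(np.allclose(T @ y, np.dot(y, x2) * x1))    # True
print(np.linalg.matrix_rank(T))                  # 1

# The Hilbert-Schmidt (Frobenius) norm of T equals ||x1|| * ||x2||,
# matching the tensor-product inner product on simple tensors.
print(np.isclose(np.linalg.norm(T, 'fro'),
                 np.linalg.norm(x1) * np.linalg.norm(x2)))  # True
```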
Orthonormal bases
The notion of an orthonormal basis from linear algebra generalizes over to the case of Hilbert spaces. In a Hilbert space $H$, an orthonormal basis is a family $\{e_k\}_{k \in B}$ of elements of $H$ satisfying the conditions:
Orthogonality: Every two different elements of the family are orthogonal: $\langle e_k, e_j \rangle = 0$ for all $k, j \in B$ with $k \ne j$.
Normalization: Every element of the family has norm 1: $\|e_k\| = 1$ for all $k \in B$.
Completeness: The linear span of the family $e_k$, $k \in B$, is dense in $H$.
A system of vectors satisfying the first two conditions is called an orthonormal system or an orthonormal set (or an orthonormal sequence if $B$ is countable). Such a system is always linearly independent.
Despite the name, an orthonormal basis is not, in general, a basis in the sense of linear algebra (Hamel basis). More precisely, an orthonormal basis is a Hamel basis if and only if the Hilbert space is a finite-dimensional vector space.
Completeness of an orthonormal system of vectors of a Hilbert space can be equivalently restated as:
for every $v \in H$, if $\langle v, e_k \rangle = 0$ for all $k \in B$, then $v = 0$.
This is related to the fact that the only vector orthogonal to a dense linear subspace is the zero vector, for if $S$ is any orthonormal set and $v$ is orthogonal to $S$, then $v$ is orthogonal to the closure of the linear span of $S$, which is the whole space.
Examples of orthonormal bases include:
the set $\{e_1, \dots, e_n\}$ with $e_i = (0, \dots, 1, \dots, 0)$, where 1 is the $i$th entry, forms an orthonormal basis of $\mathbb{R}^n$ with the dot product;
the sequence $\{f_n : n \in \mathbb{Z}\}$ with $f_n(x) = e^{2\pi i n x}$ forms an orthonormal basis of the complex space $L^2([0, 1])$;
In the infinite-dimensional case, an orthonormal basis will not be a basis in the sense of linear algebra; to distinguish the two, the latter basis is also called a Hamel basis. That the span of the basis vectors is dense implies that every vector in the space can be written as the sum of an infinite series, and the orthogonality implies that this decomposition is unique.
Sequence spaces
The space $\ell^2$ of square-summable sequences of complex numbers is the set of infinite sequences
$$(c_1, c_2, c_3, \dots)$$
of real or complex numbers such that
$$|c_1|^2 + |c_2|^2 + |c_3|^2 + \cdots < \infty.$$
This space has an orthonormal basis:
$$e_1 = (1, 0, 0, \dots), \quad e_2 = (0, 1, 0, \dots), \quad e_3 = (0, 0, 1, \dots), \quad \dots$$
This space is the infinite-dimensional generalization of the space of finite-dimensional vectors. It is usually the first example used to show that in infinite-dimensional spaces, a set that is closed and bounded is not necessarily (sequentially) compact (as is the case in all finite dimensional spaces). Indeed, the set of orthonormal vectors above shows this: It is an infinite sequence of vectors in the unit ball (i.e., the ball of points with norm less than or equal to one). This set is clearly bounded and closed; yet, no subsequence of these vectors converges to anything and consequently the unit ball in $\ell^2$ is not compact. Intuitively, this is because "there is always another coordinate direction" into which the next elements of the sequence can escape.
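A small sketch of this non-compactness argument, using truncated standard basis vectors (the truncation dimension is arbitrary): every pair of distinct orthonormal vectors is a distance $\sqrt{2}$ apart, so no subsequence can be Cauchy, hence none converges.

```python
import numpy as np

# Truncated standard basis vectors of l^2: each has norm 1 (inside the
# closed unit ball), but every pair is sqrt(2) apart.
dim = 10
basis = np.eye(dim)
dists = [np.linalg.norm(basis[i] - basis[j])
         for i in range(dim) for j in range(i)]
print(set(np.round(dists, 12)))   # a single value: sqrt(2) = 1.414213562373...
```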
One can generalize the space $\ell^2$ in many ways. For example, if $B$ is any set, then one can form a Hilbert space of sequences with index set $B$, defined by
$$\ell^2(B) = \Bigl\{ x : B \to \mathbb{C} \;\Bigm|\; \sum_{b \in B} |x(b)|^2 < \infty \Bigr\}.$$
The summation over $B$ is here defined by
$$\sum_{b \in B} |x(b)|^2 = \sup \sum_{n=1}^{N} |x(b_n)|^2,$$
the supremum being taken over all finite subsets of $B$. It follows that, for this sum to be finite, every element of $\ell^2(B)$ has only countably many nonzero terms. This space becomes a Hilbert space with the inner product
$$\langle x, y \rangle = \sum_{b \in B} x(b) \overline{y(b)}$$
for all $x, y \in \ell^2(B)$. Here the sum also has only countably many nonzero terms, and is unconditionally convergent by the Cauchy–Schwarz inequality.
An orthonormal basis of $\ell^2(B)$ is indexed by the set $B$, given by
$$e_b(b') = \begin{cases} 1 & \text{if } b' = b, \\ 0 & \text{otherwise.} \end{cases}$$
Bessel's inequality and Parseval's formula
Let $f_1, \ldots, f_n$ be a finite orthonormal system in $H$. For an arbitrary vector $x \in H$, let
$$y = \sum_{j=1}^{n} \langle x, f_j \rangle f_j.$$
Then $\langle x, f_k \rangle = \langle y, f_k \rangle$ for every $k = 1, \dots, n$. It follows that $x - y$ is orthogonal to each $f_k$, hence $x - y$ is orthogonal to $y$. Using the Pythagorean identity twice, it follows that
$$\|x\|^2 = \|x - y\|^2 + \|y\|^2 \ge \|y\|^2 = \sum_{j=1}^{n} |\langle x, f_j \rangle|^2.$$
Let $(f_i)_{i \in I}$ be an arbitrary orthonormal system in $H$. Applying the preceding inequality to every finite subset $J$ of $I$ gives Bessel's inequality:
$$\sum_{i \in I} |\langle x, f_i \rangle|^2 \le \|x\|^2, \quad x \in H$$
(according to the definition of the sum of an arbitrary family of non-negative real numbers).
Geometrically, Bessel's inequality implies that the orthogonal projection of onto the linear subspace spanned by the has norm that does not exceed that of . In two dimensions, this is the assertion that the length of the leg of a right triangle may not exceed the length of the hypotenuse.
Bessel's inequality is a stepping stone to the stronger result called Parseval's identity, which governs the case when Bessel's inequality is actually an equality. By definition, if $\{e_k\}_{k \in B}$ is an orthonormal basis of $H$, then every element $x$ of $H$ may be written as
$$x = \sum_{k \in B} \langle x, e_k \rangle \, e_k.$$
Even if $B$ is uncountable, Bessel's inequality guarantees that the expression is well-defined and consists only of countably many nonzero terms. This sum is called the Fourier expansion of $x$, and the individual coefficients $\langle x, e_k \rangle$ are the Fourier coefficients of $x$. Parseval's identity then asserts that
$$\sum_{k \in B} |\langle x, e_k \rangle|^2 = \|x\|^2.$$
Conversely, if $\{e_k\}_{k \in B}$ is an orthonormal set such that Parseval's identity holds for every $x \in H$, then $\{e_k\}_{k \in B}$ is an orthonormal basis.
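The Fourier expansion, Parseval's identity, and Bessel's inequality can be checked numerically in $\mathbb{R}^6$, with an orthonormal basis produced by a QR factorization (a sketch with random data, not tied to any particular basis):

```python
import numpy as np

rng = np.random.default_rng(2)
# Build an orthonormal basis of R^6 via QR factorization of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
basis = Q.T                       # rows are orthonormal basis vectors e_k

x = rng.standard_normal(6)
coeffs = basis @ x                # Fourier coefficients <x, e_k>

# Fourier expansion reconstructs x, and Parseval's identity holds:
print(np.allclose(sum(c * e for c, e in zip(coeffs, basis)), x))  # True
print(np.isclose(np.sum(coeffs**2), np.linalg.norm(x)**2))        # True

# Bessel's inequality: a *partial* orthonormal family only gives an inequality.
print(np.sum(coeffs[:3]**2) <= np.linalg.norm(x)**2)              # True
```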
Hilbert dimension
As a consequence of Zorn's lemma, every Hilbert space admits an orthonormal basis; furthermore, any two orthonormal bases of the same space have the same cardinality, called the Hilbert dimension of the space. For instance, since $\ell^2(B)$ has an orthonormal basis indexed by $B$, its Hilbert dimension is the cardinality of $B$ (which may be a finite integer, or a countable or uncountable cardinal number).
The Hilbert dimension is not greater than the Hamel dimension (the usual dimension of a vector space). The two dimensions are equal if and only if one of them is finite.
As a consequence of Parseval's identity, if $\{e_k\}_{k \in B}$ is an orthonormal basis of $H$, then the map $\Phi : H \to \ell^2(B)$ defined by $\Phi(x) = (\langle x, e_k \rangle)_{k \in B}$ is an isometric isomorphism of Hilbert spaces: it is a bijective linear mapping such that
$$\langle \Phi(x), \Phi(y) \rangle_{\ell^2(B)} = \langle x, y \rangle$$
for all $x, y \in H$. The cardinal number of $B$ is the Hilbert dimension of $H$. Thus every Hilbert space is isometrically isomorphic to a sequence space $\ell^2(B)$ for some set $B$.
Separable spaces
By definition, a Hilbert space is separable provided it contains a dense countable subset. Along with Zorn's lemma, this means a Hilbert space is separable if and only if it admits a countable orthonormal basis. All infinite-dimensional separable Hilbert spaces are therefore isometrically isomorphic to the square-summable sequence space $\ell^2$.
In the past, Hilbert spaces were often required to be separable as part of the definition.
In quantum field theory
Most spaces used in physics are separable, and since these are all isomorphic to each other, one often refers to any infinite-dimensional separable Hilbert space as "the Hilbert space" or just "Hilbert space". Even in quantum field theory, most of the Hilbert spaces are in fact separable, as stipulated by the Wightman axioms. However, it is sometimes argued that non-separable Hilbert spaces are also important in quantum field theory, roughly because the systems in the theory possess an infinite number of degrees of freedom and any infinite Hilbert tensor product (of spaces of dimension greater than one) is non-separable. For instance, a bosonic field can be naturally thought of as an element of a tensor product whose factors represent harmonic oscillators at each point of space. From this perspective, the natural state space of a boson might seem to be a non-separable space. However, it is only a small separable subspace of the full tensor product that can contain physically meaningful fields (on which the observables can be defined). Another non-separable Hilbert space models the state of an infinite collection of particles in an unbounded region of space. An orthonormal basis of the space is indexed by the density of the particles, a continuous parameter, and since the set of possible densities is uncountable, the basis is not countable.
Orthogonal complements and projections
If $S$ is a subset of a Hilbert space $H$, the set of vectors orthogonal to $S$ is defined by
$$S^{\perp} = \left\{ x \in H : \langle x, s \rangle = 0 \ \text{for all } s \in S \right\}.$$
The set $S^\perp$ is a closed subspace of $H$ (this can be proved easily using the linearity and continuity of the inner product) and so forms itself a Hilbert space. If $V$ is a closed subspace of $H$, then $V^\perp$ is called the orthogonal complement of $V$. In fact, every $x \in H$ can then be written uniquely as $x = v + w$, with $v \in V$ and $w \in V^\perp$. Therefore, $H$ is the internal Hilbert direct sum of $V$ and $V^\perp$.
The linear operator $P_V : H \to H$ that maps $x$ to $v$ is called the orthogonal projection onto $V$. There is a natural one-to-one correspondence between the set of all closed subspaces of $H$ and the set of all bounded self-adjoint operators $P$ such that $P^2 = P$. Specifically, the orthogonal projection $P_V$ is a self-adjoint idempotent operator of norm at most 1, and conversely every self-adjoint operator $P$ with $P^2 = P$ is the orthogonal projection onto its range.
This provides the geometrical interpretation of $P_V x$: it is the best approximation to x by elements of V.
Projections $P_U$ and $P_V$ are called mutually orthogonal if $P_U P_V = 0$. This is equivalent to $U$ and $V$ being orthogonal as subspaces of $H$. The sum of the two projections $P_U$ and $P_V$ is a projection only if $U$ and $V$ are orthogonal to each other, and in that case $P_U + P_V = P_{U + V}$. The composite $P_U P_V$ is generally not a projection; in fact, the composite is a projection if and only if the two projections commute, and in that case $P_U P_V = P_{U \cap V}$.
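A short NumPy sketch of this projection algebra on $\mathbb{R}^4$ (the subspaces chosen are illustrative): each projection is idempotent and self-adjoint, and two mutually orthogonal projections sum to another projection.

```python
import numpy as np

def proj(A):
    """Orthogonal projection onto the column span of A (full-rank A)."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

P = proj(np.array([[1.], [1.], [0.], [0.]]))   # onto span{(1,1,0,0)}
Q = proj(np.array([[0.], [0.], [1.], [0.]]))   # onto span{(0,0,1,0)}

# Defining properties: idempotent and self-adjoint.
print(np.allclose(P @ P, P), np.allclose(P, P.T))   # True True

# P and Q are mutually orthogonal (PQ = 0), so P + Q is again a projection.
print(np.allclose(P @ Q, 0))                        # True
S = P + Q
print(np.allclose(S @ S, S))                        # True
```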
By restricting the codomain to the Hilbert space $V$, the orthogonal projection $P_V$ gives rise to a projection mapping $\pi : H \to V$; it is the adjoint of the inclusion mapping
$$i : V \to H,$$
meaning that
$$\langle i x, y \rangle_H = \langle x, \pi y \rangle_V$$
for all $x \in V$ and $y \in H$.
The operator norm of the orthogonal projection $P_V$ onto a nonzero closed subspace $V$ is equal to 1:
$$\|P_V\| = \sup_{x \in H,\, x \ne 0} \frac{\|P_V x\|}{\|x\|} = 1.$$
Every closed subspace $V$ of a Hilbert space is therefore the image of an operator $P$ of norm one such that $P^2 = P$. The property of possessing appropriate projection operators characterizes Hilbert spaces:
A Banach space of dimension higher than 2 is (isometrically) a Hilbert space if and only if, for every closed subspace $V$, there is an operator $P_V$ of norm one whose image is $V$ such that $P_V^2 = P_V$.
While this result characterizes the metric structure of a Hilbert space, the structure of a Hilbert space as a topological vector space can itself be characterized in terms of the presence of complementary subspaces:
A Banach space $X$ is topologically and linearly isomorphic to a Hilbert space if and only if, to every closed subspace $V$, there is a closed subspace $V'$ such that $X$ is equal to the internal direct sum $V \oplus V'$.
The orthogonal complement satisfies some more elementary results. It is a monotone function in the sense that if $U \subseteq V$, then $V^\perp \subseteq U^\perp$, with equality holding if and only if $V$ is contained in the closure of $U$. This result is a special case of the Hahn–Banach theorem. The closure of a subspace can be completely characterized in terms of the orthogonal complement: if $V$ is a subspace of $H$, then the closure of $V$ is equal to $V^{\perp \perp}$. The orthogonal complement is thus a Galois connection on the partial order of subspaces of a Hilbert space. In general, the orthogonal complement of a sum of subspaces is the intersection of the orthogonal complements:
$$\Bigl(\sum_i V_i\Bigr)^{\perp} = \bigcap_i V_i^{\perp}.$$
If the $V_i$ are in addition closed, then
$$\overline{\sum_i V_i^{\perp}} = \Bigl(\bigcap_i V_i\Bigr)^{\perp}.$$
Spectral theory
There is a well-developed spectral theory for self-adjoint operators in a Hilbert space, that is roughly analogous to the study of symmetric matrices over the reals or self-adjoint matrices over the complex numbers. In the same sense, one can obtain a "diagonalization" of a self-adjoint operator as a suitable sum (actually an integral) of orthogonal projection operators.
The spectrum of an operator $T$, denoted $\sigma(T)$, is the set of complex numbers $\lambda$ such that $T - \lambda$ lacks a continuous inverse. If $T$ is bounded, then the spectrum is always a compact set in the complex plane, and lies inside the disc $|z| \le \|T\|$. If $T$ is self-adjoint, then the spectrum is real. In fact, it is contained in the interval $[m, M]$ where
$$m = \inf_{\|x\| = 1} \langle Tx, x \rangle, \quad M = \sup_{\|x\| = 1} \langle Tx, x \rangle.$$
Moreover, $m$ and $M$ are both actually contained within the spectrum.
The eigenspaces of an operator $T$ are given by
$$H_{\lambda} = \ker(T - \lambda).$$
Unlike with finite matrices, not every element of the spectrum of must be an eigenvalue: the linear operator may only lack an inverse because it is not surjective. Elements of the spectrum of an operator in the general sense are known as spectral values. Since spectral values need not be eigenvalues, the spectral decomposition is often more subtle than in finite dimensions.
However, the spectral theorem of a self-adjoint operator takes a particularly simple form if, in addition, is assumed to be a compact operator. The spectral theorem for compact self-adjoint operators states:
A compact self-adjoint operator $T$ has only countably (or finitely) many spectral values. The spectrum of $T$ has no limit point in the complex plane except possibly zero. The eigenspaces of $T$ decompose $H$ into an orthogonal direct sum:
$$H = \bigoplus_{\lambda \in \sigma(T)} H_\lambda.$$
Moreover, if $E_\lambda$ denotes the orthogonal projection onto the eigenspace $H_\lambda$, then
$$T = \sum_{\lambda \in \sigma(T)} \lambda E_\lambda,$$
where the sum converges with respect to the norm on $B(H)$.
This theorem plays a fundamental role in the theory of integral equations, as many integral operators are compact, in particular those that arise from Hilbert–Schmidt operators.
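In finite dimensions every self-adjoint operator is compact, so the theorem reduces to the eigendecomposition of a symmetric matrix. The following sketch (a random symmetric matrix, illustrative only) rebuilds the operator as a sum of eigenvalues times orthogonal projections onto the eigenspaces:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
T = (B + B.T) / 2                      # self-adjoint (symmetric) operator

eigvals, eigvecs = np.linalg.eigh(T)   # real spectrum, orthonormal eigenvectors

# Reassemble T as T = sum_k lambda_k P_k, with P_k the orthogonal
# projection onto the k-th eigenspace.
T_rebuilt = sum(lam * np.outer(v, v) for lam, v in zip(eigvals, eigvecs.T))
print(np.allclose(T, T_rebuilt))       # True

# The projections onto distinct eigenspaces are mutually orthogonal.
P0 = np.outer(eigvecs[:, 0], eigvecs[:, 0])
P1 = np.outer(eigvecs[:, 1], eigvecs[:, 1])
print(np.allclose(P0 @ P1, 0))         # True
```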
The general spectral theorem for self-adjoint operators involves a kind of operator-valued Riemann–Stieltjes integral, rather than an infinite summation. The spectral family associated to $T$ associates to each real number $\lambda$ an operator $E_\lambda$, which is the projection onto the nullspace of the operator $(T - \lambda)^+$, where the positive part of a self-adjoint operator is defined by
$$A^{+} = \tfrac{1}{2}\bigl(\sqrt{A^2} + A\bigr).$$
The operators $E_\lambda$ are monotone increasing relative to the partial order defined on self-adjoint operators; the eigenvalues correspond precisely to the jump discontinuities. One has the spectral theorem, which asserts
$$T = \int_{\mathbb{R}} \lambda \, dE_\lambda.$$
The integral is understood as a Riemann–Stieltjes integral, convergent with respect to the norm on $B(H)$. In particular, one has the ordinary scalar-valued integral representation
$$\langle Tx, y \rangle = \int_{\mathbb{R}} \lambda \, d\langle E_\lambda x, y \rangle.$$
A somewhat similar spectral decomposition holds for normal operators, although because the spectrum may now contain non-real complex numbers, the operator-valued Stieltjes measure must instead be replaced by a resolution of the identity.
A major application of spectral methods is the spectral mapping theorem, which allows one to apply to a self-adjoint operator $T$ any continuous complex function $f$ defined on the spectrum of $T$ by forming the integral
$$f(T) = \int_{\sigma(T)} f(\lambda) \, dE_\lambda.$$
The resulting continuous functional calculus has applications in particular to pseudodifferential operators.
The spectral theory of unbounded self-adjoint operators is only marginally more difficult than for bounded operators. The spectrum of an unbounded operator is defined in precisely the same way as for bounded operators: $\lambda$ is a spectral value if the resolvent operator
$$R_\lambda = (T - \lambda)^{-1}$$
fails to be a well-defined continuous operator. The self-adjointness of $T$ still guarantees that the spectrum is real. Thus the essential idea of working with unbounded operators is to look instead at the resolvent $R_\lambda$ where $\lambda$ is nonreal. This is a bounded normal operator, which admits a spectral representation that can then be transferred to a spectral representation of $T$ itself. A similar strategy is used, for instance, to study the spectrum of the Laplace operator: rather than address the operator directly, one instead looks at an associated resolvent such as a Riesz potential or Bessel potential.
A precise version of the spectral theorem in this case is: given a densely defined self-adjoint operator $T$ on a Hilbert space $H$, there corresponds a unique resolution of the identity $E$ on the Borel sets of $\mathbb{R}$, such that
$$\langle Tx, y \rangle = \int_{\mathbb{R}} \lambda \, d\langle E_\lambda x, y \rangle$$
for all $x \in D(T)$ and $y \in H$. This spectral measure $E$ is concentrated on the spectrum of $T$.
There is also a version of the spectral theorem that applies to unbounded normal operators.
In popular culture
In Gravity's Rainbow (1973), a novel by Thomas Pynchon, one of the characters is called "Sammy Hilbert-Spaess", a pun on "Hilbert Space". The novel refers also to Gödel's incompleteness theorems.
External links
Hilbert space at Mathworld
245B, notes 5: Hilbert spaces by Terence Tao
Functional analysis
Linear algebra
Operator theory
Space | Hilbert space | [
"Physics",
"Mathematics"
] | 13,221 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Quantum mechanics",
"Mathematical relations",
"Linear algebra",
"Hilbert spaces",
"Algebra"
] |
20,600,093 | https://en.wikipedia.org/wiki/Sandwich%20theory | Sandwich theory describes the behaviour of a beam, plate, or shell which consists of three layers—two facesheets and one core. The most commonly used sandwich theory is linear and is an extension of first-order beam theory. The linear sandwich theory is of importance for the design and analysis of sandwich panels, which are of use in building construction, vehicle construction, airplane construction and refrigeration engineering.
Some advantages of sandwich construction are:
Sandwich cross-sections are composite. They usually consist of a low to moderate stiffness core which is connected with two stiff exterior facesheets. The composite has a considerably higher shear stiffness to weight ratio than an equivalent beam made of only the core material or the facesheet material. The composite also has a high tensile strength to weight ratio.
The high stiffness of the facesheet leads to a high bending stiffness to weight ratio for the composite.
The behavior of a beam with sandwich cross-section under a load differs from a beam with a constant elastic cross section. If the radius of curvature during bending is large compared to the thickness of the sandwich beam and the strains in the component materials are small, the deformation of a sandwich composite beam can be separated into two parts
deformations due to bending moments or bending deformation, and
deformations due to transverse forces, also called shear deformation.
Sandwich beam, plate, and shell theories usually assume that the reference stress state is one of zero stress. However, during curing, differences of temperature between the facesheets persist because of the thermal separation by the core material. These temperature differences, coupled with different linear expansions of the facesheets, can lead to a bending of the sandwich beam in the direction of the warmer facesheet. If the bending is constrained during the manufacturing process, residual stresses can develop in the components of a sandwich composite. The superposition of a reference stress state on the solutions provided by sandwich theory is possible when the problem is linear. However, when large elastic deformations and rotations are expected, the initial stress state has to be incorporated directly into the sandwich theory.
Engineering sandwich beam theory
In the engineering theory of sandwich beams, the axial strain is assumed to vary linearly over the cross-section of the beam as in Euler-Bernoulli theory, i.e.,
Therefore, the axial stress in the sandwich beam is given by
where is the Young's modulus which is a function of the location along the thickness of the beam. The bending moment in the beam is then given by
The quantity is called the flexural stiffness of the sandwich beam. The shear force is defined as
Using these relations, we can show that the stresses in a sandwich beam with a core of thickness and modulus and two facesheets each of thickness and modulus , are given by
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of engineering sandwich beam stresses
|-
|Since
we can write the axial stress as
The equation of equilibrium for a two-dimensional solid is given by
where is the shear stress. Therefore,
where is a constant of integration.
Therefore,
Let us assume that there are no shear tractions applied to the top face of the sandwich beam. The shear stress in the top facesheet is given by
At , implies that . Then the shear stress at the top of the core, , is given by
Similarly, the shear stress in the core can be calculated as
The integration constant is determined from the continuity of shear stress at the interface of the core and the facesheet. Therefore,
and
|}
For a sandwich beam with identical facesheets and unit width, the value of is
If , then can be approximated as
and the stresses in the sandwich beam can be approximated as
If, in addition, , then
and the approximate stresses in the beam are
If we assume that the facesheets are thin enough that the stresses may be assumed to be constant through the thickness, we have the approximation
Hence the problem can be split into two parts, one involving only core shear and the other involving only bending stresses in the facesheets.
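A minimal numerical sketch of this split, assuming the standard thin-face approximations (the facesheets carry the bending moment as a force couple and the core carries the transverse shear); all input numbers are illustrative:

```python
# Thin-facesheet approximation for a sandwich beam of unit width.
M = 250.0      # bending moment per unit width, N*m/m
Q = 1.2e3      # shear force per unit width, N/m
t_f = 0.5e-3   # facesheet thickness, m
c = 40e-3      # core thickness, m
d = c + t_f    # distance between facesheet mid-planes, m

sigma_face = M / (d * t_f)   # uniform tension/compression in each facesheet, Pa
tau_core = Q / d             # (approximately) uniform shear in the core, Pa

print(f"facesheet normal stress: {sigma_face / 1e6:.1f} MPa")
print(f"core shear stress:       {tau_core / 1e3:.2f} kPa")
```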
Linear sandwich theory
Bending of a sandwich beam with thin facesheets
The main assumptions of linear sandwich theories of beams with thin facesheets are:
the transverse normal stiffness of the core is infinite, i.e., the core thickness in the z-direction does not change during bending
the in-plane normal stiffness of the core is small compared to that of the facesheets, i.e., the core does not lengthen or compress in the x-direction
the facesheets behave according to the Euler-Bernoulli assumptions, i.e., there is no xz-shear in the facesheets and the z-direction thickness of the facesheets does not change
However, the xz shear-stresses in the core are not neglected.
Constitutive assumptions
The constitutive relations for two-dimensional orthotropic linear elastic materials are
The assumptions of sandwich theory lead to the simplified relations
and
The equilibrium equations in two dimensions are
The assumptions for a sandwich beam and the equilibrium equation imply that
Therefore, for homogeneous facesheets and core, the strains also have the form
Kinematics
Let the sandwich beam be subjected to a bending moment and a shear force . Let the total deflection of the beam due to these loads be . The adjacent figure shows that, for small displacements, the total deflection of the mid-surface of the beam can be expressed as the sum of two deflections, a pure bending deflection and a pure shear deflection , i.e.,
From the geometry of the deformation we observe that the engineering shear strain ($\gamma$) in the core is related to the effective shear strain in the composite by the relation
Note the shear strain in the core is larger than the effective shear strain in the composite and that small deformations () are assumed in deriving the above relation. The effective shear strain in the beam is related to the shear displacement by the relation
The facesheets are assumed to deform in accordance with the assumptions of Euler-Bernoulli beam theory. The total deflection of the facesheets is assumed to be the superposition of the deflections due to bending and that due to core shear. The -direction displacements of the facesheets due to bending are given by
The displacement of the top facesheet due to shear in the core is
and that of the bottom facesheet is
The normal strains in the two facesheets are given by
Therefore,
Stress-displacement relations
The shear stress in the core is given by
or,
The normal stresses in the facesheets are given by
Hence,
Resultant forces and moments
The resultant normal force in a face sheet is defined as
and the resultant moments are defined as
where
Using the expressions for the normal stress in the two facesheets gives
In the core, the resultant moment is
The total bending moment in the beam is
or,
The shear force in the core is defined as
where is a shear correction coefficient. The shear force in the facesheets can be computed from the bending moments using the relation
or,
For thin facesheets, the shear force in the facesheets is usually ignored.
Bending and shear stiffness
The bending stiffness of the sandwich beam is given by
From the expression for the total bending moment in the beam, we have
For small shear deformations, the above expression can be written as
Therefore, the bending stiffness of the sandwich beam (with ) is given by
and that of the facesheets is
The shear stiffness of the beam is given by
Therefore, the shear stiffness of the beam, which is equal to the shear stiffness of the core, is
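The stiffnesses and the resulting two-part deflection can be illustrated with a short calculation. The sketch below assumes the standard thin-face formulas for a symmetric sandwich of unit width, $D \approx E_f t_f d^2 / 2$ and $S \approx G_c d^2 / c$, together with the textbook midspan deflection of a simply supported span under uniform load; material values and dimensions are made up:

```python
# Stiffnesses of a symmetric sandwich beam (unit width) and the midspan
# deflection of a simply supported span under a uniform load, split into
# bending and shear parts. Illustrative numbers, SI units.
E_f = 70e9     # facesheet Young's modulus, Pa (e.g. aluminium)
G_c = 20e6     # core shear modulus, Pa (e.g. polymer foam)
t_f = 0.5e-3   # facesheet thickness, m
c = 40e-3      # core thickness, m
d = c + t_f    # distance between facesheet mid-planes, m

D = E_f * t_f * d**2 / 2   # bending stiffness (faces only), N*m^2 per unit width
S = G_c * d**2 / c         # shear stiffness of the core, N per unit width

q, L = 2.0e3, 3.0          # load intensity (N/m for unit width), span (m)
w_bending = 5 * q * L**4 / (384 * D)
w_shear = q * L**2 / (8 * S)
print(f"D = {D:.3e} N*m, S = {S:.3e} N/m")
print(f"midspan deflection: {1e3 * (w_bending + w_shear):.2f} mm "
      f"(bending {1e3 * w_bending:.2f} + shear {1e3 * w_shear:.2f})")
```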
Relation between bending and shear deflections
A relation can be obtained between the bending and shear deflections by using the continuity of tractions between the core and the facesheets. If we equate the tractions directly we get
At both the facesheet-core interfaces but at the top of the core and at the bottom of the core . Therefore, traction continuity at leads to
The above relation is rarely used because of the presence of second derivatives of the shear deflection. Instead it is assumed that
which implies that
Governing equations
Using the above definitions, the governing balance equations for the bending moment and shear force are
We can alternatively express the above as two equations that can be solved for and as
Using the approximations
where is the intensity of the applied load on the beam, we have
Several techniques may be used to solve this system of two coupled ordinary differential equations given the applied load and the applied bending moment and displacement boundary conditions.
Temperature dependent alternative form of governing equations
Assuming that each partial cross section fulfills Bernoulli's hypothesis, the balance of forces and moments on the deformed sandwich beam element can be used to deduce the bending equation for the sandwich beam.
The stress resultants and the corresponding deformations of the beam and of the cross section can be seen in Figure 1. The following relationships can be derived using the theory of linear elasticity:
where
Superposition of the equations for the facesheets and the core leads to the following equations for the total shear force and the total bending moment :
We can alternatively express the above as two equations that can be solved for and , i.e.,
Solution approaches
The bending behavior and stresses in a continuous sandwich beam can be computed by solving the two governing differential equations.
Analytical approach
For simple geometries such as double span beams under uniformly distributed loads, the governing equations can be solved by using appropriate boundary conditions and using the superposition principle. Such results are listed in the standard DIN EN 14509:2006(Table E10.1). Energy methods may also be used to compute solutions directly.
Numerical approach
The differential equations of sandwich continuous beams can be solved by numerical methods such as finite differences and finite elements. For finite differences, Berner recommends a two-stage approach: after solving the differential equation for the normal forces in the cover sheets of a single-span beam under a given load, the energy method can be used to extend the approach to the calculation of multi-span beams. Sandwich continuous beams with flexible cover sheets can also be laid on top of one another when using this technique. However, the cross-section of the beam has to be constant across the spans.
A more specialized approach recommended by Schwarze involves solving for the homogeneous part of the governing equation exactly and for the particular part approximately. Recall that the governing equation for a sandwich beam is
If we define
we get
Schwarze uses the general solution for the homogeneous part of the above equation and a polynomial approximation for the particular solution for sections of a sandwich beam. Interfaces between sections are tied together by matching boundary conditions. This approach has been used in the open source code swe2.
Practical importance
Results predicted by linear sandwich theory correlate well with experimentally determined results. The theory is used as a basis for the structural report which is needed for the construction of large industrial and commercial buildings which are clad with sandwich panels. Its use is explicitly demanded for approvals and in the relevant engineering standards.
Mohammed Rahif Hakmi and others conducted research into the numerical and experimental behavior of materials and into the fire and blast behavior of composite materials. He published multiple research articles:
Local buckling of sandwich panels.
Face buckling stress in sandwich panels.
Post-buckling behaviour of foam-filled thin-walled steel beams.
"Fire resistance of composite floor slabs using a model fire test facility"
Fire-resistant sandwich panels for offshore structures.
Numerical Temperature Analysis of Hygroscopic Panels Exposed to Fire.
Cost Effective Use of Fibre Reinforced Composites Offshore.
Hakmi developed a design method, which had been recommended by the CIB Working Commission W056 Sandwich Panels, ECCS/CIB Joint Committee and has been used in the European recommendations for the design of sandwich panels (CIB, 2000).
See also
Bending
Beam theory
Composite material
Hill yield criterion
Sandwich structured composite
Sandwich plate system
Composite honeycomb
Timoshenko beam theory
Plate theory
Sandwich panel
References
Bibliography
Mohammed Rahif Hakmi
Klaus Berner, Oliver Raabe: Bemessung von Sandwichbauteilen. IFBS-Schrift 5.08, IFBS e.V., Düsseldorf 2006.
Ralf Möller, Hans Pöter, Knut Schwarze: Planen und Bauen mit Trapezprofilen und Sandwichelementen. Band 1, Ernst & Sohn, Berlin 2004, .
External links
Mohammed Rahif Hakmi Research for Sandwich Panels
Institute for Sandwich Technology
https://web.archive.org/web/20081120190919/http://www.diabgroup.com/europe/literature/e_pdf_files/man_pdf/sandwich_hb.pdf DIAB Sandwich Handbook
http://www.swe1.com Programm zur Ermittlung der Schnittgrössen und Spannungen von Sandwich-Wandplatten mit biegeweichen Deckschichten (open source)
http://www.swe2.com Computation of sandwich beams with corrugated faces (open source)
Mechanics
Structural engineering
Composite materials | Sandwich theory | [
"Physics",
"Engineering"
] | 2,722 | [
"Structural engineering",
"Composite materials",
"Construction",
"Materials",
"Civil engineering",
"Mechanics",
"Mechanical engineering",
"Matter"
] |
20,603,347 | https://en.wikipedia.org/wiki/Iodine%20trifluoride | Iodine trifluoride is an interhalogen compound with the chemical formula IF3. It is a yellow solid which decomposes above −28 °C. It can be synthesised from the elements, but care must be taken to avoid the formation of IF5.
Reactions
F2 reacts with I2 to yield IF3 at −45 °C in CCl3F. Alternatively, at low temperatures, the fluorination reaction I2 + 3XeF2 → 2IF3 + 3Xe can be used. Not much is known about iodine trifluoride as it is so unstable.
Structure
The iodine atom of iodine trifluoride has five electron pairs, of which two are lone-pairs, and the molecule is T-shaped as predicted by VSEPR Theory.
References
Fluorides
Interhalogen compounds
Iodine compounds | Iodine trifluoride | [
"Chemistry"
] | 176 | [
"Interhalogen compounds",
"Fluorides",
"Oxidizing agents",
"Salts"
] |
20,604,577 | https://en.wikipedia.org/wiki/Emulsion%20dispersion | An emulsion dispersion is thermoplastics or elastomers suspended in a liquid state by means of emulsifiers.
Preparation
Emulsions are thermodynamically unstable liquid/liquid dispersions that are kinetically stabilized, typically by emulsifiers.
Emulsion dispersion does not refer to reactor blends, for which one polymer is polymerized from its monomer in the presence of the other polymers; rather, it is a method of choice for the preparation of homogeneous blends of a thermoplastic and an elastomer. In an emulsion dispersion system, well-defined polymer droplets may be obtained by using water as the dispersing medium. The surfactant molecules adsorb on the surface of the emulsion droplets, creating a dispersion of droplets, which reduces interfacial tension and retards particle flocculation during mixing. Surfactant molecules have polar and non-polar parts and therefore act as an intermediary between polar and non-polar polymers; the intermolecular interactions between the polar and the non-polar polymer segments resemble the macroscopic hydrocarbon–water interface. The idea of emulsion dispersion was inspired by the emulsification of liquid natural rubber (LNR): particle size analysis and optical microscopy showed that the droplet size of an LNR emulsion of higher molecular weight is greater than that of one of lower molecular weight. Emulsion dispersion was able to produce homogeneous low-density polyethylene (LDPE)/LNR blends and nylon 6/LNR blends. Differential scanning calorimetry (DSC) thermograms indicated a single glass transition temperature (Tg), showing that the blends were compatible, and scanning electron microscopy (SEM) micrographs showed no phase separation between blend components. In addition, exfoliated HDPE/LNR/montmorillonite nanocomposites were successfully prepared using the emulsion dispersion technique as well.
References
Further reading
Colloidal chemistry | Emulsion dispersion | [
"Chemistry"
] | 421 | [
"Colloidal chemistry",
"Surface science",
"Colloids"
] |
20,604,781 | https://en.wikipedia.org/wiki/Executive%20Order%2013128 | Executive Order 13128 is a United States executive order (EO) issued by Bill Clinton in 1999. It authorized the Departments of State and Commerce to create regulations regarding the implementation of the Chemical Weapons Convention.
Background
The United States Senate ratified U.S. participation in the Chemical Weapons Convention (CWC) on April 25, 1997. On October 25, 1998 the U.S. Congress passed the Chemical Weapons Implementation Act of 1998, legislation which formally implemented the treaty's many provisions. Among those provisions were requirements for signatories to develop new regulations to deal with the transfer of chemicals and technologies that can be used for chemical warfare purposes.
Order
Executive Order 13128 was signed by then-U.S. President Bill Clinton on June 25, 1999. EO 13128 partially implemented the CWC; treaties can be, and sometimes are, implemented in part by executive order. With its signing, the order established the U.S. Department of State as the lead national agency for coordinating implementation of the provisions of both the CWC and the 1998 law among the various branches and agencies of the federal government. The executive order also authorized the U.S. Department of Commerce to establish regulations, obtain and execute warrants, provide assistance to certain facilities, and carry out other functions consistent with the CWC and the 1998 act.
Results
The Department of Commerce published an interim rule on December 30, 1999, through the Bureau of Industry and Security (BIS), which established the Chemical Weapons Convention Regulations (CWCR). The CWCR implemented all provisions of the CWC which affected U.S. persons and industry. The BIS rule was published after extensive comments from the U.S. Chemical Manufacturers Association and others. The U.S Department of State issued its own regulations which dealt with taking samples at chemical weapons sites as well as criminal and civil punishments for violation of the provisions of the CWC.
See also
Executive Order 11850
Geneva Protocol
Statement on Chemical and Biological Defense Policies and Programs
References
External links
Chemical Weapons Convention Implementation Act of 1998, full text
Chemical weapons demilitarization
13128 | Executive Order 13128 | [
"Chemistry"
] | 425 | [
"Chemical weapons demilitarization",
"Chemical weapons"
] |
15,482,602 | https://en.wikipedia.org/wiki/Anomalous%20photovoltaic%20effect | The anomalous photovoltaic effect (APE) is a type of a photovoltaic effect which occurs in certain semiconductors and insulators. The "anomalous" refers to those cases where the photovoltage (i.e., the open-circuit voltage caused by the light) is larger than the band gap of the corresponding semiconductor. In some cases, the voltage may reach thousands of volts.
Although the voltage is unusually high, the short-circuit current is unusually low. Overall, materials that exhibit the anomalous photovoltaic effect have very low power generation efficiencies, and are never used in practical power-generation systems.
There are several situations in which APE can arise.
First, in polycrystalline materials, each microscopic grain can act as a photovoltaic. Then the grains add in series, so that the overall open-circuit voltage across the sample is large, potentially much larger than the bandgap.
Second, in a similar manner, certain ferroelectric materials can develop stripes consisting of parallel ferroelectric domains, where each domain acts like a photovoltaic and each domain wall acts like a contact connecting the adjacent photovoltaics (or vice versa). Again, domains add in series, so that the overall open-circuit voltage is large.
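A back-of-the-envelope sketch of this series addition (all numbers hypothetical): even a small photovoltage per junction yields a very large open-circuit voltage once many junctions, whether grain boundaries or domain walls, are crossed.

```python
# Series-addition estimate: if each of N microscopic junctions (grains or
# ferroelectric domains) contributes a small photovoltage, the open-circuit
# voltages add along the sample. Illustrative numbers only.
v_per_junction = 0.01    # volts per junction (hypothetical)
junctions_per_cm = 1e5   # junction density along the film (hypothetical)
length_cm = 1.0

v_total = v_per_junction * junctions_per_cm * length_cm
print(f"estimated open-circuit photovoltage: {v_total:.0f} V")  # 1000 V
```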
Third, a perfect single crystal with a non-centrosymmetric structure can develop a giant photovoltage. This is specifically called the bulk photovoltaic effect (BPV effect), and occurs because of non-centrosymmetry. Specifically, the electron processes—photo-excitation, scattering, and relaxation—occur with different probabilities for electron motion in one direction versus the opposite direction.
The compound α-In2Se3 can be made to exhibit the bulk photovoltaic effect and outperform traditional solar cells.
Series-sum of grains in a polycrystal
History
This effect was discovered by Starkiewicz et al. in 1946 on PbS films and was later observed on other semiconducting polycrystalline films including CdTe, silicon, germanium, ZnTe and InP, as well as on amorphous silicon films and in nanocrystalline silicon systems. Observed photovoltages were found to reach hundreds, and in some cases even thousands of volts. The films in which this effect was observed were generally thin semiconducting films that were deposited by vacuum evaporation onto a heated insulating substrate, held at an angle with respect to the direction of the incident vapor. However, the photovoltage was found to be very sensitive to the conditions and procedure under which the samples were prepared. This made it difficult to get reproducible results, which is probably the reason why no satisfactory model for it has been accepted thus far. Several models were, however, suggested to account for the extraordinary phenomenon, and they are briefly outlined below.
The oblique deposition can lead to several structure asymmetries in the films. Among the first attempts to explain the APE were few that treated the film as a single entity, such as considering the variation of sample thickness along its length or a non-uniform distribution of electron traps. However, studies that followed generally supported models that explain the effect as resulting from a series of microelements contributing additively to the net photovoltage. The more popular models used to explain the photovoltage are reviewed below.
The Photo–Dember effect
When photogenerated electrons and holes have different mobilities, a potential difference can be created between the illuminated and non-illuminated faces of a semiconductor slab. Generally this potential is created through the depth of the slab, whether it is a bulk semiconductor or a polycrystalline film. The difference between these cases is that in the latter, a photovoltage can be created in each one of the microcrystallites. As was mentioned above, in the oblique deposition process inclined crystallites are formed in which one face can absorb light more than the other. This may cause a photovoltage to be generated along the film, as well as through its depth. The transfer of carriers at the surface of crystallites is assumed to be hindered by the presence of some unspecified layer with different properties, thus cancellation of consecutive Dember voltages is being prevented. To explain the polarity of the PV which is independent of the illumination direction one must assume that there exists a large difference in recombination rates at opposite faces of a crystallite, which is a weakness of this model.
The structure transition model
This model suggests that when a material crystallizes both in cubic and hexagonal structures, an asymmetric barrier can be formed by a residual dipole layer at the interface between the two structures. A potential barrier is formed due to a combination of the band gap difference and the electric fields produced at the interface. One should remember that this model can be invoked to explain anomalous PV effect only in those materials that can demonstrate two types of crystal structure.
The p-n junction model
It was suggested by Starkiewicz that the anomalous PV is developed due to a distribution gradient of positive and negative impurity ions through the microcrystallites, with an orientation such as to give a non-zero total photovoltage. This is equivalent to an array of p-n junctions. However, the mechanism by which such p-n junctions may be formed was not explained.
The surface photovoltage model
The interface between crystallites may contain traps for charge carriers. This may lead to a surface charge and an opposite space charge region in the crystallites, in case that the crystallites are small enough. Under illumination of the inclined crystallites electron-hole pairs are generated and cause a compensation of the charge in the surface and within the crystallites. If it is assumed that the optical absorption depth is much less than the space charge region in the crystallites, then, because of their inclined shape more light is absorbed in one side than in the other. Thus a difference in the reduction of the charge is created between the two sides. This way a photovoltage parallel to the surface is developed in each crystallite.
Bulk photovoltaic effect in a non-centrosymmetric single crystal
A perfect single crystal with a non-centrosymmetric structure can develop a giant photovoltage. This is specifically called the bulk photovoltaic effect, and occurs because of non-centrosymmetry. The electron processes like photo-excitation, scattering, and relaxation may occur with different probabilities for electrons moving one direction versus the opposite direction.
This effect was first discovered in the 1960s. It has been observed in lithium niobate (LiNbO3), barium titanate (BaTiO3) and many other materials.
Theoretical calculations using density functional theory or other methods can predict the extent to which a material will exhibit the bulk photovoltaic effect.
Simple example
Shown at right is an example of a simple system that would exhibit the bulk photovoltaic effect. There are two electronic levels per unit cell, separated by a large energy gap, say 3 eV. The blue arrows indicate radiative transitions, i.e. an electron can absorb a UV photon to go from A to B, or it can emit a UV photon to go from B to A. The purple arrows indicate nonradiative transitions, i.e. an electron can go from B to C by emitting many phonons, or can go from C to B by absorbing many phonons.
When light is shining, an electron will occasionally move to the right, in response to the time-varying electric field of light, by absorbing a photon and going from A to B to C. However, it will almost never move in the reverse direction, C to B to A, because the transition from C to B cannot be excited by photons and instead requires an improbably large thermal fluctuation. Therefore, there is a net rightward photocurrent.
Because the electrons undergo a "shift" each time they absorb a photon (on average), this dc photocurrent with amplitude proportional to the square of the applied field is sometimes called a "shift current".
Distinguishing features
There are several aspects of the bulk photovoltaic effect that distinguish it from other kinds of effects: In the power-generating region of the I-V curve (between open-circuit and short-circuit), electrons are moving in the opposite direction that you would expect from the drift-diffusion equation, i.e. electrons are moving towards higher fermi level or holes are moving towards lower fermi level. This is unusual: For example, in a normal silicon solar cell, electrons move in the direction of decreasing electron-quasi-fermi level, and holes move in the direction of increasing hole-quasi-fermi-level, consistent with the drift-diffusion equation. Power generation is possible only because the quasi-fermi-levels are split. A bulk photovoltaic, by contrast, can generate power without any splitting of quasi-fermi-levels.
This also explains why large open-circuit voltages tend to be seen only in crystals that (in the dark) have very low conductivity: Any electrons that can freely move through the crystal (i.e., not requiring photons to move) will follow the drift-diffusion equation, which means that these electrons will subtract from the photocurrent and reduce the photovoltaic effect.
Each time one electron absorbs one photon (in the power-generating region of the I-V curve), the resulting electron displacement is, on average, at most one or two unit cells or mean-free-paths (this displacement is sometimes called the "anisotropy distance"). This is required because if an electron is excited into a mobile, delocalized state, and then it scatters a few times, then its direction is now randomized and it will naturally start following the drift-diffusion equation. However, in the bulk photovoltaic effect, the desired net electron motion is opposite the direction predicted by the drift-diffusion equation.
For example, it might be the case that when an electron absorbs a photon, it is disproportionately likely to wind up in a state where it is moving leftward. And perhaps each time a photon excites an electron, the electron moves leftward a bit and then immediately relaxes into ("gets stuck in") an immobile state, until it absorbs another photon and the cycle repeats. In this situation, a leftward electron current is possible despite an electric field pushing electrons in the opposite direction. However, if a photoexcited electron does not quickly relax back to an immobile state, but instead keeps moving around the crystal and scattering randomly, then it will eventually "forget" that it was moving left, and it will wind up being pulled rightward by the electric field. Again, the total leftward motion of an electron, per photon absorbed, cannot be much larger than the mean free path.
A consequence is that the quantum efficiency of a thick device is extremely low. It may require millions of photons to bring a single electron from one electrode to the other. As the thickness increases, the current goes down as much as the voltage goes up.
In some cases, the current has a different sign depending on the light polarization. This would not occur in an ordinary solar cell like silicon.
Applications
The bulk photovoltaic effect is believed to play a role in the photorefractive effect in lithium niobate.
See also
Semiconductors
Photovoltaic effect
Virtual particle
References
Semiconductors
Energy conversion | Anomalous photovoltaic effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,427 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
15,483,838 | https://en.wikipedia.org/wiki/Borel%20determinacy%20theorem | In descriptive set theory, the Borel determinacy theorem states that any Gale–Stewart game whose payoff set is a Borel set is determined, meaning that one of the two players will have a winning strategy for the game. A Gale–Stewart game is a possibly infinite two-player game, where both players have perfect information and no randomness is involved.
The theorem is a far reaching generalization of Zermelo's theorem about the determinacy of finite games. It was proved by Donald A. Martin in 1975, and is applied in descriptive set theory to show that Borel sets in Polish spaces have regularity properties such as the perfect set property.
The theorem is also known for its metamathematical properties. In 1971, before the theorem was proved, Harvey Friedman showed that any proof of the theorem in Zermelo–Fraenkel set theory must make repeated use of instances of the axiom schema of replacement. Later results showed that stronger determinacy theorems cannot be proven in Zermelo–Fraenkel set theory, although they are relatively consistent with it, if certain large cardinals are consistent.
Background
Gale–Stewart games
A Gale–Stewart game is a two-player game of perfect information. The game is defined using a set A, and is denoted GA. The two players alternate turns, and each player is aware of all moves before making the next one. On each turn, each player chooses a single element of A to play. The same element may be chosen more than once without restriction. The play can be visualized as a sequence written from left to right, with the moves a0, a2, a4, … of player I alternating with the moves a1, a3, a5, … of player II.
The play continues without end, so that a single play of the game determines an infinite sequence of elements of A. The set of all such sequences is denoted Aω. The players are aware, from the beginning of the game, of a fixed payoff set (a.k.a. winning set) that will determine who wins. The payoff set is a subset of Aω. If the infinite sequence created by a play of the game is in the payoff set, then player I wins. Otherwise, player II wins; there are no ties.
This definition initially does not seem to include traditional perfect information games such as chess, since the set of moves available in such games changes every turn. However, this sort of case can be handled by declaring that a player who makes an illegal move loses immediately, so that the Gale–Stewart notion of a game does in fact generalize the concept of a game defined by a game tree.
Winning strategies
A winning strategy for a player is a function that tells the player what move to make from any position in the game, such that if the player follows the function they will surely win. More specifically, a winning strategy for player I is a function f that takes as input sequences of elements of A of even length and returns an element of A, such that player I will win every play of the form

a0 = f(〈 〉), a1, a2 = f(〈 a0, a1 〉), a3, a4 = f(〈 a0, a1, a2, a3 〉), …,

in which the moves a1, a3, a5, … of player II are arbitrary.

A winning strategy for player II is a function g that takes odd-length sequences of elements of A and returns elements of A, such that player II will win every play of the form

a0, a1 = g(〈 a0 〉), a2, a3 = g(〈 a0, a1, a2 〉), …,

in which the moves a0, a2, a4, … of player I are arbitrary.
At most one player can have a winning strategy; if both players had winning strategies, and played the strategies against each other, only one of the two strategies could win that play of the game. If one of the players has a winning strategy for a particular payoff set, that payoff set is said to be determined.
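Determinacy is easy to see for games that end after finitely many moves (Zermelo's theorem); the hard content of Borel determinacy is the infinite case. The following sketch, under the simplifying assumption of a game of fixed finite length over a finite set A, decides by backward induction which player has a winning strategy; the function name and the example payoff set are illustrative, not from the literature.

```python
from itertools import product

def player_I_wins(position, A, length, payoff):
    """Return True iff player I can force the completed play into `payoff`
    from `position`, with both players playing optimally.
    Player I moves at even indices, player II at odd indices."""
    if len(position) == length:                      # play is complete
        return tuple(position) in payoff
    moves = [player_I_wins(position + [a], A, length, payoff) for a in A]
    if len(position) % 2 == 0:                       # player I to move
        return any(moves)                            # I needs one good move
    else:                                            # player II to move
        return all(moves)                            # I must survive every reply

# Toy example: A = {0, 1}, plays of length 4; player I wins iff the
# play contains at least two 1s (an arbitrary illustrative payoff set).
A, length = [0, 1], 4
payoff = {p for p in product(A, repeat=length) if sum(p) >= 2}
print("Player I has a winning strategy:", player_I_wins([], A, length, payoff))
```

Here player I wins: playing 1 on both of their turns guarantees at least two 1s regardless of player II's replies, and the backward induction confirms this.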
Topology
For a given set A, whether a subset of Aω will be determined depends to some extent on its topological structure. For the purposes of Gale–Stewart games, the set A is endowed with the discrete topology, and Aω endowed with the resulting product topology, where Aω is viewed as a countably infinite topological product of A with itself. In particular, when A is the set {0,1}, the topology defined on Aω is exactly the ordinary topology on Cantor space, and when A is the set of natural numbers, it is the ordinary topology on Baire space.
The set Aω can be viewed as the set of paths through a certain tree, which leads to a second characterization of its topology. The tree consists of all finite sequences of elements of A, and the children of a particular node σ of the tree are exactly the sequences that extend σ by one element. Thus if A = { 0, 1 }, the first level of the tree consists of the sequences 〈 0 〉 and 〈 1 〉; the second level consists of the four sequences 〈 0, 0 〉, 〈 0, 1 〉, 〈 1, 0 〉, 〈 1, 1 〉; and so on. For each of the finite sequences σ in the tree, the set of all elements of Aω that begin with σ is a basic open set in the topology on A. The open sets of Aω are precisely the sets expressible as unions of these basic open sets. The closed sets, as usual, are those whose complement is open.
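To make the tree picture concrete, the following sketch (illustrative, with A = {0, 1}) enumerates the finite sequences at each tree level; the basic open set determined by a node σ is the "cylinder" of all infinite sequences extending σ.

```python
from itertools import product

A = (0, 1)

# Level n of the tree: all sequences of elements of A of length n.
for n in range(3):
    print(f"level {n}:", [tuple(s) for s in product(A, repeat=n)])

def in_basic_open_set(sigma, x_prefix):
    """A point x of A^omega lies in the basic open set determined by sigma
    iff x begins with sigma; for an infinite x we can only ever need to
    inspect a finite prefix of it."""
    return tuple(x_prefix[:len(sigma)]) == tuple(sigma)

# The sequence starting 1, 0, 0, ... lies in the cylinder of <1, 0>.
print(in_basic_open_set((1, 0), (1, 0, 0, 0, 0)))   # True
```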
The Borel sets of Aω are the smallest class of subsets of Aω that includes the open sets and is closed under complement and countable union. That is, the Borel sets are the smallest σ-algebra of subsets of Aω containing all the open sets. The Borel sets are classified in the Borel hierarchy based on how many times the operations of complement and countable union are required to produce them from open sets.
Previous results
Gale and Stewart (1953) proved that if the payoff set is an open or closed subset of Aω then the Gale–Stewart game with that payoff set is always determined. Over the next twenty years, this was extended to slightly higher levels of the Borel hierarchy through ever more complicated proofs. This led to the question of whether the game must be determined whenever the payoff set is a Borel subset of Aω. It was known that, using the axiom of choice, it is possible to construct a subset of {0,1}ω that is not determined (Kechris 1995, p. 139).
Harvey Friedman (1971) proved that any proof that all Borel subsets of Cantor space ({0,1}ω ) were determined would require repeated use of instances of the axiom schema of replacement, an axiom not typically required to prove theorems about "small" structures such as Cantor space that are not explicitly "set-theoretic" (that is, constructed for the purposes of exploring axiomatic set theory).
Borel determinacy
Donald A. Martin (1975) proved that for any set A, all Borel subsets of Aω are determined. Because the original proof was quite complicated, Martin published a shorter proof in 1982 that did not require as much technical machinery. In his review of Martin's paper, Drake describes the second proof as "surprisingly straightforward."
The field of descriptive set theory studies properties of Polish spaces (essentially, complete separable metric spaces). The Borel determinacy theorem has been used to establish many properties of Borel subsets of these spaces. For example, all analytic subsets of Polish spaces have the perfect set property, the property of Baire, and are Lebesgue measurable. However, the last two properties can be more easily proved without using Borel determinacy, by showing that the σ-algebras of measurable sets or sets with the Baire property are closed under the Suslin operation.
Set-theoretic aspects
The Borel determinacy theorem is of interest for its metamathematical properties as well as its consequences in descriptive set theory.
Determinacy of closed sets of Aω for arbitrary A is equivalent to the axiom of choice over ZF (Kechris 1995, p. 139). When working in set-theoretical systems where the axiom of choice is not assumed, this can be circumvented by considering generalized strategies known as quasistrategies (Kechris 1995, p. 139) or by only considering games where A is the set of natural numbers, as in the axiom of determinacy.
Zermelo set theory (Z) is (roughly) Zermelo–Fraenkel set theory without the axiom schema of replacement. One way it differs from ZF is that Z does not prove that the power set operation can be iterated infinitely many times beginning with an arbitrary infinite set. In particular, Vω+ω, a particular level of the cumulative hierarchy with countable rank, is a model of Zermelo set theory. The axiom schema of replacement, on the other hand, is only satisfied by Vκ for significantly larger values of κ, such as when κ is a strongly inaccessible cardinal. Friedman's theorem of 1971 showed that there is a model of Zermelo set theory (with the axiom of choice) in which Borel determinacy fails, and thus Zermelo set theory alone cannot prove the Borel determinacy theorem.
The existence of all beth numbers of countable index is sufficient to prove the Borel determinacy theorem.
Stronger forms of determinacy
Several set-theoretic principles about determinacy stronger than Borel determinacy are studied in descriptive set theory. They are closely related to large cardinal axioms.
The axiom of projective determinacy states that all projective subsets of a Polish space are determined. It is known to be unprovable in ZFC but relatively consistent with it and implied by certain large cardinal axioms. The existence of a measurable cardinal is enough to imply over ZFC the result that all analytic subsets of Polish spaces are determined, which is weaker than full projective determinacy.
The axiom of determinacy states that all subsets of all Polish spaces are determined. It is inconsistent with ZFC but in ZF + DC (Zermelo–Fraenkel set theory plus the axiom of dependent choice) it is equiconsistent with certain large cardinal axioms.
References
L. Bukovský, reviewer, Mathematical Reviews.
S. Sherman, reviewer, Mathematical Reviews.
John Burgess, reviewer, Mathematical Reviews.
F. R. Drake, reviewer, Mathematical Reviews.
External links
Borel determinacy and metamathematics. Ross Bryant. Master's thesis, University of North Texas, 2001.
"Large Cardinals and Determinacy" at the Stanford Encyclopedia of Philosophy
Determinacy
Theorems in the foundations of mathematics | Borel determinacy theorem | [
"Mathematics"
] | 2,166 | [
"Mathematical theorems",
"Foundations of mathematics",
"Mathematical logic",
"Game theory",
"Determinacy",
"Mathematical problems",
"Theorems in the foundations of mathematics"
] |
15,488,971 | https://en.wikipedia.org/wiki/Goldstine%20theorem | In functional analysis, a branch of mathematics, the Goldstine theorem, named after Herman Goldstine, is stated as follows:
Goldstine theorem. Let X be a Banach space, then the image of the closed unit ball B ⊆ X under the canonical embedding into the closed unit ball B′′ of the bidual space X′′ is a weak*-dense subset.
The conclusion of the theorem is not true for the norm topology, which can be seen by considering the Banach space c0 of real sequences that converge to zero and its bi-dual space ℓ∞.
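A concrete numerical illustration of this density (a sketch, not part of the standard proof): the constant sequence (1, 1, 1, …) lies in the unit ball of ℓ∞ = c0′′ but at norm distance 1 from c0, yet its truncations, which lie in the unit ball of c0, converge to it in the weak* topology, i.e. when paired against every functional in ℓ¹ = c0′.

```python
# Weak*-approximation of x'' = (1,1,1,...) in the unit ball of l^infty
# by truncations x_N = (1,...,1,0,0,...) in the unit ball of c_0.
# Functionals on l^infty = c_0'' pair with phi in l^1 via sum(phi_k * x_k).

phi = [2.0 ** (-k) for k in range(1, 60)]   # an arbitrary element of l^1

def pairing(x, phi):
    return sum(p * xk for p, xk in zip(phi, x))

target = sum(phi)                           # x''(phi) with x'' = (1,1,1,...)

for N in (1, 5, 10, 20):
    x_N = [1.0] * N + [0.0] * (len(phi) - N)   # truncation, sup-norm <= 1
    print(N, abs(pairing(x_N, phi) - target))  # -> 0 as N grows

# Yet the norm distance never shrinks: sup_k |x''_k - (x_N)_k| = 1 for
# every N, matching the failure of norm-density noted above.
```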
Proof
Lemma
For all x′′ ∈ B′′, φ1, …, φn ∈ X′ and δ > 0, there exists an x ∈ (1 + δ)B such that φi(x) = x′′(φi) for all 1 ≤ i ≤ n.
Proof of lemma
By the surjectivity of the map

Φ : X → ℂn, x ↦ (φ1(x), …, φn(x)),

it is possible to find x ∈ X with φi(x) = x′′(φi) for 1 ≤ i ≤ n.

Now let Y := ⋂i ker φi = ker Φ.

Every element z ∈ (x + Y) ∩ (1 + δ)B satisfies z ∈ (1 + δ)B and φi(z) = φi(x) = x′′(φi), and so it suffices to show that the intersection is nonempty.

Assume for contradiction that it is empty. Then dist(x, Y) ≥ 1 + δ and by the Hahn–Banach theorem there exists a linear form φ ∈ X′ such that φ vanishes on Y, φ(x) = dist(x, Y) and ‖φ‖ = 1. Then φ lies in the span of φ1, …, φn, and therefore

1 + δ ≤ φ(x) = x′′(φ) ≤ ‖φ‖ ‖x′′‖ ≤ 1,

which is a contradiction.
Proof of theorem
Fix x′′ ∈ B′′, φ1, …, φn ∈ X′ and ε > 0. Examine the set

U := { y′′ ∈ X′′ : |(x′′ − y′′)(φi)| < ε, 1 ≤ i ≤ n }.

Let J : X → X′′ be the embedding defined by J(x) = Evx, where Evx(φ) = φ(x) is the evaluation-at-x map. Sets of the form U form a base for the weak* topology, so density follows once it is shown that J(B) ∩ U ≠ ∅ for all such U. The lemma above says that for any δ > 0 there exists an x ∈ (1 + δ)B such that x′′(φi) = φi(x) for 1 ≤ i ≤ n, and in particular Evx ∈ U. Since J(B) ⊆ B′′, we have Evx ∈ J((1 + δ)B) ∩ U. We can scale to get (1/(1 + δ)) Evx ∈ J(B). The goal is to show that for a sufficiently small δ > 0, we have (1/(1 + δ)) Evx ∈ J(B) ∩ U.

Directly checking, one has

|[x′′ − (1/(1 + δ)) Evx](φi)| = |φi(x) − (1/(1 + δ)) φi(x)| = (δ/(1 + δ)) |φi(x)|.

Note that one can choose M sufficiently large so that ‖φi‖ ≤ M for all 1 ≤ i ≤ n, and note as well that ‖x‖ ≤ 1 + δ. If one chooses δ so that δM < ε, then

(δ/(1 + δ)) |φi(x)| ≤ (δ/(1 + δ)) ‖φi‖ ‖x‖ ≤ δ ‖φi‖ ≤ δM < ε.

Hence one gets (1/(1 + δ)) Evx ∈ J(B) ∩ U, as desired.
See also
References
Banach spaces
Theorems in functional analysis
"Mathematics"
] | 342 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
8,014,489 | https://en.wikipedia.org/wiki/Mixmaster%20universe | The Mixmaster universe (named after Sunbeam Mixmaster, a brand of Sunbeam Products electric kitchen mixer) is a solution to Einstein field equations of general relativity studied by Charles Misner in 1969 in an effort to better understand the dynamics of the early universe. He hoped to solve the horizon problem in a natural way by showing that the early universe underwent an oscillatory, chaotic epoch.
Discussion
The model is similar to the closed Friedmann–Lemaître–Robertson–Walker universe, in that spatial slices are positively curved and are topologically three-spheres S3. However, in the FRW universe, the S3 can only expand or contract: the only dynamical parameter is the overall size of the S3, parameterized by the scale factor a(t). In the Mixmaster universe, the S3 can expand or contract, but also distort anisotropically. Its evolution is described by a scale factor a(t) as well as by two shape parameters β±(t). Values of the shape parameters describe distortions of the S3 that preserve its volume and also maintain a constant Ricci curvature scalar. Therefore, as the three parameters a(t), β±(t) assume different values, homogeneity but not isotropy is preserved.
The model has a rich dynamical structure. Misner showed that the shape parameters act like the coordinates of a point mass moving in a triangular potential with steeply rising walls with friction. By studying the motion of this point, Misner showed that the physical universe would expand in some directions and contract in others, with the directions of expansion and contraction changing repeatedly. Because the potential is roughly triangular, Misner suggested that the evolution is chaotic.
Metric
The metric studied by Misner (very slightly modified from his notation) is given by

ds² = −dt² + L1²(t) σ1 ⊗ σ1 + L2²(t) σ2 ⊗ σ2 + L3²(t) σ3 ⊗ σ3,

where the σk, considered as differential forms, are defined by

σ1 = sin ψ dθ − cos ψ sin θ dφ,
σ2 = cos ψ dθ + sin ψ sin θ dφ,
σ3 = −dψ − cos θ dφ,

in terms of the coordinates (ψ, θ, φ). These satisfy

dσi = ½ εijk σj ∧ σk (summation implied),

where d is the exterior derivative and ∧ the wedge product of differential forms. The 1-forms σk form a left-invariant co-frame on the Lie group SU(2), which is diffeomorphic to the 3-sphere S3, so the spatial metric in Misner's model can concisely be described as just a left-invariant metric on the 3-sphere; indeed, up to the adjoint action of SU(2), this is actually the general left-invariant metric. As the metric evolves via Einstein's equation, the geometry of this S3 typically distorts anisotropically. Misner defines parameters Ω(t) and R(t) which measure the volume of spatial slices, as well as "shape parameters" βk, by

R(t) = e^(−Ω(t)) = (L1 L2 L3)^(1/3), βk = ln(Lk / R).

Since there is one condition β1 + β2 + β3 = 0 on the three βk, there should only be two free functions, which Misner chooses to be β±, defined as

β+ = (β1 + β2)/2 = −β3/2, β− = (β1 − β2)/(2√3).

The evolution of the universe is then described by finding β± as functions of Ω.
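A minimal numerical sketch of this parametrization (the values are arbitrary illustrations): given Ω and the two shape parameters β±, the three directional scale factors are recovered as Lk = e^(−Ω + βk), and the product L1 L2 L3 = e^(−3Ω) depends on Ω alone, showing that β± are volume-preserving shape degrees of freedom.

```python
import math

def scale_factors(Omega, beta_plus, beta_minus):
    """Directional scale factors L_k from Misner's (Omega, beta_+, beta_-)."""
    beta = (beta_plus + math.sqrt(3) * beta_minus,   # beta_1
            beta_plus - math.sqrt(3) * beta_minus,   # beta_2
            -2.0 * beta_plus)                        # beta_3, so sum(beta) = 0
    return [math.exp(-Omega + b) for b in beta]

# Arbitrary illustrative values:
L = scale_factors(Omega=0.5, beta_plus=0.3, beta_minus=-0.1)
print(L)
# The volume factor is independent of the shape parameters:
print(L[0] * L[1] * L[2], math.exp(-3 * 0.5))   # both ~0.2231
```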
Applications to cosmology
Misner hoped that the chaos would churn up and smooth out the early universe. Also, during periods in which one direction was static (e.g., going from expansion to contraction) formally the Hubble horizon in that direction is infinite, which he suggested meant that the horizon problem could be solved. Since the directions of expansion and contraction varied, presumably given enough time the horizon problem would get solved in every direction.
While an interesting example of gravitational chaos, it is widely recognized that the cosmological problems the Mixmaster universe attempts to solve are more elegantly tackled by cosmic inflation. The metric Misner studied is also known as the Bianchi type IX metric.
See also
Bianchi classification
BKL singularity
References
Exact solutions in general relativity
Chaotic maps | Mixmaster universe | [
"Mathematics"
] | 706 | [
"Exact solutions in general relativity",
"Functions and mappings",
"Mathematical objects",
"Equations",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
8,019,439 | https://en.wikipedia.org/wiki/Functional%20response | A functional response in ecology is the intake rate of a consumer as a function of food density (the amount of food available in a given ecotope). It is associated with the numerical response, which is the reproduction rate of a consumer as a function of food density. Following C. S. Holling, functional responses are generally classified into three types, which are called Holling's type I, II, and III.
Type I
The type I functional response assumes a linear increase in intake rate with food density, either for all food densities, or only for food densities up to a maximum, beyond which the intake rate is constant. The linear increase assumes that the time needed by the consumer to process a food item is negligible, or that consuming food does not interfere with searching for food. A functional response of type I is used in the Lotka–Volterra predator–prey model. It was the first kind of functional response described and is also the simplest of the three functional responses currently detailed.
Type II
The type II functional response is characterized by a decelerating intake rate, which follows from the assumption that the consumer is limited by its capacity to process food. Type II functional response is often modeled by a rectangular hyperbola, for instance by Holling's disc equation, which assumes that processing of food and searching for food are mutually exclusive behaviors. The equation is

f(R) = aR / (1 + ahR),

where f denotes intake rate and R denotes food (or resource) density. The rate at which the consumer encounters food items per unit of food density is called the attack rate, a. The average time spent on processing a food item is called the handling time, h. Similar equations are the Monod equation for the growth of microorganisms and the Michaelis–Menten equation for the rate of enzymatic reactions.
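The saturation behaviour can be read directly off the disc equation: at low density f ≈ aR, while as R grows the intake rate approaches the ceiling 1/h set by handling time. A small sketch (parameter values are arbitrary illustrations; the type III form is described further below):

```python
def holling_type_II(R, a=0.5, h=2.0):
    """Holling's disc equation: intake rate as a function of resource
    density R. a = attack rate, h = handling time; maximum rate is 1/h."""
    return a * R / (1.0 + a * h * R)

def holling_type_III(R, a=0.5, h=2.0, k=2):
    """Sigmoid (type III) response: substituting R**k for R gives the
    accelerating low-density behaviour of the type III response."""
    return a * R**k / (1.0 + a * h * R**k)

for R in (0.1, 1.0, 10.0, 100.0):
    print(f"R={R:6.1f}  type II={holling_type_II(R):.3f}  "
          f"type III={holling_type_III(R):.3f}")
# Both responses approach 1/h = 0.5 as R grows; the type III curve rises
# superlinearly at low R, while the type II curve rises linearly.
```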
In an example with wolves and caribou, as the number of caribou increases while holding wolves constant, the number of caribou kills increases and then levels off. This is because the proportion of caribou killed per wolf decreases as caribou density increases. The higher the density of caribou, the smaller the proportion of caribou killed per wolf. Explained slightly differently, at very high caribou densities, wolves need very little time to find prey and spend almost all their time handling prey and very little time searching. Wolves are then satiated and the total number of caribou kills reaches a plateau.
Type III
The type III functional response is similar to type II in that saturation occurs at high levels of prey density. At low prey density levels, however, the number of prey consumed by predators is a superlinearly increasing function of prey density:

f(R) = aR² / (1 + ahR²).

This accelerating function was originally formulated in analogy with the kinetics of an enzyme with two binding sites for its substrate. More generally, if a prey type is only accepted after every k-th encounter and rejected the k − 1 times in between, which mimics learning, the general form f(R) = aR^k / (1 + ahR^k) is found.
Learning time is defined as the natural improvement of a predator's searching and attacking efficiency or the natural improvement in their handling efficiency as prey density increases. Imagine a prey density so small that the chance of a predator encountering that prey is extremely low. Because the predator finds prey so infrequently, it has not had enough experience to develop the best ways to capture and subdue that species of prey. Holling identified this mechanism in shrews and deer mice feeding on sawflies. At low numbers of sawfly cocoons per acre, deer mice especially experienced exponential growth in terms of the number of cocoons consumed per individual as the density of cocoons increased. The characteristic saturation point of the type III functional response was also observed in the deer mice. At a certain density of cocoons per acre, the consumption rate of the deer mice reached a saturation amount as the cocoon density continued to increase.
Prey switching involves two or more prey species and one predator species. When all prey species are at equal densities, the predator will indiscriminately select between prey species. However, if the density of one of the prey species decreases, then the predator will start selecting the other, more common prey species with a higher frequency, because it can increase the efficiency with which it captures the more abundant prey through learning. Murdoch demonstrated this effect with guppies preying on tubificids and fruit flies. As fruit fly numbers decreased, the guppies switched from feeding on the fruit flies on the water's surface to feeding on the more abundant tubificids along the bed.
If predators learn while foraging, but do not reject prey before they accept one, the functional response becomes a function of the density of all prey types. This describes predators that feed on multiple prey and dynamically switch from one prey type to another. This behaviour can lead to either a type II or a type III functional response. If the density of one prey type is approximately constant, as is often the case in experiments, a type III functional response is found. When the prey densities change in approximate proportion to each other, as is the case in most natural situations, a type II functional response is typically found. This explains why the type III functional response has been found in many experiments in which prey densities are artificially manipulated, but is rare in nature.
See also
Ecosystem model
Lotka–Volterra equations
Predator satiation
References
Predation
Conceptual models
Systems ecology | Functional response | [
"Environmental_science"
] | 1,113 | [
"Environmental social science",
"Systems ecology"
] |
8,023,439 | https://en.wikipedia.org/wiki/Anti-shock%20body | Anti-shock body is the name given by Richard T. Whitcomb to a pod positioned on the upper surface of a wing. Its purpose is to reduce wave drag while travelling at transonic speeds (Mach 0.8–1.0), which includes the typical cruising range of conventional jet airliners. The Cambridge Aerospace Dictionary defines shock body (also known as Whitcomb body, Küchemann carrot or speed bump) as a streamlined volume added to improve area rule distribution.
The anti-shock, or shock, body was one of a number of ways of implementing what was then the recently developed area rule. Another was fuselage shaping.
Theory
The theory behind the anti-shock body was independently developed during the early 1950s by two aerodynamicists, Richard Whitcomb at NASA and Dietrich Küchemann at the British Royal Aircraft Establishment. The anti-shock body is closely associated with the area rule, a recent innovation of the era for minimising wave drag by having a cross-sectional area which changes smoothly along the length of the aircraft. The extension beyond the trailing edge was considered secondary to the body on the wing surface, which slowed the supersonic flow to give a weaker shock and acted as a fence to prevent outward flow. The extension was only long enough to prevent flow separation.
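To see the area-rule reasoning behind adding such a pod, consider the total cross-sectional area of a toy fuselage-plus-wing as a function of longitudinal station. The numbers below are invented purely for illustration: the wing adds a bump of area that ends abruptly at its trailing edge, and a pod tapering off behind the wing fills in the abrupt drop, so the area distribution changes more smoothly.

```python
# Toy longitudinal area distributions (all geometry is hypothetical).
def fuselage(x): return 1.0                                   # constant area
def wing(x):     return 0.5 if 0.4 <= x <= 0.6 else 0.0       # ends abruptly
def pod(x):      # tapers linearly to zero behind the wing trailing edge
    return 0.5 * (1 - (x - 0.6) / 0.3) if 0.6 < x <= 0.9 else 0.0

without_pod = lambda x: fuselage(x) + wing(x)
with_pod    = lambda x: fuselage(x) + wing(x) + pod(x)

# Area drop just aft of the wing trailing edge (station 0.6 -> 0.7):
print("drop without pod:", without_pod(0.6) - without_pod(0.7))  # 0.5
print("drop with pod:   ", with_pod(0.6) - with_pod(0.7))        # ~0.17
# The pod spreads the loss of cross-sectional area over a longer run,
# approximating the smooth distribution the area rule calls for.
```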
Whitcomb stated that the anti-shock body was no longer required on the top surface of a wing when the supercritical airfoil was introduced because they both decreased the strength of, or eliminated, the shock and its attendant drag.
Applications
Aircraft that have used anti-shock bodies are the Convair 990 and Fokker 100 airliners.
Küchemann carrots were added to the Handley Page Victor to provide volume for carrying chaff. They did not improve the performance of the aircraft, and when they became redundant for their intended purpose they were left in place to save the cost of removing them.
Several Tupolev aircraft of the Soviet Union utilized Küchemann carrots as gear storage pods, which were mounted mid-wing and extended past the trailing surface. Examples are the Tu-104, Tu-134 and Tu-154 airliners and the Tu-16 and Tu-95 bombers.
Boeing tested the effect of adding similar bodies to a wind tunnel model of the Boeing 707. Although the speed beyond which the drag rose abruptly was increased, the additional friction drag on the surface area of the bodies cancelled out any advantage.
Alternative
Modern jet aircraft use supercritical airfoils to minimize drag from shockwaves on the upper surface.
Gallery
References
Citations
Bibliography
ap Rees, Elfan. "Handley Page Victor: Part 2". Air Pictorial, June 1972, Vol. 34, No 6., pp. 220–226.
Barnard, R. H. and D. R. Philpott. Aircraft Flight: A Description of the Physical Principles of Aircraft Flight. Pearson Education, 2010. .
Aerodynamics
Aircraft wing design | Anti-shock body | [
"Chemistry",
"Engineering"
] | 598 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
3,452,035 | https://en.wikipedia.org/wiki/Topological%20string%20theory | In theoretical physics, topological string theory is a version of string theory. Topological string theory appeared in papers by theoretical physicists, such as Edward Witten and Cumrun Vafa, by analogy with Witten's earlier idea of topological quantum field theory.
Overview
There are two main versions of topological string theory: the topological A-model and the topological B-model. The results of the calculations in topological string theory generically encode all holomorphic quantities within the full string theory whose values are protected by spacetime supersymmetry. Various calculations in topological string theory are closely related to Chern–Simons theory, Gromov–Witten invariants, mirror symmetry, geometric Langlands Program, and many other topics.
The operators in topological string theory represent the algebra of operators in the full string theory that preserve a certain amount of supersymmetry. Topological string theory is obtained by a topological twist of the worldsheet description of ordinary string theory: the operators are given different spins. The operation is fully analogous to the construction of topological field theory which is a related concept. Consequently, there are no local degrees of freedom in topological string theory.
Admissible spacetimes
The fundamental strings of string theory are two-dimensional surfaces. A quantum field theory known as the N = (1,1) sigma model is defined on each surface. This theory consists of maps from the surface to a supermanifold. Physically the supermanifold is interpreted as spacetime and each map is interpreted as the embedding of the string in spacetime.
Only special spacetimes admit topological strings. Classically, one must choose a spacetime such that the theory respects an additional pair of supersymmetries, making the spacetime an N = (2,2) sigma model. A particular case of this is if the spacetime is a Kähler manifold and the H-flux is identically equal to zero. Generalized Kähler manifolds can have a nontrivial H-flux.
Topological twist
Ordinary strings on special backgrounds are never topological. To make these strings topological, one needs to modify the sigma model via a procedure called a topological twist which was invented by Edward Witten in 1988. The central observation is that these theories have two U(1) symmetries known as R-symmetries, and the Lorentz symmetry may be modified by mixing rotations and R-symmetries. One may use either of the two R-symmetries, leading to two different theories, called the A model and the B model. After this twist, the action of the theory is BRST exact, and as a result the theory has no dynamics. Instead, all observables depend on the topology of a configuration. Such theories are known as topological theories.
Classically this procedure is always possible.
Quantum mechanically, the U(1) symmetries may be anomalous, making the twist impossible. For example, in the Kähler case with H = 0 the twist leading to the A-model is always possible but that leading to the B-model is only possible when the first Chern class of the spacetime vanishes, implying that the spacetime is Calabi–Yau. More generally (2,2) theories have two complex structures and the B model exists when the first Chern classes of associated bundles sum to zero whereas the A model exists when the difference of the Chern classes is zero. In the Kähler case the two complex structures are the same and so the difference is always zero, which is why the A model always exists.
There is no restriction on the number of dimensions of spacetime, other than that it must be even because spacetime is generalized Kähler. However, all correlation functions with worldsheets that are not spheres vanish unless the complex dimension of the spacetime is three, and so spacetimes with complex dimension three are the most interesting. This is fortunate for phenomenology, as phenomenological models often use a physical string theory compactified on a 3 complex-dimensional space. The topological string theory is not equivalent to the physical string theory, even on the same space, but certain supersymmetric quantities agree in the two theories.
Objects
A-model
The topological A-model comes with a target space which is a 6 real-dimensional generalized Kähler spacetime. In the case in which the spacetime is Kähler, the theory describes two objects. There are fundamental strings, which wrap two real-dimensional holomorphic curves. Amplitudes for the scattering of these strings depend only on the Kähler form of the spacetime, and not on the complex structure. Classically these correlation functions are determined by the cohomology ring. There are quantum mechanical instanton effects which correct these and yield Gromov–Witten invariants, which measure the cup product in a deformed cohomology ring called the quantum cohomology. The string field theory of the A-model closed strings is known as Kähler gravity, and was introduced by Michael Bershadsky and Vladimir Sadov in Theory of Kähler Gravity.
In addition, there are D2-branes which wrap Lagrangian submanifolds of spacetime. These are submanifolds whose dimensions are one half that of space time, and such that the pullback of the Kähler form to the submanifold vanishes. The worldvolume theory on a stack of N D2-branes is the string field theory of the open strings of the A-model, which is a U(N) Chern–Simons theory.
The fundamental topological strings may end on the D2-branes. While the embedding of a string depends only on the Kähler form, the embeddings of the branes depends entirely on the complex structure. In particular, when a string ends on a brane the intersection will always be orthogonal, as the wedge product of the Kähler form and the holomorphic 3-form is zero. In the physical string this is necessary for the stability of the configuration, but here it is a property of Lagrangian and holomorphic cycles on a Kähler manifold.
There may also be coisotropic branes in various dimensions other than half dimensions of Lagrangian submanifolds. These were first introduced by Anton Kapustin and Dmitri Orlov in Remarks on A-Branes, Mirror Symmetry, and the Fukaya Category
B-model
The B-model also contains fundamental strings, but their scattering amplitudes depend entirely upon the complex structure and are independent of the Kähler structure. In particular, they are insensitive to worldsheet instanton effects and so can often be calculated exactly. Mirror symmetry then relates them to A model amplitudes, allowing one to compute Gromov–Witten invariants. The string field theory of the closed strings of the B-model is known as the Kodaira–Spencer theory of gravity and was developed by Michael Bershadsky, Sergio Cecotti, Hirosi Ooguri and Cumrun Vafa in Kodaira–Spencer Theory of Gravity and Exact Results for Quantum String Amplitudes.
The B-model also comes with D(-1), D1, D3 and D5-branes, which wrap holomorphic 0, 2, 4 and 6-submanifolds respectively. The 6-submanifold is a connected component of the spacetime. The theory on a D5-brane is known as holomorphic Chern–Simons theory. The Lagrangian density is the wedge product of that of ordinary Chern–Simons theory with the holomorphic (3,0)-form, which exists in the Calabi–Yau case. The Lagrangian densities of the theories on the lower-dimensional branes may be obtained from holomorphic Chern–Simons theory by dimensional reductions.
Topological M-theory
Topological M-theory, which enjoys a seven-dimensional spacetime, is not a topological string theory, as it contains no topological strings. However topological M-theory on a circle bundle over a 6-manifold has been conjectured to be equivalent to the topological A-model on that 6-manifold.
In particular, the D2-branes of the A-model lift to points at which the circle bundle degenerates, or more precisely Kaluza–Klein monopoles. The fundamental strings of the A-model lift to membranes named M2-branes in topological M-theory.
One special case that has attracted much interest is topological M-theory on a space with G2 holonomy and the A-model on a Calabi–Yau. In this case, the M2-branes wrap associative 3-cycles. Strictly speaking, the topological M-theory conjecture has only been made in this context, as in this case functions introduced by Nigel Hitchin in The Geometry of Three-Forms in Six and Seven Dimensions and Stable Forms and Special Metrics provide a candidate low energy effective action.
These functions are called Hitchin functionals, and topological string theory is closely related to Hitchin's ideas on generalized complex structures, Hitchin systems, the ADHM construction, etc.
Observables
The topological twist
The 2-dimensional worldsheet theory is an N = (2,2) supersymmetric sigma model; the (2,2) supersymmetry means that the fermionic generators of the supersymmetry algebra, called supercharges, may be assembled into a single Dirac spinor, which consists of two Majorana–Weyl spinors of each chirality. This sigma model is topologically twisted, which means that the Lorentz symmetry generators that appear in the supersymmetry algebra simultaneously rotate the physical spacetime and also rotate the fermionic directions via the action of one of the R-symmetries. The R-symmetry group of a 2-dimensional N = (2,2) field theory is U(1) × U(1); twists by the two different factors lead to the A and B models respectively. The topologically twisted construction of topological string theories was introduced by Edward Witten in his 1988 paper.
What do the correlators depend on?
The topological twist leads to a topological theory because the stress–energy tensor may be written as an anticommutator of a supercharge and another field. As the stress–energy tensor measures the dependence of the action on the metric tensor, this implies that all correlation functions of Q-invariant operators are independent of the metric. In this sense, the theory is topological.
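This Q-exactness argument can be written out in one line (a standard schematic, in generic topological-field-theory notation rather than any one paper's conventions): writing the stress–energy tensor as T_{μν} = {Q, G_{μν}} for some field G_{μν}, the metric variation of any correlation function of Q-invariant operators vanishes,

```latex
% Q-exactness of the stress-energy tensor implies metric independence:
\frac{\delta}{\delta g^{\mu\nu}}
  \left\langle \mathcal{O}_1 \cdots \mathcal{O}_n \right\rangle
\;\propto\;
  \left\langle T_{\mu\nu}\, \mathcal{O}_1 \cdots \mathcal{O}_n \right\rangle
\;=\;
  \left\langle \{Q, G_{\mu\nu}\}\, \mathcal{O}_1 \cdots \mathcal{O}_n \right\rangle
\;=\; 0,
```

the last step using that Q annihilates the vacuum and (anti)commutes with each Q-invariant operator, so the anticommutator can be moved onto the operators, where it vanishes.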
More generally, any D-term in the action, which is any term which may be expressed as an integral over all of superspace, is an anticommutator of a supercharge and so does not affect the topological observables. Yet more generally, in the B model any term which may be written as an integral over the fermionic coordinates does not contribute, whereas in the A-model any term which is an integral over only one chirality of the fermionic coordinates does not contribute. This implies that A model observables are independent of the superpotential (as it may be written as an integral over only half of the fermionic coordinates) but depend holomorphically on the twisted superpotential, and vice versa for the B model.
Dualities
Dualities between TSTs
A number of dualities relate the above theories. The A-model and B-model on two mirror manifolds are related by mirror symmetry, which has been described as a T-duality on a three-torus. The A-model and B-model on the same manifold are conjectured to be related by S-duality, which implies the existence of several new branes, called NS branes by analogy with the NS5-brane, which wrap the same cycles as the original branes but in the opposite theory. Also, a combination of the A-model and a sum of the B-model and its conjugate is related to topological M-theory by a kind of dimensional reduction. Here the degrees of freedom of the A-model and the B-models appear not to be simultaneously observable, but rather to have a relation similar to that between position and momentum in quantum mechanics.
The holomorphic anomaly
The sum of the B-model and its conjugate appears in the above duality because it is the theory whose low energy effective action is expected to be described by Hitchin's formalism. This is because the B-model suffers from a holomorphic anomaly, which states that the dependence on complex quantities, while classically holomorphic, receives nonholomorphic quantum corrections. In Quantum Background Independence in String Theory, Edward Witten argued that this structure is analogous to a structure that one finds geometrically quantizing the space of complex structures. Once this space has been quantized, only half of the dimensions simultaneously commute and so the number of degrees of freedom has been halved. This halving depends on an arbitrary choice, called a polarization. The conjugate model contains the missing degrees of freedom, and so by tensoring the B-model and its conjugate one reobtains all of the missing degrees of freedom and also eliminates the dependence on the arbitrary choice of polarization.
Geometric transitions
There are also a number of dualities that relate configurations with D-branes, which are described by open strings, to configurations with the branes replaced by flux and with the geometry described by the near-horizon geometry of the lost branes. The latter are described by closed strings.
Perhaps the first such duality is the Gopakumar-Vafa duality, which was introduced by Rajesh Gopakumar and Cumrun Vafa in On the Gauge Theory/Geometry Correspondence. This relates a stack of N D6-branes on a 3-sphere in the A-model on the deformed conifold to the closed string theory of the A-model on a resolved conifold with a B field equal to N times the string coupling constant.
The open strings in the A model are described by a U(N) Chern–Simons theory, while the closed string theory on the A-model is described by the Kähler gravity.
Although the conifold is said to be resolved, the area of the blown-up two-sphere is zero; it is only the B-field, which is often considered to be the complex part of the area, that is nonvanishing. In fact, as the Chern–Simons theory is topological, one may shrink the volume of the deformed three-sphere to zero and so arrive at the same geometry as in the dual theory.
The mirror dual of this duality is another duality, which relates open strings in the B model on a brane wrapping the 2-cycle in the resolved conifold to closed strings in the B model on the deformed conifold. Open strings in the B-model are described by dimensional reductions of holomorphic Chern–Simons theory on the branes on which they end, while closed strings in the B model are described by Kodaira–Spencer gravity.
Dualities with other theories
Crystal melting, quantum foam and U(1) gauge theory
In the paper Quantum Calabi–Yau and Classical Crystals, Andrei Okounkov, Nicolai Reshetikhin and Cumrun Vafa conjectured that the quantum A-model is dual to a classical melting crystal at a temperature equal to the inverse of the string coupling constant. This conjecture was interpreted in Quantum Foam and Topological Strings, by Amer Iqbal, Nikita Nekrasov, Andrei Okounkov and Cumrun Vafa. They claim that the statistical sum over melting crystal configurations is equivalent to a path integral over changes in spacetime topology supported in small regions with area of order the product of the string coupling constant and α'.
Such configurations, with spacetime full of many small bubbles, date back to John Archibald Wheeler in 1964, but have rarely appeared in string theory, as they are notoriously difficult to make precise. However, in this duality the authors are able to cast the dynamics of the quantum foam in the familiar language of a topologically twisted U(1) gauge theory, whose field strength is linearly related to the Kähler form of the A-model. In particular, this suggests that the A-model Kähler form should be quantized.
Applications
A-model topological string theory amplitudes are used to compute prepotentials in N=2 supersymmetric gauge theories in four and five dimensions. The amplitudes of the topological B-model, with fluxes and/or branes, are used to compute superpotentials in N=1 supersymmetric gauge theories in four dimensions. Perturbative A model calculations also count BPS states of spinning black holes in five dimensions.
See also
Quantum topology
Topological defect
Topological entropy in physics
Topological order
Topological quantum field theory
Topological quantum number
Introduction to M-theory
References
Topological string theory on arxiv.org
Mathematical physics
String theory | Topological string theory | [
"Physics",
"Astronomy",
"Mathematics"
] | 3,526 | [
"Astronomical hypotheses",
"Applied mathematics",
"Theoretical physics",
"String theory",
"Mathematical physics"
] |
3,452,586 | https://en.wikipedia.org/wiki/1%2C5-Diazabicyclo%284.3.0%29non-5-ene | 1,5-Diazabicyclo[4.3.0]non-5-ene (DBN) is a chemical compound with the formula CHN. It is an amidine base used in organic synthesis. A related compound with related functions is 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU). The relatively complex nature of the formal names for DBU and DBN (hence the common use of acronyms) reflects the fact that these compounds are bicyclic and contain several functional groups.
Synthesis
DBN can be synthesized in the following manner, similar to the synthesis of DBU:
The synthetic procedure starts with a Michael addition of 2-pyrrolidone to acrylonitrile, followed by hydrogenation, and finally dehydration.
Uses
As a base in organic synthesis
Similar to many other organic bases, DBN can be employed for dehydrohalogenation reactions, base-catalyzed rearrangement reactions, and aldol condensations. Several examples are shown below:
Elimination:
Epimerization of penicillin derivatives, catalyzed by DBN:
Aldol condensation:
DBN salt as an ionic liquid
The acetate salt of DBN is a room-temperature ionic liquid used for processing cellulose fibers by acting as a replacement for the unstable N-Methylmorpholine N-oxide used for making lyocell.
References
Amidines
Non-nucleophilic bases
Pyrrolopyrimidines | 1,5-Diazabicyclo(4.3.0)non-5-ene | [
"Chemistry"
] | 325 | [
"Non-nucleophilic bases",
"Amidines",
"Functional groups",
"Reagents for organic chemistry",
"Bases (chemistry)"
] |
3,452,748 | https://en.wikipedia.org/wiki/Hydrazone%20iodination | Hydrazone iodination is an organic reaction in which a hydrazone is converted into a vinyl iodide by reaction of iodine and a non-nucleophilic base such as DBU. First published by Derek Barton in 1962 the reaction is sometimes referred to as the Barton reaction (although there are many different Barton reactions) or, more descriptively, as the Barton vinyl iodine procedure.
The reaction has earlier roots with the 1911 discovery by Wieland and Roseeu that the reaction of hydrazones with iodine alone (without base) results in the azine dimer (structure 2 in scheme 1).
In the original Barton publication the reaction was optimized by using a strong guanidine base, the inverse addition of the hydrazone to an iodine solution, and by exclusion of water.
When iodine as an electrophile is replaced by aromatic selenyl bromides, the corresponding vinyl selenides are obtained:
Reaction mechanism
The reaction mechanism proposed in the original Barton publication is outlined as follows:
The hydrazone is oxidized by iodine into a diazo intermediate. In the next step, iodine reacts as an electrophile; displacement of nitrogen then generates an iodocarbonium ion. When the reaction site is not sterically hindered, a second iodide can recombine to form the geminal di-iodide; otherwise an elimination reaction leads to the vinyliodide. When water is present, the reaction product can revert to the ketone.
This reaction is related to the Shapiro reaction.
Scope
An example of this procedure is the reaction of 2,2,6-trimethylcyclohexanone to the hydrazone by reaction with hydrazine and triethylamine in ethanol at reflux followed by reaction of the hydrazone with iodine in the presence of 2-tert-butyl-1,1,3,3-tetramethylguanidine (cheaper than DBU) in diethyl ether at room temperature. Another example can be found in the Danishefsky Taxol total synthesis.
In one study, an attempt was made to trap any reactive intermediate of this reaction with an internal alkene. When the hydrazone 1 in scheme 5 is reacted with iodine and triethylamine in toluene, the expected reaction product is not the di-iodide 10 formed through path B in a free-radical mechanism. Reaction sequence starting from 1: halogen addition reaction to di-iodide intermediate 2, followed by elimination reaction with loss of hydrogen iodide to 3. In path B another equivalent of iodine reacts with the azo double bond, followed by loss of HI and formation of 6. The nitrogen-to-iodine bond is weak and homolysis gives the nitrogen free radical 7. Loss of nitrogen results in radical species 8. The radical position gets transferred to the alkene in 9, which later recombines with iodide to give 10. Note that in the absence of the alkene, 8 would accept an iodide radical and the geminal di-iodide would then lose HI to form the vinyl iodide. The actual process taking place is path A, with elimination of HI to give the diazo compound 4, followed by a diazoalkane 1,3-dipolar cycloaddition to the pyrazoline 5 in 85% yield.
References
See also
Shapiro reaction
Olefination reactions
Substitution reactions
Hydrazones | Hydrazone iodination | [
"Chemistry"
] | 705 | [
"Olefination reactions",
"Hydrazones",
"Functional groups",
"Organic reactions"
] |
3,452,853 | https://en.wikipedia.org/wiki/Agomelatine | Agomelatine, sold under the brand names Valdoxan and Thymanax, among others, is an atypical antidepressant most commonly used to treat major depressive disorder and generalized anxiety disorder. One review found that it is as effective as other antidepressants with similar discontinuation rates overall but fewer discontinuations due to side effects. Another review also found it was similarly effective to many other antidepressants.
Common side effects include headaches, nausea, and dizziness, which usually subside in the first few weeks, as well as liver problems – due to the potential effect on the liver, ongoing blood tests are recommended. Its use is not recommended in people with dementia, or who are under the age of 18 or over 75. There is tentative evidence that it may have fewer side effects than some other antidepressants. It acts by blocking certain serotonin receptors and activating melatonin receptors.
Agomelatine was approved for medical use in Europe in 2009 and Australia in 2010. Its use is not approved in the United States and efforts to get approval were ended in 2011. It was developed by the pharmaceutical company Servier.
Medical uses
Major depressive disorder
Agomelatine is used for the treatment of major depressive episodes in adults in Europe and Australia. Ten placebo-controlled trials have been performed to investigate the short-term efficacy of agomelatine in major depressive disorder. At the end of treatment, significant efficacy was demonstrated in six of the ten short-term double-blind placebo-controlled studies. Two were considered "failed" trials, as comparators of established efficacy failed to differentiate from placebo. Efficacy was also observed in more severely depressed patients in all positive placebo-controlled studies. The maintenance of antidepressant efficacy was demonstrated in a relapse prevention study. One meta-analysis found agomelatine to be as effective as standard antidepressants, with an effect size of 0.24.
In 2018, a systematic review and network meta-analysis comparing the efficacy and acceptability of 21 antidepressant drugs showed agomelatine to be one of the most effective and one of only two medications found to be more tolerable than placebo.
A meta-analysis found that agomelatine is effective in treating severe depression. Its antidepressant effect is greater for more severe depression. In people with a greater baseline score (>30 on HAMD17 scale), the agomelatine-placebo difference was of 4.53 points. Controlled studies in humans have shown that agomelatine is at least as effective as the SSRI antidepressants paroxetine, sertraline, escitalopram, and fluoxetine in the treatment of major depression. A 2018 meta-study comparing 21 antidepressants found agomelatine was one of the more tolerable, yet effective antidepressants.
However, the body of research on agomelatine has been substantially affected by publication bias, prompting analyses which take into account both published and unpublished studies. These have confirmed that agomelatine is approximately as effective as more commonly used antidepressants (e.g. SSRIs), but some qualified this as "marginally clinically relevant", being only slightly above placebo. According to a 2013 review, agomelatine did not seem to provide an advantage in efficacy over other antidepressants for the acute-phase treatment of major depression.
Generalized anxiety disorder
Agomelatine is also approved for the treatment of generalized anxiety disorder in adults in Australia. It has been found more effective than placebo in the treatment of GAD in a number of short-term double-blind placebo-controlled studies and in long-term relapse prevention.
Use of agomelatine in GAD is off-label in Europe. Agomelatine has been evaluated in a number of other off-label indications besides GAD.
Use in special populations
It is not recommended in Europe or Australia for use in children and adolescents below 18 years of age due to a lack of data on safety and efficacy. However, a 12-week study, first reported in September 2020 and published in 2022, showed greater efficacy vs. placebo for agomelatine 25 mg per day in youth aged 7–17 years and an acceptable tolerability profile, with similar efficacy to fluoxetine. Only limited data are available on use in elderly people ≥ 75 years old with major depressive episodes.
It is not recommended during pregnancy or breastfeeding.
Contraindications
Agomelatine is contraindicated in patients with kidney or liver impairment. According to information disclosed by Servier in 2012, guidelines for the follow-up of patients treated with Valdoxan have been modified in concert with the European Medicines Agency. As some patients may experience increased levels of liver enzymes in their blood during treatment with Valdoxan, doctors have to run laboratory tests to check that the liver is working properly at the initiation of the treatment and then periodically during treatment, and subsequently decide whether to pursue the treatment or not. No relevant modification in agomelatine pharmacokinetic parameters in patients with severe renal impairment has been observed. However, only limited clinical data on its use in depressed patients with severe or moderate renal impairment with major depressive episodes is available. Therefore, caution should be exercised when prescribing agomelatine to these patients.
Adverse effects
Agomelatine does not alter daytime vigilance and memory in healthy volunteers. In depressed patients, treatment with the drug increased slow-wave sleep without modification of REM (rapid eye movement) sleep amount or REM latency. Agomelatine also induced an advance of the time of sleep onset and of minimum heart rate. From the first week of treatment, onset of sleep and the quality of sleep were significantly improved without daytime clumsiness as assessed by patients.
Agomelatine appears to cause fewer sexual side effects and discontinuation effects than paroxetine.
Common (1–10% incidence) adverse effects include
Headache
Nausea
Dizziness
Somnolence
Diarrhea
Insomnia
Fatigue
Back pain
Abdominal pain
Anxiety
Increased ALAT and ASAT (liver enzymes), possibly > 3× the upper limit of normal
Hyperhidrosis (excess sweating that is not proportionate to the ambient temperature)
Constipation
Vomiting
Migraine
Uncommon (0.1–1%) adverse effects include
Paraesthesia (abnormal sensations [e.g. itching, burning, tingling, etc.] due to malfunctioning of the peripheral nerves)
Blurred vision
Eczema
Pruritus (itching)
Urticaria
Agitation
Irritability
Restlessness
Aggression
Nightmares
Abnormal dreams
Rare (0.01–0.1%) adverse effects include
Mania
Hypomania
Suicidal ideation
Suicidal behaviour
Hallucinations
Steatohepatitis
Increased GGT and/or alkaline phosphatase
Liver failure
Jaundice
Erythematous rash
Face oedema and angioedema
Weight gain or loss, which tends to be less significant than with SSRIs
Excepting effects on the liver, the above adverse effects were usually mild to moderate and occurred in the first two weeks of treatment, subsiding thereafter. A 2019 study found no difference in rates of acute liver injury between users of citalopram and agomelatine, though this rate could be decreased due to the precautionary liver enzyme monitoring in the European Union.
Dependence and withdrawal
No dosage tapering is needed on treatment discontinuation. Agomelatine has no abuse potential as measured in healthy volunteer studies.
Overdose
Agomelatine is expected to be relatively safe in overdose.
Interactions
Agomelatine is a substrate of CYP1A2, CYP2C9 and CYP2C19. Inhibitors of these enzymes, e.g. the SSRI antidepressant fluvoxamine, reduce its clearance and can lead to an increase in agomelatine exposure, and possibly serotonin syndrome. There is also the potential for agomelatine to interact with alcohol to increase the risk of hepatotoxicity.
Pharmacology
Pharmacodynamics
Agomelatine acts as a highly potent and selective melatonin MT1 and MT2 receptor agonist (Ki = 0.1nM and 0.12nM, respectively) and also as a relatively weak serotonin 5-HT2B and 5-HT2C receptor antagonist (Ki = 660nM and 631nM, respectively; ~6,000-fold lower than for the melatonin receptors). It is a neutral antagonist rather than an inverse agonist of the serotonin 5-HT2C receptor. The drug has negligible affinity for the serotonin 5-HT2A receptor or for a variety of other targets.
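The selectivity figure quoted above follows directly from the Ki values (a simple arithmetic check, using only the numbers stated in this paragraph):

```python
# Ki values (nM) as stated above; lower Ki = higher affinity.
ki_nM = {"MT1": 0.1, "MT2": 0.12, "5-HT2B": 660.0, "5-HT2C": 631.0}

# Fold-difference of the serotonin sites relative to MT1:
for serotonin in ("5-HT2B", "5-HT2C"):
    fold = ki_nM[serotonin] / ki_nM["MT1"]
    print(f"{serotonin} affinity is {fold:.0f}-fold lower than MT1")
# -> ~6600-fold and ~6310-fold, i.e. the "~6,000-fold" quoted above.
```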
By antagonizing the serotonin 5-HT2C receptor, agomelatine has been found to disinhibit and elevate norepinephrine and dopamine release in the frontal cortex in animals, although notably not in the striatum or nucleus accumbens. In contrast to agomelatine, other serotonin 5-HT2C receptor antagonists and inverse agonists, such as SB-242084 and SB-206553, have been found to increase dopamine levels in the nucleus accumbens. These differences may in part be related to constitutive activity of the serotonin 5-HT2C receptor and resulting differences between neutral antagonists and inverse agonists of the receptor. In addition, there are multiple isoforms of the serotonin 5-HT2C receptor with different properties. In other studies, while agomelatine alone did not affect the firing rates of ventral tegmental area (VTA) dopaminergic neurons, it abolished the inhibition of these neurons by the serotonin 5-HT2C receptor agonist Ro60-0175. Due to the increase in norepinephrine and dopamine levels in the frontal cortex with agomelatine, the drug has sometimes been referred to as a norepinephrine–dopamine disinhibitor (NDDI).
Although agomelatine is widely claimed to act as a serotonin 5-HT2C receptor antagonist, the clinical significance of this action has been disputed by some researchers. Unlike with other serotonin 5-HT2C receptor antagonists, therapeutic doses of agomelatine fail to acutely increase slow-wave sleep in humans. Additionally, no receptor occupancy studies of agomelatine have been conducted in humans to demonstrate significant occupancy of serotonin 5-HT2C receptors at therapeutic doses.
Agomelatine has shown an antidepressant-like effect in animal models of depression (learned helplessness test, behavioral despair test, chronic mild stress) as well as in models with circadian rhythm desynchronisation and in models related to stress and anxiety. Agomelatine has been found to resynchronize circadian rhythms in animal models of delayed sleep phase syndrome (DSPS). In humans, agomelatine has positive phase-shifting properties; it induces a phase advance of sleep, body temperature decline, and melatonin onset.
Pharmacokinetics
Agomelatine is well-absorbed with oral administration (≥80%), but it has very low oral bioavailability (~1%) due to extensive first-pass metabolism. The elimination half-life of agomelatine is 1 to 2hours. The half-life of agomelatine does not change with repeated administration. There is no accumulation of agomelatine with continuous administration.
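The lack of accumulation follows from simple first-order kinetics: with an elimination half-life of 1 to 2 hours and once-daily dosing, essentially the entire dose is cleared long before the next one. A minimal sketch (the 24-hour dosing interval reflects once-daily use; the rest is generic pharmacokinetic arithmetic):

```python
import math

def accumulation_ratio(half_life_h, dosing_interval_h):
    """Steady-state accumulation ratio for repeated dosing under
    first-order elimination: R = 1 / (1 - exp(-k * tau))."""
    k = math.log(2) / half_life_h
    return 1.0 / (1.0 - math.exp(-k * dosing_interval_h))

for t_half in (1.0, 2.0):
    print(t_half, accumulation_ratio(t_half, 24.0))
# R differs from 1 by at most ~0.03%: no meaningful accumulation,
# consistent with the statement above. (Compare a hypothetical drug
# with t_half = 24 h, which would give R = 2.)
```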
Chemistry
Structure
The chemical structure of agomelatine is very similar to that of melatonin. Where melatonin has an indole ring system, agomelatine has a naphthalene bioisostere instead.
Synthesis
History
Agomelatine was discovered and developed by the European pharmaceutical company Servier Laboratories Ltd. Servier continued to develop the drug and conduct phase III trials in the European Union.
In March 2005, Servier submitted agomelatine to the European Medicines Agency (EMA) under the trade names Valdoxan and Thymanax. On 27 July 2006, the Committee for Medical Products for Human Use (CHMP) of the EMA recommended a refusal of the marketing authorisation. The major concern was that efficacy had not been sufficiently shown, while there were no special concerns about side effects. In September 2007, Servier submitted a new marketing application to the EMA.
In March 2006, Servier announced it had sold the rights to market agomelatine in the United States to Novartis. It was undergoing several phase III clinical trials in the US, and until October 2011 Novartis listed the drug as scheduled for submission to the FDA no earlier than 2012. However, the development for the US market was discontinued in October 2011, when the results from the last of those trials became available.
It received approval from the European Medicines Agency (EMA) for marketing in the European Union in February 2009 and approval from the Therapeutic Goods Administration (TGA) for marketing in Australia in August 2010.
Research
Circadian rhythm sleep disorders
Agomelatine has been investigated for its effects on sleep regulation due to its actions as a melatonin receptor agonist. Studies report various improvements in general quality of sleep metrics, as well as benefits in circadian rhythm sleep disorders. However, research is very limited (e.g., case reports) and agomelatine is not approved for use in the treatment of sleep disorders.
Seasonal affective disorder
A 2019 Cochrane review made no recommendation either for or against the use of agomelatine to treat individuals with seasonal affective disorder.
See also
SPT-320 (LYT-320) – an experimental agomelatine prodrug
References
External links
Geneva interaction table: https://www.hug.ch/sites/interhug/files/structures/pharmacologie_et_toxicologie_cliniques/carte_cytochromes_2016_final.pdf
5-HT2B antagonists
5-HT2C antagonists
Acetamides
Antidepressants
Laboratoires Servier
Melatonin receptor agonists
Naphthol ethers
Phenethylamines
Wikipedia medicine articles ready to translate | Agomelatine | [
"Chemistry"
] | 3,028 | [
"Melatonin receptor agonists",
"Drug discovery"
] |
3,452,885 | https://en.wikipedia.org/wiki/Vinyl%20halide | In organic chemistry, a vinyl halide is a compound with the formula CH2=CHX (X = halide). The term vinyl is often used to describe any alkenyl group. For this reason, alkenyl halides with the formula RCH=CHX are sometimes called vinyl halides. From the perspective of applications, the dominant member of this class of compounds is vinyl chloride, which is produced on the scale of millions of tons per year as a precursor to polyvinyl chloride. Polyvinyl fluoride is another commercial product. Related compounds include vinylidene chloride and vinylidene fluoride.
Synthesis
Vinyl chloride is produced by dehydrochlorination of 1,2-dichloroethane.
Due to their high utility, many approaches to vinyl halides have been developed, such as:
reactions of vinyl organometallic species with halogens
Takai olefination
Stork–Zhao olefination with, e.g., (chloromethylene)triphenylphosphorane, a modification of the Wittig reaction
Olefin metathesis
Reactions
Vinyl bromide and related alkenyl halides form the Grignard reagent and related organolithium reagents. Alkenyl halides undergo base elimination to give the corresponding alkyne. Most important is their use in cross-coupling reactions (e.g. Suzuki-Miyaura coupling, Stille coupling, Heck coupling, etc.).
See also
Vinyl iodide functional group
References
Organohalides | Vinyl halide | [
"Chemistry"
] | 324 | [
"Organic compounds",
"Functional groups",
"Organohalides"
] |
2,511,080 | https://en.wikipedia.org/wiki/Retarded%20time | In electromagnetism, an electromagnetic wave (light) in vacuum travels at a finite speed (the speed of light c). The retarded time accounts for the propagation delay between emission and observation: it is the earlier time at which the field now being observed left its source, since it takes time for information to travel between emitter and observer. This arises due to causality.
Retarded and advanced times
Retarded time tr or t′ is calculated with a "speed-distance-time" calculation for EM fields.
If the EM field is radiated at position vector r′ (within the source charge distribution), and an observer at position r measures the EM field at time t, the time delay for the field to travel from the charge distribution to the observer is |r − r′|/c. Subtracting this delay from the observer's time t then gives the time when the field began to propagate, i.e. the retarded time t′.
The retarded time is:

$$t_r = t - \frac{|\mathbf{r} - \mathbf{r}'|}{c}$$

(which can be rearranged to $|\mathbf{r} - \mathbf{r}'| = c\,(t - t_r)$, showing how the positions and times of source and observer are causally linked).
A related concept is the advanced time ta, which takes the same mathematical form as above, but with a "+" instead of a "−":

$$t_a = t + \frac{|\mathbf{r} - \mathbf{r}'|}{c}$$

This is the time it takes for a field originating at the present time t to propagate to a distance $|\mathbf{r} - \mathbf{r}'|$. Corresponding to retarded and advanced times are retarded and advanced potentials.
Retarded position
The retarded position can be obtained from the current position of a particle by subtracting the distance it has travelled in the lapse from the retarded time to the current time.
For an inertial particle, this position can be obtained by solving this equation:

$$\mathbf{r}_{\text{ret}} = \mathbf{r}_c - \mathbf{v}\,(t - t_r), \qquad \text{with} \quad |\mathbf{r} - \mathbf{r}_{\text{ret}}| = c\,(t - t_r),$$

where rc is the current position of the source charge distribution and v its velocity.
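For a uniformly moving charge the retarded time can be found in closed form: substituting the straight-line trajectory into the defining relation gives a quadratic in the delay Δt = t − tr. A minimal Python sketch of this calculation (the function name and the example values are illustrative assumptions, not from the article):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def retarded_time(r_obs, t, r_now, v):
    """Retarded time and position for a uniformly moving point charge.

    The charge is at r_now at observer time t, so its trajectory is
    r_src(t') = r_now - v*(t - t').  The retarded time t_r satisfies
    |r_obs - r_src(t_r)| = c*(t - t_r); for constant v this is a
    quadratic in dt = t - t_r, whose causal root has dt >= 0.
    """
    r_obs, r_now, v = (np.asarray(x, dtype=float) for x in (r_obs, r_now, v))
    d = r_obs - r_now                    # separation at the present time
    a = C**2 - v @ v                     # coefficients of the quadratic
    b = -2.0 * (d @ v)                   # (c^2 - v.v) dt^2 - 2 d.v dt - d.d = 0
    c0 = -(d @ d)
    dt = (-b + np.sqrt(b**2 - 4.0 * a * c0)) / (2.0 * a)  # non-negative root
    return t - dt, r_now - v * dt        # (retarded time, retarded position)

# Example: observer 300 m away on the x-axis; charge moving at 0.5c toward +x.
t_r, r_ret = retarded_time([300.0, 0.0, 0.0], 0.0, [0.0, 0.0, 0.0], [0.5 * C, 0.0, 0.0])
```

For this example the delay works out to 600/c ≈ 2 μs: the charge was 600 m from the observer at the retarded time, exactly the distance light covers in that interval.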
Application
Perhaps surprisingly, electromagnetic fields and forces acting on charges depend on the history of the charges, not on their instantaneous mutual separation. The calculation of the electromagnetic fields at a present time includes integrals of charge density ρ(r′, tr) and current density J(r′, tr) evaluated at the retarded times and source positions. The retarded time is prominent in electrodynamics, electromagnetic radiation theory, and in Wheeler–Feynman absorber theory, since the history of the charge distribution affects the fields at later times.
See also
Antenna measurement
Electromagnetic four-potential
Jefimenko's equations
Liénard–Wiechert potential
Light-time correction
References
Time
Electromagnetic radiation | Retarded time | [
"Physics",
"Mathematics"
] | 505 | [
"Physical phenomena",
"Physical quantities",
"Time",
"Electromagnetic radiation",
"Quantity",
"Radiation",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
2,511,424 | https://en.wikipedia.org/wiki/Vietoris%E2%80%93Begle%20mapping%20theorem | The Vietoris–Begle mapping theorem is a result in the mathematical field of algebraic topology. It is named for Leopold Vietoris and Edward G. Begle. The statement of the theorem, below, is as formulated by Stephen Smale.
Theorem
Let X and Y be compact metric spaces, and let f : X → Y be surjective and continuous. Suppose that the fibers of f are acyclic, so that

$$\tilde H_r\big(f^{-1}(y)\big) = 0 \quad \text{for all } y \in Y \text{ and all } r \leq n-1,$$

with $\tilde H_r$ denoting the r-th reduced Vietoris homology group. Then, the induced homomorphism

$$f_* : \tilde H_r(X) \to \tilde H_r(Y)$$

is an isomorphism for $r \leq n-1$ and a surjection for $r = n$.
Note that as stated the theorem doesn't hold for homology theories like singular homology. For example, Vietoris homology groups of the closed topologist's sine curve and of a segment are isomorphic (since the first projects onto the second with acyclic fibers). But the singular homology differs, since the segment is path connected and the topologist's sine curve is not.
References
"Leopold Vietoris (1891–2002)", Notices of the American Mathematical Society, vol. 49, no. 10 (November 2002) by Heinrich Reitberger
Theorems in algebraic topology | Vietoris–Begle mapping theorem | [
"Mathematics"
] | 241 | [
"Topology stubs",
"Topology",
"Theorems in algebraic topology",
"Theorems in topology"
] |
2,511,505 | https://en.wikipedia.org/wiki/Message%20in%20a%20bottle | A message in a bottle (abbrev. MIB) is a form of communication in which a message is sealed in a container (typically a bottle) and released into a conveyance medium (typically a body of water).
Messages in bottles have been used to send distress messages; in crowdsourced scientific studies of ocean currents; as memorial tributes; to send deceased loved ones' ashes on a final journey; to convey expedition reports; and to carry letters or reports from those believing themselves to be doomed. Invitations to prospective pen pals and letters to actual or imagined love interests have also been sent as messages in bottles.
The lore surrounding messages in bottles has often been of a romantic or poetic nature.
Use of the term "message in a bottle" has expanded to include metaphorical uses or uses beyond its traditional meaning as bottled messages released into oceans. The term has been applied to plaques on craft launched into outer space, interstellar radio messages, stationary time capsules, balloon mail, and containers storing medical information for use by emergency medical personnel.
With a growing awareness that bottles constitute waste that can harm the environment and marine life, environmentalists tend to favor biodegradable drift cards and wooden blocks.
History and uses
Bottled messages may date to about 310 B.C., in water current studies reputed to have been carried out by Greek philosopher Theophrastus. The Japanese medieval epic The Tale of the Heike records the story of an exiled poet who, in about 1177 A.D., launched wooden planks on which he had inscribed poems describing his plight. In the sixteenth century, Queen Elizabeth I reputedly created an official position of "Uncorker of Ocean Bottles", and—thinking some bottles might contain secrets from British spies or fleets—decreed that anyone else opening the bottles could face the death penalty. (However, it has been argued that this is a myth.) In the nineteenth century, literary works such as Edgar Allan Poe's 1833 "MS. Found in a Bottle" and Charles Dickens' 1860 "A Message from the Sea" inspired an enduring popular passion for sending bottled messages.
Scientific experiments involving drift objects—more generally called determinate drifters—provide information about currents and help researchers develop ocean circulation maps. For example, experiments conducted in the mid-1700s by Benjamin Franklin and others indicated the existence and approximate location of the Gulf Stream, with scientific confirmation following in the mid-1800s. Using a network of beachcomber informants, rear admiral Alexander Becher is believed to be the first (from 1808–1852) to study travel of so-called "bottle papers" around an ocean gyre (a large circulating current system). In the late 1800s, Albert I, Prince of Monaco determined that the Gulf Stream branched into the North Atlantic Drift and the Azores Current. In the 1890s, Scottish scientist T. Wemyss Fulton released floating bottles and wooden slips to chart North Sea surface currents for the first time. Releasing bottles designed to remain a short distance above the sea bed, British marine biologist George Parker Bidder III first proved in the early twentieth century that deep sea currents flowed from east to west in the North Sea and that bottom feeders prefer to move against the current.
The United States Coast and Geodetic Survey (USC&GS) used drift bottles from 1846 to 1966. More recently, technologies involving satellite tags, fixed current profilers and satellite communication have permitted more efficient analysis of ocean currents: at any given time, thousands of modern "drifters" transmit current position, temperature, velocity, etc., to satellites, thus avoiding conventional drift bottles' dependence on serendipitous finds and cooperation by conscientious citizens.
Drift bottle studies have provided a simple way to learn about non-tidal movement of waters containing eggs and larvae of commercially important fishes, for sharing among fisheries scientists and oceanographers. Such experiments simulate the travel of pollutants such as oil spills, study formation of ocean gyre garbage patches such as the Great Pacific Garbage Patch, and suggest travel paths of invasive species. Persistent currents are detected to allow ships to ride favorable currents and avoid opposing currents. Projected travel paths of navigation hazards, such as naval mines, advise safer shipping routes. Even in inland waterways, drifters wirelessly deliver real-time data on water quality, GPS location, and water velocity, for early warning against flash floods, measuring pollution run-off, and monitoring algal blooms.
Outside science, people have launched bottled messages to find pen pals, "bottle preachers" have sent "sermon bottles", propaganda-bearing bottles have been directed at foreign shores, and survivors have sent poetic loving tributes to departed loved ones or sent their cremated remains (ashes) on a final journey.
It was estimated in 2009 that since the mid-1900s, six million bottled messages had been released, including 500,000 from oceanographers.
Bottle design and recovery rates
Some bottles are ballasted with dry sand so that they float vertically at or near the ocean surface, and are less influenced by winds and breaking waves than other bottles that are purposely not ballasted. Wooden blocks float higher in the water and thus are more influenced by wind—a design specially suited for simulating travel paths of plastic waste that is less dense than glass containers.
An early-20th-century "bottom" (or seabed) drift bottle design by George Parker Bidder III involved weighting a bottle with a long copper wire that causes it to sink until the wire trails upon the sea bottom, at which time the bottle tends to remain a few inches above the bottom to be moved by the bottom current. A mushroom-shaped seabed drifter design has also been used. Seabed drifters are designed to be scooped up by a trawler or wash up on shore.
Water pressure pressing on the cork or other closure was thought to keep a bottle better sealed; some designs included a wooden stick to stop the cork from imploding. Vessels of less scientific designs have survived for extended periods, including a baby food bottle, a ginger beer bottle, and a 7-Up bottle.
A low percentage of bottles—thought by some to be less than 3 percent—are actually recovered, so they are released in large numbers, sometimes in the thousands. Reported recovery rates for large-scale scientific studies vary based on the ocean of release, and range from 11 percent (Woods Hole, 156,276 bottles from 1948 to 1962, Atlantic), to 10 percent (Woods Hole, 165,566 bottles from 1960 to 1970, Atlantic), to 3.4 percent (Scripps Institution, 148,384 bottles from 1954 to 1971, Pacific). Oceanographic drift card recovery rates have ranged from 50 percent if released in densely populated areas (North Sea, Puget Sound) to 1 percent in uninhabited areas (Antarctica). Recovery rates decrease as bottles are released further from shore, with oceanographer Curtis Ebbesmeyer developing a rule of thumb that bottles released more than 100 miles from shore have recovery rates below 10 percent, and "only a few percent" of those released more than 1,000 miles from shore are recovered. About 90 percent of marine debris washes up on less than 10 percent of the world's coastlines, favoring beaches perpendicular to the dominant ocean current. Objects with similar buoyancy characteristics tend to collect together.
A Scripps scientist said that marine organisms grow on the bottles, causing them to sink within eight to ten months unless washed ashore earlier. An unknown number are found but not reported.
Time and distance
Some drift bottles were not found for more than a century after being launched.
Floating objects may ride gyres (large circulating current systems) that are present in each ocean, and may be transferred from one ocean's gyre to another's. Further, objects may be sidetracked by wind, storms, countercurrents, and ocean current variation. Accordingly, drift bottles have traveled large distances, with drifts of 4,000 to 6,000 miles and more—sometimes traveling 100 miles per day—not uncommon. Bottles have traveled from the Beaufort Sea above northern Alaska and northwestern Canada to northern Europe; from Antarctica to Tasmania; from Mexico to the Philippines; from Canada's Labrador Sea and Baffin Bay to Irish, French, Scottish, and Norwegian beaches; from the Galapagos Islands to Australia; and from New Zealand to Spain (practically antipodes). Based on empirical data collected since 1901, a computer program called OSCURS (Ocean Surface Current Simulator) digitally simulates motion and timing of floating objects in and between ocean gyres.
Despite being launched substantial time periods before being found, some bottles have been found physically close to their original launch points, such as a message launched by two girls in 1915 and found in 2012 near Harsens Island, Michigan, U.S., and a ten-year-old girl's message launched into the Indian River Bay in Delaware, U.S. in 1971 and found in adjacent Delaware Seashore State Park in 2016.
Historical examples
Historical examples are listed in chronological order, based on year of recovery (when applicable):
Early examples
It is reputed that about 310 BC, Aristotle's protégé Greek philosopher Theophrastus used bottled messages to determine if the Mediterranean Sea was formed by the inflowing Atlantic Ocean.
When Christopher Columbus encountered a severe storm while returning from America, he is said to have written on parchment what he had found in the New World and requested it be forwarded to King Ferdinand and Queen Isabella, enclosed the parchment in a waxed cloth and placed it into a large wooden barrel to be cast into the sea. The communication was never found.
On April 15, 1841, the Wellington, W.C. Kendrick, Commander, bound "from Madras and Cape bound to London", launched a bottled message in the mid-Atlantic (at 13° N) "for the purpose of throwing some light on the ocean currents".
In 1847, from the brig Eagle laden with corn for the starving Irish in Waterford, Ireland, master Gregg dropped a bottled message with his location (42.40N, 54.10W) on March 27, requesting the find be sent to the Nautical Magazine (London) for publication to provide information on Atlantic currents. The bottle was retrieved on July 20 by Capt. Robert Oke on the revenue cutter Caledonia off the coast of Newfoundland (46.36N, 55.30W).
In 1856, a bottle was found on the Hebrides coast, Scotland, containing a note stating a ship, believed to be the SS Pacific, had sunk after a collision with an iceberg.
In February 1862, the Bashford Hall "sent afloat a message in a bottle describing her perilous state." However, she arrived safely at Falmouth, England on March 6, 1862.
After the January 11, 1866 sinking of the SS London in the Bay of Biscay, bottled messages—reported as "farewell messages from passengers... to friends and relatives in England"—were reportedly found in months following.
In 1875, ship's steward Van Hoydek and cabin boy Henry Trusillo of the British sailing ship Lennie released 24 bottled messages into the Bay of Biscay, telling of the murder by mutineers of their captain and officers. French authorities soon received the message, rescued Hoydek and Trusillo, and brought the mutineers to justice.
In 1876, on the remote Scottish island of St Kilda, freelance journalist John Sands and marooned Austrian sailors deployed two messages requesting the Austrian Consul rescue them with provisions. The messages, each enclosed in a cocoa tin attached to a sheep's bladder for flotation in an arrangement later called a "St. Kilda mail boat", were discovered in Orkney within nine days and in Ross-Shire after 22 days. Since that time, sending "St. Kilda mail" has become a recreational ritual for island visitors, the containers often riding the Gulf Stream to the British mainland, Shetland, Orkney and Scandinavia.
20th century
Message-bearing bottles from Titanic (1912) and Lusitania (1915) have been widely recounted as fact, but even before these bottles were found The Irish News stated in April 1912 that "very many" such stories turn out to be "cruel hoaxes".
In February 1916, when German Zeppelin L 19 experienced unfavorable weather, battle damage and multiple engine failure after attacking the British Midlands, its commander's last message to superiors and the crew's final letters to relatives were released into the North Sea to be found on a Swedish coast six months later. The written descriptions of how a British fishing trawler had refused to rescue the downed Zeppelin's crew—the trawler captain claiming he feared the German airmen would overpower his own unarmed crew—contributed to an enduring international controversy.
On December 23, 1927, Frances Wilson Grayson, niece of U.S. President Woodrow Wilson, was to attempt to be the first woman to make a transatlantic flight (non-solo). However, her Sikorsky amphibian plane disappeared en route from New York's Long Island to Harbour Grace, Newfoundland, and was never found. A bottled message was found in Salem Harbor, Massachusetts, in January 1929, the unauthenticated message reading, "1928, we are freezing. Gas leaked out. We are drifting off Grand Banks. Grayson."
In December 1928, a trapper working at the mouth of the Agawa River, Ontario, found a bottled note from Alice Bettridge, an assistant stewardess in her early twenties who initially survived the December 1927 sinking in a blizzard of the freighter Kamloops and, before she herself perished, wrote "I am the last one left alive, freezing and starving to death on Isle Royale in Lake Superior. I just want mom and dad to know my fate."
In 1929, a bottle that came to be known as the Flying Dutchman was released by a German marine science expedition with instructions for any finders to report the find but return the bottle to the sea. Found at several locations in succession, the Flying Dutchman traveled 16,000 miles from its release point in the southern Indian Ocean, to Cape Horn in South America, and back through the Indian Ocean to its last reported find in 1935 on the west coast of Australia.
On the night of March 28, 1941 in the last moments of the Battle of Cape Matapan, aboard the sinking cruiser Fiume, Italian sailor Francesco Chirico wrote a farewell message and threw it overboard in a bottle. Chirico's message, including a note, "Please give news to my dear mother that I die for the homeland...", was found in 1952 near Villasimius, Sardinia.
On January 7, 1943, a Schweppes lemonade bottle was found near Woolnorth in northwestern Tasmania, containing a penciled message thrown overboard on April 17, 1916, by Australian soldier John Oppy as his troop ship passed between Encounter Bay and Kangaroo Island, South Australia. Oppy himself survived to see the message returned.
On Christmas Day 1945, 21-year-old medical corpsman Frank Hayostek threw a message-laden aspirin bottle from his Liberty ship as it approached New York, the bottle being found eight months later near Dingle, County Kerry, by Irish milkmaid Breda O'Sullivan. Her mailed reply began a correspondence that inspired Hayostek to save money for airfare to visit O'Sullivan in 1952. Intense media attention for the "impossibly romantic story", including Time magazine stories, overshadowed their two-week visit, the two parting but corresponding until they married other people in 1958 and 1959. Media attention endured through the sixtieth anniversary of their meeting, 2–3 years after their deaths.
In 1955, a bottle from a 1903 German Antarctic expedition was found in New Zealand, about 3400 miles from its launch point between the Kerguelen Islands and Tasmania; however, hydrographers surmise it had drifted around the world many times.
In 1956, Swedish sailor Åke Viking sent a bottled message "To Someone Beautiful and Far Away" that reached a 17-year-old Sicilian girl named Paolina, sparking a correspondence that culminated in their marriage in 1958. The affair attracted so much attention that 4,000 people celebrated their wedding.
In 1959 Guinness Brewery launched 150,000 bottles into the Atlantic Ocean and Caribbean Sea in a promotional campaign. It was reported that Inuit hunters on Coats Island, in Canada's Hudson Bay, found 80 of the bottles.
In 1969, a Canadian scientific expedition dropped a message bottle through a hole in the drift ice at the approximate North Pole. The bottle was found in 1972 in northeast Iceland.
In May 1976, National Geographic World magazine released 1,000 bottles—250 per week—from the cruise ship Song of Norway, with instructions in five languages to fill out and return cards, in order to help map ocean currents.
In 1978, a Russian researcher discovered a bottled message in the Franz Josef Land, north of mainland Russia, that was deposited by Karl Weyprecht, leader of the 1872–1874 Austro-Hungarian North Pole Expedition which sought a Northeast Passage.
A message that an American couple released from a cruise ship approaching Hawaii in 1979 was found off Songkhla Beach, Thailand by a former South Vietnamese soldier and his family as they fled that country's communist regime by boat. A correspondence relationship began in 1983, and the couple worked with U.S. Immigration to help the Vietnamese family obtain refugee status in 1985 and move to the U.S.
In 1991, a bottled message found on Vancouver Island, Canada, urged the release of Chinese dissident Wei Jingsheng. According to oceanographer Curtis Ebbesmeyer, the bottle was likely released in 1980 near Quemoy Island, one of many Taiwan propaganda bottles launched toward mainland China.
In what was described as "perhaps the most famous message in a bottle love story", in March 1999 a green ginger beer bottle was dredged up by a fisherman off the Essex coast, the bottle containing an 84-year-old letter tossed into the English Channel on September 9, 1914, by British soldier Private Thomas Hughes days before he was killed in fighting in France. Hughes' letter, written for delivery to his wife who had died in 1979, was delivered instead to his then 86-year-old daughter in New Zealand by the fisherman himself, who with his own wife was flown to New Zealand at the expense of New Zealand Post.
21st century
A teardrop-shaped bottle was found in March 2002 on a beach in Kent, England, containing an unsigned letter from a French woman expressing her enduring grief over the death of her son at age 13. British author Karen Liebreich spent years of research, unsuccessfully trying to find the mother and eventually publishing a book called The Letter in the Bottle (2006). The book was published in French in 2009, sparking huge media coverage that alerted the mother for the first time that her letter had actually been discovered. Saying she initially felt violated by publication of her personal suffering, on condition of continued anonymity, she agreed to tell Liebreich the details of her son's 1981 death in a bicycle accident, her decades of suffering afterwards, and the story surrounding release of her letter from an English Channel ferry.
In May 2005, three days after eighty-eight migrants were abandoned by human smugglers on a disabled boat, the migrants tied an SOS-bearing bottle to a long line of a passing fishing vessel, whose captain alerted authorities to rescue the migrants.
On December 10, 2006, a bottom drift bottle, released on April 25, 1914, northeast of the Shetland Islands by the Marine Laboratory, Aberdeen, U.K., was recovered by a Shetland fisherman, after the bottle had spent over 92 years at sea.
In October 2011 in waters off Somalia, the crew of the pirated cargo ship Montecristo used a bottle with a flashing beacon to alert NATO ships that they had retreated to an armored room, permitting a military rescue operation to proceed with knowledge that the crew was not being held hostage.
In April 2012 a fisherman recovered a bottom drift bottle that had been released 98 years earlier, on June 10, 1914, one of 1,890 released by the Glasgow School of Navigation to test undercurrents in the seas around Scotland. The 2012 find occurred east of Shetland by the Copious, the same fishing vessel involved in the 2006 find.
In a 2013 promotional campaign, Norwegian soft drink company Solo released a 26-foot, 2.7-ton replica soda bottle outfitted with a customized camera, navigation lights, an automatic identification system, a radar reflector, and GPS tracking technology, all powered by solar panels. The craft drifted from Tenerife, Canary Islands, while broadcasting its location, but its electronics were stolen by pirates before its five-month trip terminated at Los Roques archipelago near Venezuela.
In April 2013, a kite-surfer near the mouth of Croatia's Neretva River recovered a bottle containing a message purporting to have been sent in 1985 from Nova Scotia to fulfill a promise by a "Jonathon" to write to one "Mary". The message received international media attention.
In March 2014, a fisherman on the Baltic Sea near Kiel recovered a drift bottle containing a Danish postcard dated May 17, 1913, and signed by a then-20-year-old baker's son named Richard Platz, who asked for it to be delivered to his Berlin address. Researchers located Platz's granddaughter, by then 62, and delivered the 101-year-old message to her, Platz himself having died in 1946.
An April 2015 find on the North Sea island of Amrum, Germany, of a 108-year-old bottle sent by the Marine Biological Association of the United Kingdom in Plymouth, was one of 1,020 released into the North Sea between 1904 and 1906 by former association president George Parker Bidder III.
In 2016, Cuban migrants who had fled Cuba in a homemade boat, launched a bottled SOS message complaining of their treatment while being detained for 42 days aboard a United States Coast Guard Cutter.
In late 2016, a barnacle-encrusted, kelp-tangled GoPro video camera was recovered, the camera's memory card preserving footage showing the prelude to the camera's being swept overboard four years earlier, and also recording its first two hours underwater off Fingal Bay, Australia.
In 2017, a small unmanned boat made by high school students and having solar panels, sensors and camera, drifted on an unexpected path from near Maine, to approach Spain and Portugal, then drift westward back into the Atlantic and northward to be discovered in Benbecula in the western isles of Scotland. The boat had a waterproof pod containing a chip that collected sensor data.
In July 2017, a Scottish widower seeking female companionship set 2,000 bottled messages adrift at various locations around the U.K., and though claiming he received responses from 50 women, ceased the practice in response to public complaints and an investigation by the Scottish Environment Protection Agency.
In January 2018, a couple walking on a beach in Western Australia discovered a bottled message that had been launched on June 12, 1886, from the German barque Paula, conducting drift bottle experiments for the German Naval Observatory. The message's authenticity was corroborated through the ship captain's original Meteorological Journal, and, at 131 years' duration, eclipsed the previous corroborated record duration of 108 years. The bottle's thick glass and its opening's narrow bore are thought to have protected the paper from the elements.
In the summer of 2018, a bottled typewritten message dated March 26, 1930, was discovered in the roof of the twelfth-century Goslar Cathedral in Goslar, Germany, signed by four roofers who bemoaned the economic state of that country. The bottle was discovered by a roofer who was the grandson of one of the signatories, who had been an 18-year-old roofing apprentice in 1930. Goslar's mayor replaced the bottle with a copy of the 1930 message, adding his own confidential message.
In May 2019, a Gatorade bottle with a four-page letter, written in Spanish, was found in Brown Bay, near Mount Gambier, South Australia. The letter had been sent from Caleta Córdoba, near Comodoro Rivadavia, Argentina by a mother and two children as a loving tribute to their husband and father who had died of a stroke a year earlier.
In June 2019, three hikers trapped above a waterfall on California's Arroyo Seco tributary released a Nalgene bottled SOS message that was quickly discovered a quarter mile (0.4 km) downstream, allowing them to be rescued by helicopter the following morning.
In late 2019, a bottled message launched on August 1, 1994, by 12-year-old Ryan Mead was found near the mouth of the Taramakau River, New Zealand, the find occurring mere months after Mead died at age 37 in a freak accident inhaling fumes while laying carpet.
Long-duration events
Table listing long-duration (>25-year) events involving messages in bottles:
s denotes stationary messages (placed on land, not in a body of water).
Popular perceptions
Besides interest in citizen science drift-bottle experiments, message-in-a-bottle lore has often been of a romantic or poetic nature. Such messages have been romanticized in literature, from Edgar Allan Poe's 1833 story "MS. Found in a Bottle" through Nicholas Sparks' 1998 Message in a Bottle. Clint Buffington, subject of the 2019 documentary short film The Tides That Bind / A Message in a Bottle Story, surmised in an interview with The Guardian that sending a bottled message expresses a hope to find connection in a fear-filled world. In Newsweek Ryan Bort recounted various historical messages as being cries for help, or "final, poetic words of resignation left behind for (an) indifferent sea", or from "lonely, lovelorn souls, searching for serendipity", or a search for "affirmation ... that comes from somewhere other than yourself". Bort described sending a message in a bottle as a romantic act that has "such a delicious potential for magic" or as "surrendering a part of yourself to something larger", concluding that "every message in a bottle is a prayer".
Finding a bottled message has generally been viewed positively, the finder of a 98-year-old message referring to his find as winning the lottery. However, intense media attention over a personal relationship that resulted from one woman's find, is said to have caused her to remark that had she known what would happen, she would have left the bottle on the beach. Another woman said she initially felt shocked and violated by publication of the personal suffering she had expressed in a bottled letter that she never expected would be found or read.
Similar methods using other media
The term "message in a bottle" has been applied to techniques of communication that do not literally involve a bottle or a water-based method of conveyance, such as the Europa Clipper plaque (2024), the Pioneer plaque (1972, 1973), the Voyager Golden Record (1977), and even radio-borne messages (see Cosmic Call, Teen Age Message, A Message from Earth), all directed into space.
Balloon mail involves sending undirected messages through the air rather than into bodies of water. For example, during the Prussian siege of Paris in 1870, about 2.5 million letters were sent by hot air balloon, the only way Parisians' letters could reach the rest of France.
Stationary time capsules have been termed "messages in a bottle", such as a 1935 message in a lemonade bottle correctly portending difficult times, which was found in 2016 by masons restoring damaged Portland stone at Southampton Guildhall. A geologist left a bottled message in 1959 in a cairn on isolated Ward Hunt Island (Canada, 83°N latitude), allowing its finders in 2013 to determine that a nearby glacier had retreated over 200 feet in the intervening 54 years. More durable examples of time capsules are the Westinghouse Time Capsules of the 1939 and 1964 New York World's Fairs, intended to be opened 5,000 years after their creation.
Prisoners from the Auschwitz concentration camp concealed bottles containing sketches and writings that were found after World War II.
Certain emergency medical services urge patients to record information describing their medical conditions, medications and drug allergies, emergency contacts, as well as advance healthcare directives for when the patients are incapacitated or suffer from dementia or learning difficulties, and place the record as a special "message in a bottle" stored in (conventionally) a refrigerator, where paramedics can quickly locate it.
Environmental issues
Plastic bottles are known to constitute plastic marine pollution, and eventually break down into smaller pieces because of ultraviolet light, salt degradation or wave action. Glass bottles can break into sharp-edged pieces, and bottle caps are ingested by sea birds.
Some agencies continue to use drift bottles into the 21st century, but with increased awareness that man-made floating items can harm marine life or constitute waste material, biodegradable drift cards and biodegradable wooden drifters with non-toxic ink are gaining favor.
See also
Beachcombing
Drifter (oceanography)
Earth's black box
Flotsam, jetsam, lagan and derelict
Friendly Floatees, plastic bath toys accidentally released in the Pacific in 1992
Ice rafting
Swallow float
Notes
References
Publications
Emergency communication
History of communication
Interstellar messages
Letters (message)
Maritime communication
Physical oceanography | Message in a bottle | [
"Physics"
] | 6,064 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
2,512,761 | https://en.wikipedia.org/wiki/Coherence%20%28statistics%29 | In probability theory and statistics, coherence can have several different meanings. Coherence in statistics is an indication of the quality of the information, either within a single data set, or between similar but not identical data sets. Fully coherent data are logically consistent and can be reliably combined for analysis.
In probability
When dealing with personal probability assessments, or supposed probabilities derived in nonstandard ways, coherence is a property of self-consistency across a whole set of such assessments.
In gambling strategy
One way of expressing such self-consistency is in terms of responses to various betting propositions, as described in relation to coherence (philosophical gambling strategy).
In Bayesian decision theory
The coherency principle in Bayesian decision theory is the assumption that subjective probabilities follow the ordinary rules/axioms of probability calculations (where the validity of these rules corresponds to the self-consistency just referred to) and thus that consistent decisions can be obtained from these probabilities.
In time series analysis
In time series analysis, and particularly in spectral analysis, it is used to describe the strength of association between two series where the possible dependence between the two series is not limited to simultaneous values but may include leading, lagged and smoothed relationships.
The concepts here are sometimes known as coherency and are essentially those set out for coherence in signal processing. However, note that the quantity coefficient of coherence may sometimes be called the squared coherence.
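In practice the magnitude-squared coherence between two series is usually estimated with Welch's method. A minimal Python sketch using SciPy's coherence estimator (the signal construction, seed, and parameter choices below are illustrative assumptions):

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0                        # sampling frequency, Hz
t = np.arange(0, 10, 1 / fs)       # 10 seconds of data
rng = np.random.default_rng(0)

# Two series sharing a lagged 50 Hz component plus independent noise.
common = np.sin(2 * np.pi * 50 * t)
x = common + rng.standard_normal(t.size)
y = np.roll(common, 25) + rng.standard_normal(t.size)  # lagged copy

# Welch-averaged magnitude-squared coherence: near 1 around 50 Hz and
# near 0 elsewhere, despite the lag between the two series.
f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
print(f[np.argmax(Cxy)])           # frequency of strongest association
```

Because coherence measures linear association at each frequency regardless of phase, the lag between the two series does not reduce the value near 50 Hz — illustrating the leading/lagged relationships mentioned above.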
References
Probability assessment
Bayesian statistics
Frequency-domain analysis
Statistical principles
"Physics"
] | 323 | [
"Frequency-domain analysis",
"Spectrum (physical sciences)"
] |
2,514,209 | https://en.wikipedia.org/wiki/Nucleoprotein | Nucleoproteins are proteins conjugated with nucleic acids (either DNA or RNA). Typical nucleoproteins include ribosomes, nucleosomes and viral nucleocapsid proteins.
Structures
Nucleoproteins tend to be positively charged, facilitating interaction with the negatively charged nucleic acid chains. The tertiary structures and biological functions of many nucleoproteins are understood. Important techniques for determining the structures of nucleoproteins include X-ray diffraction, nuclear magnetic resonance and cryo-electron microscopy.
Viruses
Virus genomes (either DNA or RNA) are extremely tightly packed into the viral capsid. Many viruses are therefore little more than an organised collection of nucleoproteins with their binding sites pointing inwards. Structurally characterised viral nucleoproteins include influenza, rabies, Ebola, Bunyamwera, Schmallenberg, Hazara, Crimean-Congo hemorrhagic fever, and Lassa.
Deoxyribonucleoproteins
A deoxyribonucleoprotein (DNP) is a complex of DNA and protein. The prototypical examples are nucleosomes, complexes in which genomic DNA is wrapped around clusters of eight histone proteins in eukaryotic cell nuclei to form chromatin. Protamines replace histones during spermatogenesis.
Functions
The most widespread deoxyribonucleoproteins are nucleosomes, in which the component is nuclear DNA. The proteins combined with DNA are histones and protamines; the resulting nucleoproteins are located in chromosomes. Thus, the entire chromosome, i.e. chromatin in eukaryotes consists of such nucleoproteins.
In eukaryotic cells, DNA is associated with about an equal mass of histone proteins in a highly condensed nucleoprotein complex called chromatin. Deoxyribonucleoproteins in this kind of complex interact to generate a multiprotein regulatory complex in which the intervening DNA is looped or wound. The deoxyribonucleoproteins participate in regulating DNA replication and transcription.
Deoxyribonucleoproteins are also involved in homologous recombination, a process for repairing DNA that appears to be nearly universal. A central intermediate step in this process is the interaction of multiple copies of a recombinase protein with single-stranded DNA to form a DNP filament. Recombinases employed in this process are produced by archaea (RadA recombinase), by bacteria (RecA recombinase) and by eukaryotes from yeast to humans (Rad51 and Dmc1 recombinases).
Ribonucleoproteins
A ribonucleoprotein (RNP) is a complex of ribonucleic acid and RNA-binding protein. These complexes play an integral part in a number of important biological functions that include transcription, translation, the regulation of gene expression, and the regulation of RNA metabolism. A few examples of RNPs include the ribosome, the enzyme telomerase, vault ribonucleoproteins, RNase P, hnRNP and small nuclear RNPs (snRNPs), which have been implicated in pre-mRNA splicing (spliceosome) and are among the main components of the nucleolus. Some viruses are simple ribonucleoproteins, containing only one molecule of RNA and a number of identical protein molecules. Others are ribonucleoprotein or deoxyribonucleoprotein complexes containing a number of different proteins, and, exceptionally, more nucleic acid molecules. Currently, over 2000 RNPs can be found in the RCSB Protein Data Bank (PDB). Furthermore, the Protein-RNA Interface Data Base (PRIDB) possesses a collection of information on RNA-protein interfaces based on data drawn from the PDB. Some common features of protein-RNA interfaces were deduced based on known structures. For example, RNPs in snRNPs have an RNA-binding motif in their RNA-binding protein. Aromatic amino acid residues in this motif result in stacking interactions with RNA. Lysine residues in the helical portion of RNA-binding proteins help to stabilize interactions with nucleic acids. This nucleic acid binding is strengthened by electrostatic attraction between the positive lysine side chains and the negative nucleic acid phosphate backbones. Additionally, it is possible to model RNPs computationally. Although computational methods of deducing RNP structures are less accurate than experimental methods, they provide a rough model of the structure which allows for predictions of the identity of significant amino acids and nucleotide residues. Such information helps in understanding the overall function of the RNP.

'RNP' can also refer to ribonucleoprotein particles. Ribonucleoprotein particles are distinct intracellular foci for post-transcriptional regulation. These particles play an important role in influenza A virus replication. The influenza viral genome is composed of eight ribonucleoprotein particles formed by a complex of negative-sense RNA bound to a viral nucleoprotein. Each RNP carries with it an RNA polymerase complex. When the nucleoprotein binds to the viral RNA, it is able to expose the nucleotide bases which allow the viral polymerase to transcribe RNA. At this point, once the virus enters a host cell it will be prepared to begin the process of replication.
Anti-RNP antibodies
Anti-RNP antibodies are autoantibodies associated with mixed connective tissue disease and are also detected in nearly 40% of Lupus erythematosus patients. Two types of anti-RNP antibodies are closely related to Sjögren's syndrome: SS-A (Ro) and SS-B (La). Autoantibodies against snRNP are called Anti-Smith antibodies and are specific for SLE. The presence of a significant level of anti-U1-RNP also serves a possible indicator of MCTD when detected in conjunction with several other factors.
Functions
The ribonucleoproteins play a role of protection. mRNAs never occur as free RNA molecules in the cell. They always associate with ribonucleoproteins and function as ribonucleoprotein complexes.
In the same way, the genomes of negative-strand RNA viruses never exist as free RNA molecule. The ribonucleoproteins protect their genomes from RNase. Nucleoproteins are often the major antigens for viruses because they have strain-specific and group-specific antigenic determinants.
See also
DNA-binding protein
RNA-binding protein
References
External links
PRIDB Protein-RNA Interface Database
Proteins | Nucleoprotein | [
"Chemistry"
] | 1,446 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
2,514,694 | https://en.wikipedia.org/wiki/Universal%20Transverse%20Mercator%20coordinate%20system | The Universal Transverse Mercator (UTM) is a map projection system for assigning coordinates to locations on the surface of the Earth. Like the traditional method of latitude and longitude, it is a horizontal position representation, which means it ignores altitude and treats the earth surface as a perfect ellipsoid. However, it differs from global latitude/longitude in that it divides earth into 60 zones and projects each to the plane as a basis for its coordinates. Specifying a location means specifying the zone and the x, y coordinate in that plane. The projection from spheroid to a UTM zone is some parameterization of the transverse Mercator projection. The parameters vary by nation or region or mapping system.
Most zones in UTM span 6 degrees of longitude, and each has a designated central meridian. The scale factor at the central meridian is specified to be 0.9996 of true scale for most UTM systems in use.
History
The National Oceanic and Atmospheric Administration (NOAA) website states that the system was developed by the United States Army Corps of Engineers, starting in the early 1940s. However, a series of aerial photos found in the Bundesarchiv-Militärarchiv (the military section of the German Federal Archives) apparently dating from 1943–1944 bear the inscription UTMREF followed by grid letters and digits, and projected according to the transverse Mercator, a finding that would indicate that something called the UTM Reference system was developed in the 1942–43 time frame by the Wehrmacht. It was probably carried out by the Abteilung für Luftbildwesen (Department for Aerial Photography). From 1947 onward the US Army employed a very similar system, but with the now-standard 0.9996 scale factor at the central meridian as opposed to the German 1.0. For areas within the contiguous United States the Clarke Ellipsoid of 1866 was used. For the remaining areas of Earth, including Hawaii, the International Ellipsoid was used. The World Geodetic System WGS84 ellipsoid is now generally used to model the Earth in the UTM coordinate system, which means current UTM northing at a given point can differ up to 200 meters from the old. For different geographic regions, other datum systems can be used.
Prior to the development of the Universal Transverse Mercator coordinate system, several European nations demonstrated the utility of grid-based conformal maps by mapping their territory during the interwar period. Calculating the distance between two points on these maps could be performed more easily in the field (using the Pythagorean theorem) than was possible using the trigonometric formulas required under the graticule-based system of latitude and longitude. In the post-war years, these concepts were extended into the Universal Transverse Mercator/Universal Polar Stereographic (UTM/UPS) coordinate system, which is a global (or universal) system of grid-based maps.
The transverse Mercator projection is a variant of the Mercator projection, which was originally developed by the Flemish geographer and cartographer Gerardus Mercator, in 1570. This projection is conformal, which means it preserves angles and therefore shapes across small regions. However, it distorts distance and area.
Definitions
UTM zone
The UTM system divides the Earth into 60 zones, each 6° of longitude in width. Zone 1 covers longitude 180° to 174° W; zone numbering increases eastward to zone 60, which covers longitude 174°E to 180°. The polar regions south of 80°S and north of 84°N are excluded, and instead covered by the universal polar stereographic (UPS) coordinate system.
Each of the 60 zones uses a transverse Mercator projection that can map a region of large north-south extent with low distortion. By using narrow zones of 6° of longitude (up to 668 km) in width, and reducing the scale factor along the central meridian to 0.9996 (a reduction of 1:2500), the amount of distortion is held below 1 part in 1,000 inside each zone. Distortion of scale increases to 1.0010 at the zone boundaries along the equator.
In each zone the scale factor of the central meridian reduces the diameter of the transverse cylinder to produce a secant projection with two standard lines, or lines of true scale, about 180 km on each side of, and about parallel to, the central meridian (Arc cos 0.9996 = 1.62° at the Equator). The scale is less than 1 inside the standard lines and greater than 1 outside them, but the overall distortion is minimized.
Exceptions
The UTM zones are uniform across the globe, except in two areas. On the southwest coast of Norway, zone 32 is extended 3° further west, and zone 31 is correspondingly shrunk to cover only open water. Also, in the region around Svalbard, the zones 32, 34 and 36 are not used, while zones 31 (9° wide), 33 (12° wide), 35 (12° wide), and 37 (9° wide) are extended to cover the gaps.
Overlapping grids
Distortion of scale increases in each UTM zone as the boundaries between the UTM zones are approached. However, it is often convenient or necessary to measure a series of locations on a single grid when some are located in two adjacent zones. Around the boundaries of large scale maps (1:100,000 or larger) coordinates for both adjoining UTM zones are usually printed within a minimum distance of 40 km on either side of a zone boundary. Ideally, the coordinates of each position should be measured on the grid for the zone in which they are located, but because the scale factor is still relatively small near zone boundaries, it is possible to overlap measurements into an adjoining zone for some distance when necessary.
Latitude bands
Latitude bands are not a part of UTM, but rather a part of the military grid reference system (MGRS). They are however sometimes included in UTM notation. Including latitude bands in UTM notation can lead to ambiguous coordinates—as the letter "S" either refers to the southern hemisphere or a latitude band in the northern hemisphere—and should therefore be avoided.
Locating a position using UTM coordinates
A position on the Earth is given by the UTM zone number and hemisphere designator and the easting and northing planar coordinate pair in that zone.
The point of origin of each UTM zone is the intersection of the equator and the zone's central meridian. To avoid dealing with negative numbers, a false easting of 500,000 meters is added to the central meridian. Thus a point that has an easting of 400,000 meters is about 100 km west of the central meridian. For most such points, the true distance would be slightly more than 100 km as measured on the surface of the Earth because of the distortion of the projection. UTM eastings range from about 167,000 meters to 833,000 meters at the equator.
In the northern hemisphere positions are measured northward from zero at the equator. The maximum "northing" value is about 9,300,000 meters at latitude 84 degrees North, the north end of the UTM zones. The southern hemisphere's northing at the equator is set at 10,000,000 meters. Northings decrease southward from these 10,000,000 meters to about 1,100,000 meters at 80 degrees South, the south end of the UTM zones. Therefore, no point has a negative northing value.
For example, the CN Tower is at 43°38′33″N 79°23′13″W, which is in UTM zone 17, and the grid position is 630084 m east, 4833438 m north. Two points in Zone 17 have these coordinates, one in the northern hemisphere and one in the south; the non-ambiguous format is to specify the full zone and hemisphere designator, that is, "17N 630084 4833438".
Simplified formulae
These formulae are a truncated version of the Transverse Mercator: flattening series, which were originally derived by Johann Heinrich Louis Krüger in 1912. They are accurate to around a millimeter within 3,000 km of the central meridian. Concise commentaries for their derivation have also been given.
The WGS 84 spatial reference system describes Earth as an oblate spheroid along the north-south axis with an equatorial radius of a = 6378.137 km and an inverse flattening of 1/f = 298.257223563. Let's take a point of latitude φ and of longitude λ and compute its UTM coordinates as well as point scale factor k and meridian convergence γ using a reference meridian of longitude λ0. By convention, in the northern hemisphere N0 = 0 km and in the southern hemisphere N0 = 10000 km. By convention also k0 = 0.9996 and E0 = 500 km.
In the following formulas, the distances are in kilometers. First, here are some preliminary values:

$$n = \frac{f}{2-f}, \qquad A = \frac{a}{1+n}\left(1 + \frac{n^2}{4} + \frac{n^4}{64} + \cdots\right),$$

$$\alpha_1 = \tfrac{1}{2}n - \tfrac{2}{3}n^2 + \tfrac{5}{16}n^3, \qquad \alpha_2 = \tfrac{13}{48}n^2 - \tfrac{3}{5}n^3, \qquad \alpha_3 = \tfrac{61}{240}n^3,$$

$$\beta_1 = \tfrac{1}{2}n - \tfrac{2}{3}n^2 + \tfrac{37}{96}n^3, \qquad \beta_2 = \tfrac{1}{48}n^2 + \tfrac{1}{15}n^3, \qquad \beta_3 = \tfrac{17}{480}n^3,$$

$$\delta_1 = 2n - \tfrac{2}{3}n^2 - 2n^3, \qquad \delta_2 = \tfrac{7}{3}n^2 - \tfrac{8}{5}n^3, \qquad \delta_3 = \tfrac{56}{15}n^3.$$
From latitude, longitude (φ, λ) to UTM coordinates (E, N)
First we compute some intermediate values:

$$t = \sinh\!\left(\operatorname{artanh}(\sin\varphi) - \frac{2\sqrt{n}}{1+n}\operatorname{artanh}\!\left(\frac{2\sqrt{n}}{1+n}\sin\varphi\right)\right),$$

$$\xi' = \arctan\!\left(\frac{t}{\cos(\lambda-\lambda_0)}\right), \qquad \eta' = \operatorname{artanh}\!\left(\frac{\sin(\lambda-\lambda_0)}{\sqrt{1+t^2}}\right),$$

$$\sigma = 1 + \sum_{j=1}^{3} 2j\,\alpha_j \cos(2j\xi')\cosh(2j\eta'), \qquad \tau = \sum_{j=1}^{3} 2j\,\alpha_j \sin(2j\xi')\sinh(2j\eta').$$
The final formulae are:

$$E = E_0 + k_0 A\left(\eta' + \sum_{j=1}^{3}\alpha_j \cos(2j\xi')\sinh(2j\eta')\right),$$

$$N = N_0 + k_0 A\left(\xi' + \sum_{j=1}^{3}\alpha_j \sin(2j\xi')\cosh(2j\eta')\right),$$

$$k = \frac{k_0 A}{a}\sqrt{\left(1 + \left(\frac{1-n}{1+n}\tan\varphi\right)^2\right)\frac{\sigma^2+\tau^2}{t^2 + \cos^2(\lambda-\lambda_0)}},$$

$$\gamma = \arctan\!\left(\frac{\tau\sqrt{1+t^2} + \sigma t\tan(\lambda-\lambda_0)}{\sigma\sqrt{1+t^2} - \tau t\tan(\lambda-\lambda_0)}\right).$$
where E is the easting, N is the northing, k is the point scale factor, and γ is the grid convergence.
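A compact Python sketch of the forward conversion, implementing the truncated series above (the function name, the zone handling, and the decimal CN Tower test coordinates are illustrative assumptions; production code should prefer a vetted library such as pyproj):

```python
import math

# WGS 84 parameters and UTM conventions (distances in kilometres).
a = 6378.137
f = 1 / 298.257223563
k0, E0 = 0.9996, 500.0

n = f / (2 - f)
A = a / (1 + n) * (1 + n**2 / 4 + n**4 / 64)
alpha = [n / 2 - 2 * n**2 / 3 + 5 * n**3 / 16,
         13 * n**2 / 48 - 3 * n**3 / 5,
         61 * n**3 / 240]

def latlon_to_utm(lat_deg, lon_deg):
    """Truncated Krueger series: (lat, lon) in degrees -> (zone, E, N) in km."""
    zone = int((lon_deg + 180) // 6) + 1
    lam0 = math.radians(zone * 6 - 183)      # central meridian of that zone
    phi = math.radians(lat_deg)
    dlam = math.radians(lon_deg) - lam0
    N0 = 0.0 if lat_deg >= 0 else 10000.0    # hemisphere convention

    s = 2 * math.sqrt(n) / (1 + n)
    t = math.sinh(math.atanh(math.sin(phi)) - s * math.atanh(s * math.sin(phi)))
    xi = math.atan2(t, math.cos(dlam))
    eta = math.atanh(math.sin(dlam) / math.hypot(1.0, t))

    E = E0 + k0 * A * (eta + sum(a_j * math.cos(2 * j * xi) * math.sinh(2 * j * eta)
                                 for j, a_j in enumerate(alpha, start=1)))
    N = N0 + k0 * A * (xi + sum(a_j * math.sin(2 * j * xi) * math.cosh(2 * j * eta)
                                for j, a_j in enumerate(alpha, start=1)))
    return zone, E, N

# CN Tower: expect roughly zone 17, E ~ 630.084 km, N ~ 4833.438 km.
print(latlon_to_utm(43.6426, -79.3871))
```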
From UTM coordinates (E, N, Zone, Hemi) to latitude, longitude (φ, λ)
Note: Hemi = +1 for Northern, Hemi = −1 for Southern
First let's compute some intermediate values:

$$\xi = \frac{N - N_0}{k_0 A}, \qquad \eta = \frac{E - E_0}{k_0 A},$$

$$\xi' = \xi - \sum_{j=1}^{3}\beta_j \sin(2j\xi)\cosh(2j\eta), \qquad \eta' = \eta - \sum_{j=1}^{3}\beta_j \cos(2j\xi)\sinh(2j\eta),$$

$$\chi = \arcsin\!\left(\frac{\sin\xi'}{\cosh\eta'}\right).$$
The final formulae are:

$$\varphi = \chi + \sum_{j=1}^{3}\delta_j \sin(2j\chi), \qquad \lambda = \lambda_0 + \arctan\!\left(\frac{\sinh\eta'}{\cos\xi'}\right).$$
See also
European Terrestrial Reference System 1989 (ETRS89)
Military grid reference system, a variant of UTM designed to simplify transfer of coordinates.
Modified transverse Mercator, a variation of UTM used in Canada with zones spaced 3° of longitude apart as opposed to UTM's 6°.
Transverse Mercator projection, the map projection used by UTM.
Universal Polar Stereographic coordinate system, used at the North and South poles.
Open Location Code, a hierarchical zoned system
MapCode, a hierarchical zoned system
References
External links
UTM coordinates in Google Maps.
Understanding the UTM projection.
Further reading
Geographic coordinate systems
Cartography
Geodesy | Universal Transverse Mercator coordinate system | [
"Mathematics"
] | 1,997 | [
"Geographic coordinate systems",
"Applied mathematics",
"Geodesy",
"Coordinate systems"
] |
10,361,630 | https://en.wikipedia.org/wiki/Affine%20Hecke%20algebra | In mathematics, an affine Hecke algebra is the algebra associated to an affine Weyl group, and can be used to prove Macdonald's constant term conjecture for Macdonald polynomials.
Definition
Let V be a Euclidean space of a finite dimension and Σ an affine root system on V. An affine Hecke algebra is a certain associative algebra that deforms the group algebra $\mathbb{C}[W]$ of the Weyl group W of Σ (the affine Weyl group). It is usually denoted by $H(\Sigma, q)$, where $q : \Sigma \to \mathbb{C}$ is a multiplicity function that plays the role of deformation parameter. For $q = 1$ the affine Hecke algebra $H(\Sigma, q)$ indeed reduces to $\mathbb{C}[W]$.
Generalizations
Ivan Cherednik introduced generalizations of affine Hecke algebras, the so-called double affine Hecke algebra (usually referred to as DAHA). Using this he was able to give a proof of Macdonald's constant term conjecture for Macdonald polynomials (building on work of Eric Opdam). Another main inspiration for Cherednik to consider the double affine Hecke algebra was the quantum KZ equations.
References
Algebras
Representation theory | Affine Hecke algebra | [
"Mathematics"
] | 216 | [
"Mathematical structures",
"Algebras",
"Fields of abstract algebra",
"Algebraic structures",
"Representation theory"
] |
10,364,657 | https://en.wikipedia.org/wiki/Coreset | In computational geometry, a coreset is a small set of points that approximates the shape of a larger point set, in the sense that applying some geometric measure to the two sets (such as their minimum bounding box volume) results in approximately equal numbers. Many natural geometric optimization problems have coresets that approximate an optimal solution to within a factor of 1 + ε, that can be found quickly (in linear time or near-linear time), and that have size bounded by a function of ε independent of the input size, where ε is an arbitrary positive number. When this is the case, one obtains a linear-time or near-linear time approximation scheme, based on the idea of finding a coreset and then applying an exact optimization algorithm to the coreset. Regardless of how slow the exact optimization algorithm is, for any fixed choice of ε, the running time of this approximation scheme will be O(1) plus the time to find the coreset.
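As a sketch of this pattern, the following Python code builds a small coreset for the minimum enclosing ball problem using the simple iterative farthest-point scheme attributed to Bădoiu and Clarkson; the function name, parameters, and test data are illustrative assumptions, not part of the article.

```python
import numpy as np

def meb_coreset(points, eps):
    """Greedy coreset for the minimum enclosing ball (Badoiu-Clarkson style).

    Repeatedly moves a tentative center toward the current farthest point
    with a shrinking step size; the points selected along the way form a
    coreset whose exact minimum enclosing ball is intended as a
    (1 + eps)-approximation for the full set.
    """
    pts = np.asarray(points, dtype=float)
    c = pts[0].copy()
    coreset_idx = {0}
    for i in range(1, int(np.ceil(1 / eps**2)) + 1):
        far = int(np.argmax(np.linalg.norm(pts - c, axis=1)))  # farthest point
        coreset_idx.add(far)
        c += (pts[far] - c) / (i + 1)   # step toward it, step size 1/(i+1)
    return pts[sorted(coreset_idx)], c

rng = np.random.default_rng(1)
data = rng.standard_normal((100_000, 2))
core, center = meb_coreset(data, eps=0.1)   # ~100 iterations, tiny coreset
```

The coreset size depends only on eps, so an exact (even slow) enclosing-ball solver can then be run on `core` alone, matching the approximation-scheme recipe described above.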
References
Computational geometry | Coreset | [
"Mathematics"
] | 190 | [
"Computational geometry",
"Computational mathematics"
] |
10,367,166 | https://en.wikipedia.org/wiki/Hydrometry | Hydrometry is the monitoring of the components of the hydrological cycle including rainfall, groundwater characteristics, as well as water quality and flow characteristics of surface waters. The etymology of the term hydrometry is from () 'water' + () 'measure'.
Hydrometrics is a topic in applied science and engineering dealing with hydrometry. It is an engineering discipline primarily related to hydrology but specializing in the measurement of components of the hydrological cycle, particularly the bulk quantification of water resources. It draws on several areas of traditional engineering practice, including hydrology, structures, control systems, computer science, data management and communications. The International Organization for Standardization formally defines hydrometry as the "science of the measurement of water including the methods, techniques and instrumentation used".
References
Hydrology | Hydrometry | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 178 | [
"Hydrology",
"Hydrology stubs",
"Environmental engineering"
] |
10,373,988 | https://en.wikipedia.org/wiki/NGC%204261 | NGC 4261 is an elliptical galaxy located around 100 million light-years away in the constellation Virgo. It was discovered April 13, 1784, by the German-born astronomer William Herschel. The galaxy is a member of its own somewhat meager galaxy group known as the NGC 4261 group, which is part of the Virgo Cluster.
The morphological classification of this galaxy is E2, indicating an elliptical galaxy with a 5:4 ratio between the major and minor axes. The stellar population of the galaxy is old, showing no indications of recent mergers or interactions with other members of its group. Large-scale isophotes of the galaxy are generally boxy in form, with no markers that would suggest a disruptive interaction within the last billion years. There is a dust lane along the north–south axis of the galaxy and a disk of dust around the nucleus.
Two prominent jets emanating from the nucleus can be observed in the radio band. It has an active galactic nucleus with a supermassive black hole that is somewhat offset from the center of the core and whose mass has been estimated from the kinematics of the nuclear gas disk. The galaxy is estimated to be about 60 thousand light-years across, and a jet emanating from it is estimated to span about 88 thousand light-years.
A Type Ia supernova event in this galaxy was reported on January 1, 2001. It was designated SN 2001A, marking the first supernova discovery of the year. The position of the event was west and north of the galactic nucleus. It reached magnitude 18.4 on December 15 of the previous year.
References
External links
Animation of the black hole in the center of NGC 4261
Elliptical galaxies
Supermassive black holes
Radio galaxies
Virgo Cluster
Virgo (constellation)
4261
07360
39659
270 | NGC 4261 | [
"Physics",
"Astronomy"
] | 370 | [
"Black holes",
"Unsolved problems in physics",
"Supermassive black holes",
"Virgo (constellation)",
"Constellations"
] |
10,375,543 | https://en.wikipedia.org/wiki/LLT%20polynomial | In mathematics, an LLT polynomial is one of a family of symmetric functions introduced as q-analogues of products of Schur functions.
J. Haglund, M. Haiman, and N. Loehr showed how to expand Macdonald polynomials in terms of LLT polynomials. Ian Grojnowski and Mark Haiman proved a positivity conjecture for LLT polynomials that combined with the previous result implies the Macdonald positivity conjecture for Macdonald polynomials, and extended the definition of LLT polynomials to arbitrary finite root systems.
References
I. Grojnowski, M. Haiman, Affine algebras and positivity (preprint available here)
J. Haglund, M. Haiman, N. Loehr A Combinatorial Formula for Macdonald Polynomials J. Amer. Math. Soc. 18 (2005), no. 3, 735–761
Alain Lascoux, Bernard Leclerc, and Jean-Yves Thibon Ribbon Tableaux, Hall-Littlewood Functions, Quantum Affine Algebras and Unipotent Varieties J. Math. Phys. 38 (1997), no. 2, 1041–1068.
Symmetric functions
Algebraic geometry
Algebraic combinatorics
Q-analogs
Polynomials | LLT polynomial | [
"Physics",
"Mathematics"
] | 258 | [
"Algebra",
"Polynomials",
"Combinatorics",
"Symmetric functions",
"Fields of abstract algebra",
"Algebraic geometry",
"Algebraic combinatorics",
"Q-analogs",
"Symmetry"
] |
4,645,173 | https://en.wikipedia.org/wiki/Bismuth-209 | Bismuth-209 (Bi) is an isotope of bismuth, with the longest known half-life of any radioisotope that undergoes α-decay (alpha decay). It has 83 protons and a magic number of 126 neutrons, and an atomic mass of 208.9803987 amu (atomic mass units). Primordial bismuth consists entirely of this isotope.
Decay properties
Bismuth-209 was long thought to have the heaviest stable nucleus of any element, but in 2003, a research team at the Institut d’Astrophysique Spatiale in Orsay, France, discovered that ²⁰⁹Bi undergoes alpha decay with a half-life of 20.1 exayears (2.01×10¹⁹, or 20.1 quintillion years), over 10⁹ times longer than the estimated age of the universe. The heaviest nucleus considered to be stable is now lead-208 and the heaviest stable monoisotopic element is gold (gold-197).
Theory had previously predicted a half-life of 4.6×10¹⁹ years. It had been suspected to be radioactive for a long time. The decay produces a 3.14 MeV alpha particle plus thallium-205.
Bismuth-209 forms ²⁰⁵Tl:

²⁰⁹Bi → ²⁰⁵Tl + ⁴He
If perturbed, it would join the lead–bismuth neutron capture cycle running from lead-206/207/208 to bismuth-209, despite the low capture cross sections involved. Even thallium-205, the decay product of bismuth-209, reverts to lead when fully ionized.
Due to its hugely long half-life, for nearly all applications ²⁰⁹Bi can be treated as non-radioactive. It is much less radioactive than human flesh, so it poses no real radiation hazard. Though ²⁰⁹Bi holds the half-life record for alpha decay, it does not have the longest known half-life of any nuclide; this distinction belongs to tellurium-128 (¹²⁸Te), with a half-life estimated at 7.7 × 10²⁴ years by double β-decay (double beta decay).
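How negligible this radioactivity is can be checked with a short calculation; the sketch below (an illustration, not part of the original article) derives the specific activity of bismuth-209 from the measured half-life.

import math

N_A = 6.02214076e23                          # Avogadro constant, 1/mol
half_life_s = 2.01e19 * 365.25 * 24 * 3600   # alpha-decay half-life in seconds

atoms_per_gram = N_A / 208.98                # molar mass of Bi-209 in g/mol
decay_const = math.log(2) / half_life_s      # lambda = ln 2 / t_half

activity = decay_const * atoms_per_gram      # decays per second per gram
print(f"{activity:.2e} Bq/g")                # ~3e-6 Bq/g: about a hundred decays per gram per year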
The half-life of ²⁰⁹Bi was confirmed in 2012 by an Italian team at Gran Sasso, who reported (2.01 ± 0.08)×10¹⁹ years. They also reported an even longer half-life for the alpha decay of ²⁰⁹Bi to the first excited state of ²⁰⁵Tl (at 204 keV), estimated at 1.66×10²¹ years. Even though this value is shorter than the half-life of ¹²⁸Te, both alpha decays of ²⁰⁹Bi hold the record for the thinnest natural line widths of any measurable physical excitation, estimated respectively at ΔΕ ~ 5.5×10⁻⁴³ eV and ΔΕ ~ 1.3×10⁻⁴⁴ eV by application of the uncertainty principle (double beta decay would produce energy lines only in neutrinoless transitions, which have not been observed yet).
Applications
Because all primordial bismuth is bismuth-209, bismuth-209 is used for all normal applications of bismuth, such as being used as a replacement for lead, in cosmetics, in paints, and in several medicines such as Pepto-Bismol. Alloys containing bismuth-209 such as bismuth bronze have been used for thousands of years.
Synthesis of other elements
²¹⁰Po can be manufactured by bombarding ²⁰⁹Bi with neutrons in a nuclear reactor. Only around 100 grams of ²¹⁰Po are produced each year. ²⁰⁸Po and ²⁰⁹Po can be made through the proton bombardment of ²⁰⁹Bi in a cyclotron. Astatine can also be produced by bombarding ²⁰⁹Bi with alpha particles. ²⁰⁹Bi has also been used to create traces of gold in nuclear reactors.
²⁰⁹Bi has been used as a target for the creation of several isotopes of superheavy elements such as dubnium, bohrium, meitnerium, roentgenium, and nihonium.
Formation
Primordial
In the red giant stars of the asymptotic giant branch, the s-process (slow process) produces bismuth-209 and polonium-210 by neutron capture as the heaviest nuclides to be formed, and the latter quickly decays. All elements heavier than bismuth are formed in the r-process, or rapid process, which occurs during the first fifteen minutes of supernovas. Bismuth-209 is also created during the r-process.
Radiogenic
Some ²⁰⁹Bi was created radiogenically from the neptunium decay chain. Neptunium-237 is an extinct radionuclide, but it can be found in traces in uranium ores because of neutron capture reactions. Americium-241, which is used in smoke detectors, decays to neptunium-237.
See also
Isotopes of bismuth
Primordial radionuclide
List of elements by stability of isotopes
Notes
References
Bismuth
Isotopes of bismuth | Bismuth-209 | [
"Chemistry"
] | 992 | [
"Isotopes of bismuth",
"Isotopes"
] |
4,647,025 | https://en.wikipedia.org/wiki/Central%20Electro%20Chemical%20Research%20Institute | Central Electro Chemical Research Institute is one of a chain of forty national laboratories under the aegis of the Council of Scientific and Industrial Research (CSIR) in New Delhi. Founded on 25 July 1948 at Karaikudi in Tamil Nadu, CECRI came into existence in January 1953.
Regional Centres
CECRI Chennai Unit, CSIR Complex, TTTI, Taramani, Chennai
CECRI Corrosion Research Centre, Mandapam Camp
References
External links
CECRI Homepage
CECRI B.Tech Alumni Association
C. Jaishankar "", The Hindu, 31 August 2008.
Chemical research institutes
Materials science institutes
Council of Scientific and Industrial Research
Metallurgical industry in India
Research institutes in Tamil Nadu
Education in Sivaganga district
Karaikudi
Research institutes established in 1948
1948 establishments in India | Central Electro Chemical Research Institute | [
"Chemistry",
"Materials_science"
] | 164 | [
"Metallurgical industry in India",
"Materials science organizations",
"Chemical research institutes",
"Materials science institutes",
"Metallurgical industry by country",
"Chemistry organization stubs"
] |
4,649,063 | https://en.wikipedia.org/wiki/Micromechanics | Micromechanics (or, more precisely, micromechanics of materials) is the analysis of heterogeneous materials including of composite, and anisotropic and orthotropic materials on the level of the individual constituents that constitute them and their interactions.
Aims of micromechanics of materials
Heterogeneous materials, such as composites, solid foams, polycrystals, or bone, consist of clearly distinguishable constituents (or phases) that show different mechanical and physical material properties. While the constituents can often be modeled as having isotropic behaviour, the microstructure characteristics (shape, orientation, varying volume fraction, etc.) of heterogeneous materials often lead to an anisotropic behaviour.
Anisotropic material models are available for linear elasticity. In the nonlinear regime, the modeling is often restricted to orthotropic material models which do not capture the physics for all heterogeneous materials. An important goal of micromechanics is predicting the anisotropic response of the heterogeneous material on the basis of the geometries and properties of the individual phases, a task known as homogenization.
Micromechanics allows predicting multi-axial responses that are often difficult to measure experimentally. A typical example is the out-of-plane properties for unidirectional composites.
The main advantage of micromechanics is to perform virtual testing in order to reduce the cost of an experimental campaign. Indeed, an experimental campaign on a heterogeneous material is often expensive and involves a large number of permutations: constituent material combinations; fiber and particle volume fractions; fiber and particle arrangements; and processing histories. Once the constituents' properties are known, all these permutations can be simulated through virtual testing using micromechanics.
There are several ways to obtain the material properties of each constituent: by identifying the behaviour based on molecular dynamics simulation results; by identifying the behaviour through an experimental campaign on each constituent; or by reverse engineering the properties through a reduced experimental campaign on the heterogeneous material. The latter option is typically used since some constituents are difficult to test, there are always some uncertainties about the real microstructure, and it allows the weaknesses of the micromechanics approach to be taken into account in the constituents' material properties. The obtained material models need to be validated through comparison with a different set of experimental data than the one used for the reverse engineering.
Generality on micromechanics
A key point of micromechanics of materials is the localization, which aims at evaluating the local (stress and strain) fields in the phases for given macroscopic load states, phase properties, and phase geometries. Such knowledge is especially important in understanding and describing material damage and failure.
Because most heterogeneous materials show a statistical rather than a deterministic arrangement of the constituents, the methods of micromechanics are typically based on the concept of the representative volume element (RVE). An RVE is understood to be a sub-volume of an inhomogeneous medium that is of sufficient size for providing all geometrical information necessary for obtaining an appropriate homogenized behavior.
Most methods in micromechanics of materials are based on continuum mechanics rather than on atomistic approaches such as nanomechanics or molecular dynamics. In addition to the mechanical responses of inhomogeneous materials, their thermal conduction behavior and related problems can be studied with analytical and numerical continuum methods. All these approaches may be subsumed under the name of "continuum micromechanics".
Analytical methods of continuum micromechanics
Voigt (1887) - Strains constant in composite, rule of mixtures for stiffness components (see the sketch after this list).
Reuss (1929) - Stresses constant in composite, rule of mixtures for compliance components.
Strength of Materials (SOM) - Longitudinally: strains constant in composite, stresses volume-additive. Transversely: stresses constant in composite, strains volume-additive.
Vanishing Fiber Diameter (VFD) - Combination of average stress and strain assumptions that can be visualized as each fiber having a vanishing diameter yet finite volume.
Composite Cylinder Assemblage (CCA) - Composite composed of cylindrical fibers surrounded by cylindrical matrix layer, cylindrical elasticity solution. Analogous method for macroscopically isotropic inhomogeneous materials: Composite Sphere Assemblage (CSA)
Hashin-Shtrikman Bounds - Provide bounds on the elastic moduli and tensors of transversally isotropic composites (reinforced, e.g., by aligned continuous fibers) and isotropic composites (reinforced, e.g., by randomly positioned particles).
Self-Consistent Schemes - Effective medium approximations based on Eshelby's elasticity solution for an inhomogeneity embedded in an infinite medium. Uses the material properties of the composite for the infinite medium.
Mori-Tanaka Method - Effective field approximation based on Eshelby's elasticity solution for inhomogeneity in infinite medium. As is typical for mean field micromechanics models, fourth-order concentration tensors relate the average stress or average strain tensors in inhomogeneities and matrix to the average macroscopic stress or strain tensor, respectively; inhomogeneity "feels" effective matrix fields, accounting for phase interaction effects in a collective, approximate way.
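As a minimal illustration of the first two entries in this list, the following sketch (with hypothetical material values, not taken from the article) computes the Voigt and Reuss rule-of-mixtures estimates of the effective Young's modulus for a two-phase composite; for any actual phase arrangement, the true effective modulus lies between these two classical bounds.

def voigt_modulus(moduli, fractions):
    # uniform-strain assumption: stiffnesses add by volume fraction
    return sum(E * v for E, v in zip(moduli, fractions))

def reuss_modulus(moduli, fractions):
    # uniform-stress assumption: compliances add by volume fraction
    return 1.0 / sum(v / E for E, v in zip(moduli, fractions))

# hypothetical glass-fiber/epoxy composite (Young's moduli in GPa)
E = [72.0, 3.5]   # fiber, matrix
v = [0.6, 0.4]    # volume fractions (must sum to 1)

print(f"Voigt (upper bound): {voigt_modulus(E, v):.1f} GPa")   # ~44.6 GPa
print(f"Reuss (lower bound): {reuss_modulus(E, v):.1f} GPa")   # ~8.2 GPa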
Numerical approaches to continuum micromechanics
Methods based on Finite Element Analysis (FEA)
Most such micromechanical methods use periodic homogenization, which approximates composites by periodic phase arrangements. A single repeating volume element is studied, appropriate boundary conditions being applied to extract the composite's macroscopic properties or responses. The Method of Macroscopic Degrees of Freedom can be used with commercial FE codes, whereas analysis based on asymptotic homogenization typically requires special-purpose codes.
The Variational Asymptotic Method for Unit Cell Homogenization (VAMUCH) and its development, Mechanics of Structural Genome (see below), are recent Finite Element based approaches for periodic homogenization. A general introduction to Computational Micromechanics can be found in Zohdi and Wriggers (2005).
In addition to studying periodic microstructures, embedding models and analysis using macro-homogeneous or mixed uniform boundary conditions can be carried out on the basis of FE models. Due to its high flexibility and efficiency, FEA at present is the most widely used numerical tool in continuum micromechanics, allowing, e.g., the handling of viscoelastic, elastoplastic and damage behavior.
Mechanics of Structure Genome (MSG)
A unified theory called mechanics of structure genome (MSG) has been introduced to treat structural modeling of anisotropic heterogeneous structures as special applications of micromechanics. Using MSG, it is possible to directly compute structural properties of a beam, plate, shell or 3D solid in terms of its microstructural details.
Generalized Method of Cells (GMC)
Explicitly considers fiber and matrix subcells from periodic repeating unit cell. Assumes 1st-order displacement field in subcells and imposes traction and displacement continuity. It was developed into the High-Fidelity GMC (HFGMC), which uses quadratic approximation for the displacement fields in the subcells.
Fast Fourier Transforms (FFT)
A further group of periodic homogenization models make use of Fast Fourier Transforms (FFT), e.g., for solving an equivalent to the Lippmann–Schwinger equation. FFT-based methods at present appear to provide the numerically most efficient approach to periodic homogenization of elastic materials.
Volume Elements
Ideally, the volume elements used in numerical approaches to continuum micromechanics should be sufficiently big to fully describe the statistics of the phase arrangement of the material considered, i.e., they should be Representative Volume Elements (RVEs).
In practice, smaller volume elements must typically be used due to limitations in available computational power. Such volume elements are often referred to as Statistical Volume Elements (SVEs). Ensemble averaging over a number of SVEs may be used for improving the approximations to the macroscopic responses.
See also
Micromechanics of Failure
Eshelby's inclusion
Representative elementary volume
Composite material
Metamaterial
Negative index metamaterials
John Eshelby
Rodney Hill
Zvi Hashin
References
Further reading
External links
Micromechanics of Composites (Wikiversity learning project)
Composite materials | Micromechanics | [
"Physics"
] | 1,753 | [
"Materials",
"Composite materials",
"Matter"
] |
4,649,186 | https://en.wikipedia.org/wiki/Finery%20forge | A finery forge is a forge used to produce wrought iron from pig iron by decarburization in a process called "fining" which involved liquifying cast iron in a fining hearth and removing carbon from the molten cast iron through oxidation. Finery forges were used as early as the 3rd century BC in China. The finery forge process was replaced by the puddling process and the roller mill, both developed by Henry Cort in 1783–4, but not becoming widespread until after 1800.
History
A finery forge was used to refine wrought iron at least by the 3rd century BC in ancient China, based on the earliest archaeological specimens of cast and pig iron fined into wrought iron and steel found at the early Han dynasty (202 BC – 220 AD) site at Tieshengguo. Pigott speculates that the finery forge existed in the previous Warring States period (403–221 BC), because wrought-iron items from China date to that period and there is no documented evidence of the bloomery ever being used in China. Wagner writes that in addition to the Han dynasty hearths believed to be fining hearths, there is also pictorial evidence of the fining hearth from a Shandong tomb mural dated 1st to 2nd century AD, as well as a hint of written evidence in the 4th century AD Daoist text Taiping Jing.
In Europe, the concept of the finery forge may have been evident as early as the 13th century. However, it was perhaps not capable of being used to fashion plate armor until the 15th century, as described in conjunction with the waterwheel-powered blast furnace by the Florentine Italian engineer Antonio Averlino (c. 1400 - 1469). The finery forge process began to be replaced in Europe from the late 18th century by others, of which puddling was the most successful, though some continued in use through the mid-19th century. The new methods used mineral fuel (coal or coke), and freed the iron industry from its dependence on wood to make charcoal.
Types
There were several types of finery forges.
German forge
The dominant type in Sweden was the German forge, which had a single hearth that was used for all processes.
Walloon forge
In Swedish Uppland north of Stockholm and certain adjacent provinces, another kind known as the Walloon forge was used, mainly for the production of a particularly pure kind of iron known as oregrounds iron, which was exported to England to make blister steel. Its purity depended on the use of ore from the Dannemora mine. The Walloon forge was virtually the only kind used in Great Britain.
The forge had two kinds of hearths, the finery to finish the product and the chafery to reheat the bloom that was the raw material of the process.
Lancashire forge
Process
In the finery, a workman known as the "finer" remelted pig iron so as to oxidise the carbon (and silicon). This produced a lump of iron (with some slag) known as a bloom. This was consolidated using a water-powered hammer (see trip hammer) and returned to the finery.
The next stages were undertaken by the "hammerman", who in some iron-making areas such as South Yorkshire was also known as the "stringsmith", and who heated his iron in a string-furnace. Because the bloom is highly porous, and its open spaces are full of slag, the hammerman's or stringsmith's tasks were to beat (work) the heated bloom with a hammer to drive the molten slag out of it, and then to draw the product out into a bar to produce what was known as anconies or bar iron. In order to do this, he had to reheat the iron, for which he used the chafery. The fuel used in the finery had to be charcoal (later coke), as impurities in any mineral fuel would affect the quality of the iron.
Slag
The waste product was allowed to cool in the hearth and removed as a "mosser". In the Furness district they were often left as the capstone of a wall, particularly near Spark Bridge and Nibthwaite forges.
References
Sources
H. Schubert, History of British Iron and Steel Industry c.450 BC to AD 1775 (1957), 272–291.
A. den Ouden, "The Production of Wrought Iron in Finery Hearths", Historical Metallurgy 15(2) (1981), 63–87 and 16(1) (1982), 29–33.
K-G. Hildebrand, Swedish Iron in the Seventeenth and Eighteenth Centuries: Export Industry Before Industrialization (Stockholm 1992).
P. King, 'The Cartel in Oregrounds Iron: Trading in the Raw Material for Steel During the 18th century", Journal of Industrial History 6 (2003), 25–48.
Steelmaking
Belgian inventions
Chinese inventions
German inventions
Iron
Metallurgical processes
Metalworking | Finery forge | [
"Chemistry",
"Materials_science"
] | 1,023 | [
"Metallurgical processes",
"Steelmaking",
"Metallurgy"
] |
4,649,290 | https://en.wikipedia.org/wiki/Fluorosilicate%20glass | Fluorosilicate glass (FSG) is a glass material composed primarily of fluorine, silicon and oxygen. It has a number of uses in industry and manufacturing, especially in semiconductor fabrication where it forms an insulating dielectric. The related fluorosilicate glass-ceramics have good mechanical and chemical properties.
Semiconductor fabrication
FSG has a small relative dielectric constant (low-κ dielectric) and is used between copper metal interconnect layers during the silicon integrated circuit fabrication process. It is widely used by semiconductor fabrication plants for geometries under 0.25 microns (μm). FSG is effectively a fluorine-containing silicon dioxide (κ = 3.5, while the κ of undoped silicon dioxide is 3.9). FSG is used by IBM. Intel started using Cu metal layers and FSG on its 1.2 GHz Pentium processor at the 130 nm complementary metal–oxide–semiconductor (CMOS) node. Taiwan Semiconductor Manufacturing Company (TSMC) combined FSG and copper in the Altera APEX.
Fluorosilicate glass-ceramics
Fluorosilicate glass-ceramics are crystalline or semi-crystalline solids formed by careful cooling of molten fluorosilicate glass. They have good mechanical properties.
Potassium fluororichterite-based materials are composed of tiny interlocked rod-shaped amphibole crystals; they have good resistance to chemicals and can be used in microwave ovens. Richterite glass-ceramics are used for high-performance tableware.
Fluorosilicate glass-ceramics with sheet structure, derived from mica, are strong and machinable. They find a number of uses and can be used in high vacuum and as dielectrics and precision ceramic components. A number of mica and mica-fluoroapatite glass-ceramics were studied as biomaterials.
See also
Fluoride glass
Glass
Silicate
References
Silicates
Glass compositions
Integrated circuits
Semiconductor fabrication materials
Biomaterials | Fluorosilicate glass | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 406 | [
"Biomaterials",
"Glass chemistry",
"Integrated circuits",
"Computer engineering",
"Glass compositions",
"Biotechnology stubs",
"Materials",
"Medical technology stubs",
"Matter",
"Medical technology"
] |
4,649,353 | https://en.wikipedia.org/wiki/Citrullination | Citrullination or deimination is the conversion of the amino acid arginine in a protein into the amino acid citrulline. Citrulline is not one of the 20 standard amino acids encoded by DNA in the genetic code. Instead, it is the result of a post-translational modification. Citrullination is distinct from the formation of the free amino acid citrulline as part of the urea cycle or as a byproduct of enzymes of the nitric oxide synthase family.
Enzymes called arginine deiminases (ADIs) catalyze the deimination of free arginine, while protein arginine deiminases or peptidylarginine deiminases (PADs) replace the primary ketimine group (>C=NH) by a ketone group (>C=O). Arginine is positively charged at a neutral pH, whereas citrulline has no net charge. This increases the hydrophobicity of the protein, which can lead to changes in protein folding, affecting the structure and function.
The immune system can attack citrullinated proteins, leading to autoimmune diseases such as rheumatoid arthritis (RA) and multiple sclerosis (MS). Fibrin and fibrinogen may be favored sites for arginine deimination within rheumatoid joints. Tests for the presence of anti-citrullinated protein (ACP) antibodies are highly specific (88–96%) for rheumatoid arthritis, about as sensitive as rheumatoid factor (70–78%) for diagnosis of RA, and are detectable even before the onset of clinical disease.
Citrullinated vimentin may be an autoantigen in RA and other autoimmune diseases, and is used to study RA. Moreover, antibodies against mutated citrullinated vimentin (MCV) may be useful for monitoring effects of RA therapy. An ELISA system utilises genetically modified citrullinated vimentin (MCV), a naturally occurring isoform of vimentin to improve the performance of the test.
In the reaction from arginine to citrulline, one of the terminal nitrogen atoms of the arginine side chain is replaced by an oxygen. Thus, arginine's positive charge (at physiological pH) is removed, altering the protein's tertiary structure. The reaction uses one water molecule and yields ammonia as a side-product:

peptidyl-arginine + H₂O → peptidyl-citrulline + NH₃
PAD subtypes
PADs are found in chordates but not in lower animals. In mammals five PAD isotypes – PAD1, PAD2, PAD3, PAD4 and PAD6 – have been found. PAD5 was thought to be a unique isotype in humans; however, it was shown to be homologous to PAD4. These isotypes differ in terms of their tissue and cellular distributions.
PAD1 expression has been detected in epidermis and the uterus, and it acts in citrullination of keratin and filaggrin, key components of keratinocytes.
PAD2 is expressed at a high level in the central nervous system (CNS), including the eye and brain, as well as skeletal muscle and the spleen. PAD transcripts have been found in the C57BL6/J mouse eyes as early as embryonic day 14.5. PAD2 has also been shown to interact with vimentin in skeletal muscle and macrophages, causing the filaments to disassemble, suggesting a role in apoptosis.
One of PAD2's target substrates is myelin basic protein (MBP). In the normal retina, deimination is found in nearly all the retinal layers, including the photoreceptors. Deimination has been also reported in neuronal cells, such as astrocytes, microglia and oligodendrocytes, Schwann cells and neurons. Methylation and phosphorylation of MBP are active during the process of myelinogenesis. In early CNS development of the embryo, MBP deimination plays a major role in myelin assembly. In adults, MBP deimination is found in demyelination diseases such as multiple sclerosis. MBP may affect different cell types in each case.
PAD3 expression has been linked to sheep wool modification. Citrullination of trichohyalin allows it to bind and cross-link keratin filaments, directing growth of the wool fiber.
PAD4 regulates gene expression through histone modifications. DNA is wrapped around histones, and the histone proteins can control DNA expression when chemical groups are added and removed. This process is known as post-translational processing or post-translational modification, because it takes place on the protein after the DNA is translated. The role of post-translational processing in gene regulation is the subject of the growing field of study, epigenetics. One modification mechanism is methylation. A methyl group (CH3) binds to an arginine on the histone protein, altering DNA binding to the histone and allowing transcription to take place. When PAD converts arginine to citrulline on a histone, it blocks further methylation of the histone, inhibiting transcription. The main isotype for this is PAD4, which deiminates arginines and/or monomethylated arginines on histones 3 and 4, turning off the effects of arginine methylation.
Autoimmune diseases
In rheumatoid arthritis and other autoimmune diseases, such as psoriatic arthritis, systemic lupus erythematosus and Sjögren's syndrome, autoantibodies often attack citrullinated proteins. The presence of anti-citrullinated protein antibody is a standard test for rheumatoid arthritis, and it is associated with more severe disease. Citrullinated proteins are also found in the cellular debris accompanying the destruction of cells in Alzheimer's disease, and after smoking cigarettes. So citrullination seems to be part of the mechanism that stimulates the immune system in autoimmune disease. However, citrullinated proteins can also be found in healthy colon mucosa.
The first comprehensive textbook on deimination was published in 2014.
Detection of citrullinated peptides and proteins
Citrullinated peptides and proteins can be detected using antibodies targeting the citrullinated residues, or detected using mass spectrometry-based proteomics technologies. Citrullination of arginine results in a monoisotopic mass increase of +0.984016 Da, which can be measured with mass spectrometry. The mass shift is close to the mass difference between the different peptide isotopes of +1.008665 which can be mistaken for a citrullinated peptide, especially on low-resolution instruments. However, this is less of an issue with modern high resolution/high accuracy mass spectrometers. Furthermore, the mass shift is identical to the mass shift caused by deamidation of the amino acid asparagine or glutamine side chain, which are common modifications.
Citrulline residues can be chemically modified with butanedione or by biotinylation prior to analysis, leading to a different mass shift, and this strategy has successfully been used to facilitate identification by mass spectrometry.
Another approach is to utilize the neutral loss of isocyanic acid (HNCO) from citrulline residues when submitted to low energy collision induced dissociation fragmentation in mass spectrometers. The loss causes a mass shift of −43.0058 Da, which can be utilized by mass spectrometers to predominantly select citrullinated peptides for fragmentation (sequencing).
Finally, the loss of positive charge at physiological pH caused by citrullination can be utilized. Prior to bottom-up proteomics analysis, proteins are enzymatically cleaved into peptides. Commonly the protease trypsin is used, which cleaves after the positively charged arginine and lysine residues. However, trypsin is unable to cleave after a citrulline residue which is neutral. A missed cleavage after a citrulline residue together with the correct mass shift can be used as a specific and sensitive marker for citrullination, and the strategy is compatible with standard bottom-up proteomics workflows.
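A minimal sketch of this last strategy is given below; it flags a tryptic peptide as a citrullination candidate when it carries an internal (missed-cleavage) arginine together with the characteristic +0.984016 Da mass shift. The function name, tolerance default and example peptides are illustrative assumptions rather than part of any established proteomics library.

CIT_SHIFT = 0.984016   # Da: monoisotopic mass increase of arginine -> citrulline

def citrullination_candidate(peptide, observed_shift, tol=0.005):
    # Trypsin does not cleave after citrulline, so a genuine site shows up
    # as an arginine *inside* the peptide rather than at its C-terminus.
    internal_arginine = "R" in peptide[:-1]
    return internal_arginine and abs(observed_shift - CIT_SHIFT) <= tol

print(citrullination_candidate("AVLRDTK", 0.9841))   # True: internal R plus the shift
print(citrullination_candidate("AVLDTK", 0.9841))    # False: no internal arginine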
References
Post-translational modification
Protein structure | Citrullination | [
"Chemistry"
] | 1,783 | [
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Structural biology",
"Protein structure"
] |
4,649,499 | https://en.wikipedia.org/wiki/Housewrap | Housewrap (or house wrap), also known by the genericized trademark homewrap (or home wrap), generally denotes a modern synthetic material used to protect buildings. Housewrap functions as a weather-resistant barrier, preventing rain or other forms of moisture from getting into the wall assembly while allowing water vapor to pass to the exterior. If moisture from either direction is allowed to build up within stud or cavity walls, mold and rot can set in and fiberglass or cellulose insulation will lose its R-value because of the heat-conducting moisture. House wrap may also serve as an air barrier if it is sealed carefully at seams.
Housewrap is a replacement for the older tar paper or asphalt saturated felt on walls. It is lighter in weight, available in much wider rolls, and both faster and easier to apply.
Major types
Nonwoven fabric
Micro-perforated, cross-lapped films
Films laminated to spunbond nonwovens (Typar or CertaWrap)
Films laminated or coated to polypropylene wovens
Supercalendered, wetlaid polyethylene fibril nonwoven ("Tyvek")
Installation
Housewrap is installed between the sheathing and the exterior siding, and is used behind vinyl, wood clapboards, shingles or shakes, brick, and other building materials. In all cases, the housewrap helps prevent water intrusion when moisture in any form gets past the siding and its trim and caulking.
As such, housewrap must be both water shedding and have a high moisture vapor transmission rate (MVTR) to be effective. It must also withstand abuse during installation, and because housewrap is often left exposed for some time before being clad over, it must hold up to wind and resist UV. Some new designs must be installed carefully or they will slightly rip or tear during installation, possibly allowing for water infiltration at the damaged areas. Being both thin and inelastic, most newer designs do not "self-seal" well against nails or staples like asphalt products do.
Properties
Typical MVTR ~200 grams/100 square inches/24 hours (or greater; e.g., Tyvek is ~400)
Typical weight ~2 ounces/square yard (varies greatly with manufacturer)
Typical width 9' (108") on a 3" core
References
Moisture protection
Building materials | Housewrap | [
"Physics",
"Engineering"
] | 495 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
4,649,761 | https://en.wikipedia.org/wiki/Iterated%20monodromy%20group | In geometric group theory and dynamical systems the iterated monodromy group of a covering map is a group describing the monodromy action of the fundamental group on all iterations of the covering. A single covering map between spaces is therefore used to create a tower of coverings, by placing the covering over itself repeatedly. In terms of the Galois theory of covering spaces, this construction on spaces is expected to correspond to a construction on groups. The iterated monodromy group provides this construction, and it is applied to encode the combinatorics and symbolic dynamics of the covering, and provide examples of self-similar groups.
Definition
The iterated monodromy group of f is the following quotient group:

IMG(f) := π₁(X, t) / ⋂ₙ Ker(Φₙ)

where:
f : X₁ → X is a covering of a path-connected and locally path-connected topological space X by its subset X₁,
π₁(X, t) is the fundamental group of X (with basepoint t) and
Φ = Φ₁ is the monodromy action for f.
Φₙ is the monodromy action of the n-th iteration of f, fⁿ : Xₙ → X.
Action
The iterated monodromy group acts by automorphisms on the rooted tree of preimages

T = ⨆ₙ f⁻ⁿ(t),

where a vertex z ∈ f⁻ⁿ(t) is connected by an edge with f(z) ∈ f⁻⁽ⁿ⁻¹⁾(t).
Examples
Iterated monodromy groups of rational functions
Let:
f be a complex rational function and
P_f be the union of the forward orbits of its critical points (the post-critical set).
If P_f is finite (or has a finite set of accumulation points), then the iterated monodromy group of f is the iterated monodromy group of the covering f : Ĉ ∖ f⁻¹(P_f) → Ĉ ∖ P_f, where Ĉ is the Riemann sphere.
Iterated monodromy groups of rational functions usually have exotic properties from the point of view of classical group theory. Most of them are infinitely presented, many have intermediate growth.
IMG of polynomials
The Basilica group is the iterated monodromy group of the polynomial z² − 1.
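Concretely, the Basilica group acts on the binary rooted tree, whose vertices are the finite binary words. In one common convention the generators satisfy the wreath recursion a = σ(b, 1) and b = (a, 1), where σ swaps the two subtrees; conventions differ between sources, so the recursion below should be read as an illustrative assumption. A minimal sketch of the resulting action on binary words:

def act(g, word):
    # Apply generator g ('a', 'b') or the identity 'e' to a binary word,
    # using the wreath recursion a = sigma (b, 1), b = (a, 1).
    if not word or g == "e":
        return word
    x, rest = word[0], word[1:]
    if g == "a":                          # a swaps the top letter and
        if x == "0":
            return "1" + act("b", rest)   # acts by b below the 0-branch
        return "0" + rest                 # acts trivially below the 1-branch
    if g == "b":                          # b fixes the top letter and
        if x == "0":
            return "0" + act("a", rest)   # acts by a below the 0-branch
        return "1" + rest
    raise ValueError(f"unknown generator {g!r}")

print(act("a", "0010"))   # -> '1000'
print(act("b", "0010"))   # -> '0110'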
See also
Growth rate (group theory)
Amenable group
Complex dynamics
Julia set
References
Volodymyr Nekrashevych, Self-Similar Groups, Mathematical Surveys and Monographs Vol. 117, Amer. Math. Soc., Providence, RI, 2005; .
Kevin M. Pilgrim, Combinations of Complex Dynamical Systems, Springer-Verlag, Berlin, 2003; .
External links
arXiv.org - Iterated Monodromy Group - preprints about the Iterated Monodromy Group.
Laurent Bartholdi's page - Movies illustrating the Dehn twists about a Julia set.
mathworld.wolfram.com - The Monodromy Group page.
Geometric group theory
Homotopy theory
Complex analysis | Iterated monodromy group | [
"Physics"
] | 508 | [
"Geometric group theory",
"Group actions",
"Symmetry"
] |
4,652,142 | https://en.wikipedia.org/wiki/Fusible%20plug | A fusible plug is a threaded cylinder of metal, usually bronze, brass or gunmetal, with a tapered hole drilled completely through its length. This hole is sealed with a metal of low melting point that flows away if a predetermined high temperature is reached. The initial use of the fusible plug was as a safety precaution against low water levels in steam engine boilers, but later applications extended its use to other closed vessels, such as air conditioning systems and tanks for transporting corrosive or liquefied petroleum gases.
Purpose
A fusible plug operates as a safety valve when dangerous temperatures, rather than dangerous pressures, are reached in a closed vessel. In steam boilers the fusible plug is screwed into the crown sheet (the top plate) of the firebox, typically extending a short distance into the water space above it. Its purpose is to act as a last-resort safety device in the event of the water level falling dangerously low: when the top of the plug is out of the water it overheats, the low-melting-point core melts away and the resulting noisy release of steam into the firebox serves to warn the operators of the danger before the top of the firebox itself runs completely dry, which could result in catastrophic failure of the boiler. The flue gases in a steam engine firebox can reach temperatures at which copper, from which historically most fireboxes were made, softens to a state which can no longer sustain the boiler pressure, and a severe explosion will result if water is not put into the boiler quickly and the fire removed or extinguished. The hole through the plug is too small to have any great effect in reducing the steam pressure and the small amount of water, if any, that passes through it is not expected to have any great impact in quenching the fire.
History
The device was invented in 1803 by Richard Trevithick, the proponent of high-pressure (as opposed to atmospheric) steam engines, in consequence of an explosion in one of his new boilers. His detractors were eager to denounce the whole concept of high-pressure steam, but Trevithick proved that the accident happened because his fireman had neglected to keep the boiler full of water. He publicised his invention widely, without patent, to counter these criticisms.
Experiments
Experiments conducted by the Franklin Institute, Philadelphia, in the 1830s had initially cast doubt on the practice of adding water as soon as the escape of steam through the device was noted. A steam boiler was fitted with a small observation window of glass and heated beyond its normal operating temperature with the water level below the top of the firebox. When water was added it was found that the pressure rose suddenly and the observation glass shattered. The report concluded that the high temperature of the metal had vaporised the added water too quickly and that an explosion was the inevitable result.
It was not until 1852 that this assumption was challenged: Thomas Redmond, one of the Institute's inspectors, specifically ruled out this theory in his investigation into the boiler explosion on the steamship Redstone on the Ohio River on 3 April that year. A 1907 investigation in Wales came to a similar conclusion: a steam locomotive belonging to the Rhymney Railway was inadvertently sent out with its safety valves wrongly assembled. The pressure in the boiler built up to the extent that the injectors failed; the crown sheet became uncovered, was weakened by the heat of the fire and violently blew apart. The investigation, led by Colonel Druitt of the Railway Inspectorate, dismissed the theory that the enginemen had succeeded in starting the injectors and that the sudden flood of cold water had caused such a generation of steam that the boiler burst. He quoted the results of experiments by the Manchester Steam Users' Association, a national boiler certification and insurance body, that proved that the weight of copper present (considered with its specific heat) was insufficient to generate enough steam to raise the boiler pressure at all. Indeed, the addition of cold water caused the pressure to fall. From then on it was accepted that the correct action in the event of the operation of the fusible plug was to add water.
Cored fusible plugs
The simple solid plug is filled with a slug of low-melting-point alloy. When this melts, it first melts as a narrow channel through the plug, and steam and water immediately begin to escape through this. As the escaping water will have a maximum temperature lower than tin's melting point of 410 °F, the water jet may act to "freeze" the plug. While water continues to escape from the plug, the plug may fail to melt completely and so only a minor jet of steam is noticed, which may be overlooked.
To avoid this, the cored fusible plug was developed in the 1860s to give a wide opening as soon as the alloy softens. This has a solid brass or bronze centre, soldered into place by a thick layer of the low-melting-point alloy. When overheated, the plug does not release any steam or water until the alloy melts sufficiently to release the centre plug. The plug now fails dramatically, opening its entire bore immediately. This full-bore jet is then more likely to be noticed.
Unnoticed melted plugs
A drawback to the device was found on 7 March 1948, when the firebox crown sheet of Princess Alexandra, a Coronation Pacific of the London, Midland and Scottish Railway, failed while hauling a passenger train from Glasgow to London. Enquiries established that both water gauges were defective and on a journey earlier that day one or both of the fusible plugs had melted, but this had gone unnoticed by the engine crew because of the strong draught carrying the escaping steam away from them.
Maintenance
Alloy composition
Investigation showed the importance of the alloy on plug ageing. Alloys were initially favoured as they offered lower eutectic melting points than pure metals. It was found though that alloys aged poorly and could encourage the development of a matrix of oxides on the water surface of the plug, this matrix having a dangerously high melting point that made the plug inoperable. In 1888 the US Steamboat Inspection Service made a requirement that plugs were to be made of pure banca tin and replaced annually. This avoided lead and also zinc contamination. Zinc contamination was regarded as so serious a problem that the case of the plugs was also changed from brass (a copper-zinc alloy) to a zinc-free copper-tin bronze, to avoid the risk of zinc migrating from the housing into the alloy plug.
Plug ageing
In the 1920s, investigations by the U.S. Bureau of Standards, in conjunction with the Steamboat Inspection Service, found that in use encrustation and oxidation above the fusible core can increase the melting point of the device and prevent it from working when needed: melting points well in excess of the design value have been found in used examples. Typical current practice in locomotives requires new plugs to be inspected after "15 to 30 working days (dependent upon water condition and use of locomotive) or at least once every six months," depending on the boiler operating pressure and temperature.
Other applications
The principle of the fusible plug is also applied to the transport of liquefied petroleum gases, where fusible plugs (or small, exposed patches of the containers' lining membrane) are designed to melt or become porous if too high a temperature is reached: a controlled release at a predetermined temperature is preferable to an explosive release (a "BLEVE") at a higher temperature. Corrosive gas containers, such as those used for liquid chlorine, are fitted with one or more fusible plugs set to operate at a similarly predetermined temperature.
Fusible plugs are common in aircraft wheels, typically in larger or high-performance aircraft. The very large thermal loads imposed by abnormal landing and braking conditions (such as a high-speed rejected takeoff, where an aircraft heavy with fuel must brake hard from a very high speed to a stop in a relatively short distance) can cause the already high pressure in the tyres to rise to the point that the tyre might burst, so fusible plugs are used as a relief mechanism. The vented gas may be directed to cool the braking surfaces.
Fusible plugs are sometimes fitted to the receivers of air compressors as a precaution against the ignition of any lubricating oil vapour that might be present. Should the action of the compressor heat the air above a safe temperature the core will melt and release the pressure.
Automobile air conditioning systems were commonly fitted with fusible plugs. However, due to concerns about the environmental effects of any released refrigerant gas, this function has been taken over by an electrical switch.
A patented type of fireproof safe uses a fusible plug to douse its contents with water if the external temperature gets too high; the patent was published in 1867.
Fusible plugs enhance the safety of liquid fluoride thorium reactors by preventing overheating of the reactor. In the event that a limit temperature is reached, a fusible plug placed at the bottom of the reactor melts, allowing the fluid reactor fuel to drain into underground storage tanks, preventing nuclear meltdown.
See also
Boiler explosion
References
Steam boiler components
Pressure vessels
Locomotive parts
Safety equipment
Steam locomotive technologies | Fusible plug | [
"Physics",
"Chemistry",
"Engineering"
] | 1,890 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
4,653,057 | https://en.wikipedia.org/wiki/Strehl%20ratio | The Strehl ratio is a measure of the quality of optical image formation, originally proposed by Karl Strehl, after whom the term is named. Used variously in situations where optical resolution is compromised due to lens aberrations or due to imaging through the turbulent atmosphere, the Strehl ratio has a value between 0 and 1, with a hypothetical, perfectly unaberrated optical system having a Strehl ratio of 1.
Mathematical definition
The Strehl ratio is frequently defined as the ratio of the peak aberrated image intensity from a point source compared to the maximum attainable intensity using an ideal optical system limited only by diffraction over the system's aperture. It is also often expressed in terms not of the peak intensity but the intensity at the image center (intersection of the optical axis with the focal plane) due to an on-axis source; in most important cases these definitions result in a very similar figure (or identical figure, when the point of peak intensity must be exactly at the center due to symmetry). Using the latter definition, the Strehl ratio can be computed in terms of the wavefront error ΔΦ(x,y): the offset of the wavefront due to an on-axis point source, compared to that produced by an ideal focusing system over the aperture A(x,y). Using Fraunhofer diffraction theory, one computes the wave amplitude using the Fourier transform of the aberrated pupil function evaluated at 0,0 (center of the image plane) where the phase factors of the Fourier transform formula are reduced to unity. Since the Strehl ratio refers to intensity, it is found from the squared magnitude of that amplitude:

S = |⟨e^{iφ(x,y)}⟩|²

where i is the imaginary unit, φ(x,y) = 2πΔΦ(x,y)/λ is the phase error over the aperture at wavelength λ, and the average of the complex quantity inside the brackets is taken over the aperture A(x,y).
The Strehl ratio can be estimated using only the statistics of the phase deviation φ, according to a formula rediscovered by Mahajan but known long before in antenna theory as the Ruze formula:

S ≈ e^{−σ²}

where sigma (σ) is the root mean square deviation over the aperture of the wavefront phase:

σ² = ⟨(φ − ⟨φ⟩)²⟩.
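The exact definition and this approximation are easy to compare numerically. The sketch below (using a small, randomly generated wavefront purely for illustration) evaluates S = |⟨e^{iφ}⟩|² over a circular aperture and checks it against e^{−σ²}.

import numpy as np

# sample the pupil on a grid and keep a circular aperture
n = 512
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
aperture = x**2 + y**2 <= 1.0

# a smooth, small random wavefront phase error in radians (illustrative only)
rng = np.random.default_rng(1)
phi = 0.3 * (np.sin(3 * x + rng.normal()) + np.cos(4 * y + rng.normal()))
phi = phi[aperture]

strehl_exact = np.abs(np.mean(np.exp(1j * phi)))**2
sigma2 = np.var(phi)              # variance of the phase over the aperture
strehl_ruze = np.exp(-sigma2)     # Ruze / extended Marechal approximation

print(f"exact        : {strehl_exact:.4f}")
print(f"exp(-sigma^2): {strehl_ruze:.4f}")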
The Airy disk
Due to diffraction, even a focusing system which is perfect according to geometrical optics will have a limited spatial resolution. In the usual case of a uniform circular aperture, the point spread function (PSF) which describes the image formed from an object with no spatial extent (a "point source"), is given by the Airy disk. For a circular aperture, the peak intensity found at the center of the Airy disk defines the point source image intensity required for a Strehl ratio of unity. An imperfect optical system using the same physical aperture will generally produce a broader PSF in which the peak intensity is reduced according to the factor given by the Strehl ratio. An optical system with only minor imperfections in this sense may be referred to as "diffraction limited" as its PSF closely resembles the Airy disk; a Strehl ratio of greater than 0.8 is frequently cited as a criterion for the use of that designation.
Note that for a given aperture the size of the Airy disk grows linearly with the wavelength λ, and consequently the peak intensity falls according to 1/λ², so that the reference point for unity Strehl ratio is changed. Typically, as wavelength is increased, an imperfect optical system will have a broader PSF with a decreased peak intensity. However, the peak intensity of the reference Airy disk would have decreased even more at that longer wavelength, resulting in a better Strehl ratio at longer wavelengths (typically) even though the actual image resolution is poorer.
Usage
The ratio is commonly used to assess the quality of astronomical seeing in the presence of atmospheric turbulence and assess the performance of any adaptive optical correction system. It is also used for the selection of short exposure images in the lucky imaging
method.
In industry, the Strehl ratio has become a popular way to summarize the performance of an optical design because it gives the performance of a real system, of finite cost and complexity, relative to a theoretically perfect system, which would be infinitely expensive and complex to build and would still have a finite point spread function. It provides a simple method to decide whether a system with a Strehl ratio of, for example, 0.95 is good enough, or whether twice as much should be spent to try to get a Strehl ratio of perhaps 0.97 or 0.98.
Limitations
Characterizing the form of the point-spread function by a single number, as the Strehl ratio does, will be meaningful and sensible only if the point-spread function is little distorted from its ideal (aberration-free) form, which will be true for a well-corrected system that operates close to the diffraction limit. That includes most telescopes and microscopes, but excludes most photographic systems, for example. The Strehl ratio has been linked via the work of André Maréchal to an aberration tolerancing theory which is very useful to designers of well-corrected optical systems, allowing a meaningful link between the aberrations of geometrical optics and the diffraction theory of physical optics. A significant shortcoming of the Strehl ratio as a method of image assessment is that, although it is relatively easy to calculate for an optical design prescription on paper, it is normally difficult to measure for a real optical system, not least because the theoretical maximum peak intensity is not readily available.
See also
Circle of confusion
Fraunhofer diffraction
Fraunhofer diffraction (mathematics)
Huygens–Fresnel principle
Optical transfer function
References
External links
Discussion page R.F. Royce' explanation of Strehl ratio in lay terms
Strehl meter W.M. Keck Observatory Strehl calculator page
Definition page Eric Weisstein's World of Physics
Strehl ratio Telescope Optics Net practical explanation of Strehl ratio for amateur telescope makers
Astronomical imaging
Optical quantities
Engineering ratios | Strehl ratio | [
"Physics",
"Mathematics",
"Engineering"
] | 1,230 | [
"Physical quantities",
"Metrics",
"Engineering ratios",
"Quantity",
"Optical quantities"
] |
18,414,743 | https://en.wikipedia.org/wiki/Sediment%20control | A sediment control is a practice or device designed to keep eroded soil on a construction site, so that it does not wash off and cause water pollution to a nearby stream, river, lake, or sea. Sediment controls are usually employed together with erosion controls, which are designed to prevent or minimize erosion and thus reduce the need for sediment controls. Sediment controls are generally designed to be temporary measures, however, some can be used for storm water management purposes.
Commonly used sediment controls
Check dam
Diversion dike (temporary)
Fiber rolls
Gabions
Gel Flocculant
Siltbusters
Sand bag barrier
Sediment basin
Sediment trap
Silt fence
Storm drain inlet protection
Straw bale barrier
Turbidity curtain
Active treatment systems
Treatment of silt-impacted water using equipment and chemical addition, commonly called an active treatment system, is a relatively new form of sediment control for the construction industry. These systems are designed to reduce the Total Suspended Solids (TSS) entering nearby water bodies where silt pollution can be of environmental concern. Sediment-laden stormwater is collected and/or pumped, and a chemical flocculant is added to aid in clarification. Types of flocculant include:
Natural Polymers: Derived from natural sources, such as starch, chitosan, and guar gum.
Synthetic Polymers: These include polyacrylamide and their derivatives, which can be tailored to specific water treatment needs. They are effective at lower dosages compared to inorganic flocculants.
Inorganic Flocculants: aluminum sulfate (alum), ferric chloride, ferric sulfate, polyaluminum chloride
Extreme caution should be observed when using cationic flocculants such as chitosan, positively charged polyacrylamide, or polyDADMAC, which cause hypoxia in fish. The use of anionic (negatively charged) flocculants is best practice on open-loop treatment systems to ensure the protection of aquatic habitat, fish and invertebrates.
The water is then either filtered (sand or cartridge filter) or settled (lamella clarifier or weir tank) prior to discharge. Chemical sediment control is currently used on some construction sites around the United States and Europe, typically larger sites where there is a high potential for damage to nearby streams. Another active treatment system design uses electrocoagulation to flocculate suspended particles in the stormwater, followed by a filtration stage. Active treatment systems require technical expertise to operate effectively as multiple types of equipment are utilized.
Passive treatment systems
Chemical treatment of water to remove sediment may also be accomplished passively. Passive treatment systems use the energy of water flowing by gravity through ditches, canals, culverts or other constructed conveyances to effect treatment. Self-dosing products, such as gel flocculants, are placed in the flowing water, where sediment particles, colloids and flow energy combine to release the required dosage, creating heavy flocs which can then be easily filtered or settled. Natural woven fibers like jute are often used in ditch bottoms to act as filtration media. Silt retention mats can also be placed in situ to capture floccules. Sedimentation ponds are often utilized as a deposition area to clarify the water and concentrate the material. Mining, heavy construction and other industries have used passive systems for more than twenty years. These types of systems are low-carbon, as no external power source is needed; they require little skill to operate and minimal maintenance, and are effective at reducing Total Suspended Solids, some heavy metals, and the nutrient phosphorus.
Stormwater treatment can also be achieved passively. Stormwater management facilities (SWMFs) are generally designed, using Stokes' law, to remove particulate matter larger than 40 microns in size, or to detain water to reduce downstream flooding. However, regulation of the effluent from SWMFs is becoming more stringent, as the detrimental impact of nutrients like phosphorus, either dissolved (from fertilizers) or bound to sediment particles in construction or agricultural runoff, causes algal and toxic cyanobacteria (blue-green algae) blooms in receiving lakes. Cyanotoxin is of particular concern, as many drinking water treatment plants cannot effectively remove this toxin. In a recent municipal stormwater treatment study, an advanced sedimentation technology was used passively in large-diameter stormwater mains upstream of SWMFs to remove an average of 90% of TSS and phosphorus during a near 50-year rain event.
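As a rough illustration of the Stokes' law sizing mentioned above, the terminal settling velocity of a small sphere is v = g(ρp − ρf)d²/(18μ). The minimal Python sketch below assumes a quartz-density particle settling in water near 20 °C; these property values are illustrative assumptions, not taken from any cited study.

```python
def stokes_settling_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere of diameter d (m),
    from Stokes' law: v = g * (rho_p - rho_f) * d**2 / (18 * mu).
    Defaults assume a quartz-density particle in water near 20 C."""
    return g * (rho_p - rho_f) * d ** 2 / (18.0 * mu)

# A 40-micron particle (the design cutoff mentioned above) settles at
# roughly 1.4 mm/s, which sets the residence time a facility must provide.
print(stokes_settling_velocity(40e-6))  # ~1.4e-3 m/s
```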
Regulatory requirements
All states in the U.S. have laws requiring installation of erosion and sediment controls (ESCs) on construction sites of a specified size. Federal regulations require ESCs on sites of one acre (0.4 ha) and larger. Smaller sites which are part of a common plan of development (e.g. a residential subdivision) are also required to have ESCs. In some states, non-contiguous sites under one acre are also required to have ESCs. For example, the State of Maryland requires ESCs on sites of 5,000 square feet (460 m2) or more. The sediment controls must be installed before the beginning of land disturbance (i.e. land clearing, grubbing and grading) and must be maintained during the entire disturbance phase of construction. Approval for use of any chemical flocculant must be obtained prior to its deployment.
See also
Certified Professional in Erosion and Sediment Control (CPESC)
Geotechnical engineering
Geotextile (material used in erosion & sediment controls)
Nonpoint source pollution
Stormwater
Universal Soil Loss Equation
References
External links
Erosion Control - a trade magazine for the erosion control and construction industries
International Erosion Control Association - Professional Association, Publications, Training
“Developing Your Stormwater Pollution Prevention Plan: A Guide for Construction Sites.” - U.S. EPA
Gel Flocculant - Type of semi aqueous flocculant used in Passive water treatment systems
Construction
Environmental soil science
Earthworks (engineering)
Gardening aids
Stormwater management | Sediment control | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,199 | [
"Water treatment",
"Stormwater management",
"Water pollution",
"Construction",
"Environmental soil science"
] |
18,416,144 | https://en.wikipedia.org/wiki/Neil%20F.%20Johnson | Neil Fraser Johnson (born 1961) is an English physicist who is notable for his work in complexity theory and complex systems, spanning quantum information, econophysics, and condensed matter physics. He is currently Professor of Physics at George Washington University in Washington D.C. where he heads up a new initiative in Complexity and Data Science which combines cross-disciplinary fundamental research with data science, with a view to resolving complex real-world problems.
He is a Fellow of the American Physical Society (APS) and is the recipient of the 2018 Burton Award from the APS.
He presented the Royal Institution Christmas Lectures "Arrows of time" on BBC TV in 1999. He has more than 300 published research papers across a wide variety of research topics and has supervised the doctoral theses of more than 25 students. He is also notable for his books Financial Market Complexity published by Oxford University Press and Simply Complexity: A Clear Guide to Complexity Theory published by Oneworld Publications, and for his research on the many-body dynamics of insurgent conflict and online extremism.
Education and career
He attended Southend High School for Boys in Southend-on-Sea, Essex, UK. He received his BA/MA from St John's College, University of Cambridge, where he was elected a Scholar throughout his undergraduate career. He obtained a First each year, achieving the top First in the final examinations, and was awarded the Hartree and Maxwell prizes. He then attended Harvard University on a Kennedy Scholarship, receiving his PhD in 1989.
Following his PhD, he was first appointed as a Research Fellow at the University of Cambridge, then as a Professor at the Universidad de Los Andes, Bogota. He was then Professor of Physics at the University of Oxford until 2007, having joined the faculty in 1992. After a period as Professor of Physics at the University of Miami in Florida, he was appointed Professor of Physics at George Washington University in 2018.
While a student at school and university, Johnson was a sax player with the National Youth Jazz Orchestra (NYJO) in the U.K. and toured extensively with them. He appears on a number of commercial recordings with NYJO and with other artists as a session musician.
Selected publications
References
External links
Neil Johnson interview
Johnson lecture on complexity
Website at GW
1961 births
Living people
Alumni of St John's College, Cambridge
English physicists
Harvard University alumni
Quantum physicists
Probability theorists
University of Miami faculty
Fellows of the American Physical Society | Neil F. Johnson | [
"Physics"
] | 497 | [
"Quantum physicists",
"Quantum mechanics"
] |
19,416,576 | https://en.wikipedia.org/wiki/Croatian%20Register%20of%20Shipping | Croatian Register of Shipping (), also known as CRS, is an independent classification society established in 1949. It is a non-profit organisation working on the marine market, developing technical rules and supervising their implementation, managing risk and performing surveys on ships. The Society's head office is in Split.
Croatian Register of Shipping is the member of the International Association of Classification Societies (IACS) since May 2011.
The register is officially recognized by the Malta Maritime Authority.
Historical record
CRS is a heritor of ship classification activities at the eastern Adriatic coast.
The Austrian Veritas was founded in this area as early as 1858, as the third classification society in the world.
In 1918 the Austrian Veritas changed its name into the Adriatic Veritas and was acting as such till year 1921.
CRS, acting till 1992 as JR (Yugoslav Register of Shipping), was founded in 1949.
CRS Head Office is situated in Split, Republic of Croatia.
CRS was an associated member of the International Association of Classification Societies (IACS) from April 1973 till 2004; in May 2011 CRS gained the status of IACS Member.
CRS is the recognised classification society (RO) pursuant to the requirements of the Regulation (EC) No. 391/2009 of the European Parliament and of the Council on common rules and standards for ship inspection and survey organisations.
CRS is the conformity assessment notified body notified under provisions of the Council Directive 94/25/EC relating to recreational craft, as amended by Directive 2003/44/EC.
CRS is the conformity assessments notified body notified under provisions of the Council Directive 96/98/EC on marine equipment, as amended.
CRS is the conformity assessments notified body notified under provisions of the Council Directive 97/23/EC (PED) on pressure equipment.
CRS is the conformity assessments notified body notified under provisions of the Council Directive 2009/105/EC (SPVD) on simple pressure vessels.
CRS is certified by British Standards Institution (BSI) confirming that CRS operates the Quality Management System which complies with the requirements of BS EN 9001:2008 for the scope of classification and statutory certification of ships, statutory certification of marine equipment and recreational crafts, and BSI Annual Statement of Compliance confirming that CRS Quality Management System complies with IACS Quality System Certification Scheme.
Status
CRS is an independent, non-profit, common-welfare-oriented public foundation performing:
classification of ships;
statutory certification of ships on behalf of the national Maritime Administrations;
statutory certification of recreational crafts;
certification of materials and products;
conformity assessment of marine equipment;
conformity assessment of recreational crafts;
certification / registration of quality management systems.
The present status of CRS is defined by the Law on Croatian Register of Shipping (OFFICIAL GAZETTE No. 1996/81, as amended by OFFICIAL GAZETTE No. 2013/76) and Charter of CRS.
Mission
CRS mission in the field of classification and statutory certification is to promote the highest internationally adopted standards in the safety of life and property at sea and inland waterways, as well as in the protection of the sea and inland waterways environment.
Certification / Accreditation
Since July 2005, CRS has been in possession of a certificate issued by the British Standards Institution (BSI) certifying that CRS operates a Quality Management System complying with the requirements of BS EN 9001:2000 for the scope of classification and statutory certification of ships, and statutory certification of marine equipment and recreational crafts.
Since February 2011, CRS has been in possession of the BSI Annual Statement of Compliance, confirming Croatian Register of Shipping's compliance with the IACS Quality System Certification Scheme.
References
External links
Water transport in Croatia
Ship classification societies
Organizations based in Split, Croatia
Ship registration
1949 establishments in Croatia | Croatian Register of Shipping | [
"Engineering"
] | 759 | [
"Marine engineering organizations",
"Ship classification societies"
] |
6,091,800 | https://en.wikipedia.org/wiki/C40H56 | {{DISPLAYTITLE:C40H56}}
The molecular formula C40H56 (molar mass: 536.87 g/mol) may refer to:
Carotenes
α-Carotene
β-Carotene
γ-Carotene
δ-Carotene
ε-Carotene
Lycopene
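As a quick arithmetic check on the molar mass given above, one can sum standard atomic weights; a minimal Python sketch:

```python
# Standard atomic weights in g/mol.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008}

def molar_mass(counts):
    """Sum atomic weights over an {element: count} mapping."""
    return sum(ATOMIC_WEIGHT[element] * n for element, n in counts.items())

# C40H56: 40 * 12.011 + 56 * 1.008 = 536.888, matching the ~536.87 g/mol
# quoted above (small differences come from the atomic weights used).
print(molar_mass({"C": 40, "H": 56}))
```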
Molecular formulas | C40H56 | [
"Physics",
"Chemistry"
] | 73 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
240,568 | https://en.wikipedia.org/wiki/Stress%E2%80%93strain%20curve | In engineering and materials science, a stress–strain curve for a material gives the relationship between stress and strain. It is obtained by gradually applying load to a test coupon and measuring the deformation, from which the stress and strain can be determined (see tensile testing). These curves reveal many of the properties of a material, such as the Young's modulus, the yield strength and the ultimate tensile strength.
Definition
Generally speaking, curves that represent the relationship between stress and strain in any form of deformation can be regarded as stress–strain curves. The stress and strain can be normal, shear, or a mixture; can be uniaxial, biaxial, or multiaxial; and can even change with time. The form of deformation can be compression, stretching, torsion, rotation, and so on. Unless mentioned otherwise, a stress–strain curve typically refers to the relationship between axial normal stress and axial normal strain of a material measured in a tension test.
Stages
A schematic diagram for the stress–strain curve of low-carbon steel at room temperature is shown in figure 1. There are several stages showing different behaviors, which suggest different mechanical properties. To clarify, materials may lack one or more of the stages shown in figure 1, or have totally different stages.
Linear elastic region
The first stage is the linear elastic region. The stress is proportional to the strain, that is, the material obeys the general form of Hooke's law, and the slope is Young's modulus. In this region, the material undergoes only elastic deformation. The end of this stage is the initiation point of plastic deformation. The stress component of this point is defined as the yield strength (or upper yield point, UYP for short).
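As a numerical illustration (the values here are assumed, not taken from the text): for a steel with Young's modulus $E \approx 200\ \text{GPa}$ loaded within the elastic region to $\sigma = 200\ \text{MPa}$, Hooke's law gives $\epsilon = \sigma / E = 10^{-3}$, i.e. 0.1% strain, which is fully recovered on unloading.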
Strain hardening region
The second stage is the strain hardening region. This region starts as the stress goes beyond the yield point, reaching a maximum at the ultimate strength point, which is the maximal stress that can be sustained and is called the ultimate tensile strength (UTS). In this region, the stress mainly increases as the material elongates, except that for some materials, such as steel, there is a nearly flat region at the beginning. The stress of the flat region is defined as the lower yield point (LYP) and results from the formation and propagation of Lüders bands. Explicitly, heterogeneous plastic deformation forms bands at the upper yield strength, and these bands, carrying the deformation, spread along the sample at the lower yield strength. After the sample is again uniformly deformed, the increase of stress with the progress of extension results from work hardening: dense dislocations induced by plastic deformation hamper the further motion of dislocations. To overcome these obstacles, a higher resolved shear stress must be applied. As strain accumulates, work hardening intensifies, until the stress reaches the ultimate tensile strength.
Necking region
The third stage is the necking region. Beyond the tensile strength, a neck forms where the local cross-sectional area becomes significantly smaller than the average. The necking deformation is heterogeneous and will reinforce itself as the stress concentrates more at the reduced section. Such positive feedback leads to quick development of the neck and to fracture. Note that though the pulling force is decreasing, the work hardening is still progressing; that is, the true stress keeps growing, but the engineering stress decreases because the shrinking section area is not considered. This region ends with fracture. After fracture, percent elongation and reduction in section area can be calculated.
Classification
Some common characteristics can be distinguished among the stress–strain curves of various groups of materials and, on this basis, materials can be divided into two broad categories: ductile materials and brittle materials.
Ductile materials
Ductile materials, including structural steel and many other metals, are characterized by their ability to yield at normal temperatures. For example, low-carbon steel generally exhibits a very linear stress–strain relationship up to a well-defined yield point. The linear portion of the curve is the elastic region, and the slope of this region is the modulus of elasticity or Young's modulus. Plastic flow initiates at the upper yield point and continues at the lower yield point.
The appearance of the upper yield point is associated with the pinning of dislocations in the system. Permanent deformation occurs once dislocations are forced to move past pinning points. Initially, this permanent deformation is non-uniformly distributed along the sample. During this process, dislocations escape from Cottrell atmospheres within the material. The resulting slip bands appear at the lower yield point and propagate along the gauge length, at constant stress, until the Lüders strain is reached, and deformation becomes uniform.
Beyond the Lüders strain, the stress increases due to strain hardening until it reaches the ultimate tensile stress. During this stage, the cross-sectional area decreases uniformly along the gauge length, due to the incompressibility of plastic flow (not because of the Poisson effect, which is an elastic phenomenon). Then a process of necking begins, which ends in a 'cup and cone' fracture characteristic of ductile materials.
The appearance of necking in ductile materials is associated with geometrical instability in the system. Due to the natural inhomogeneity of the material, it is common to find some regions with small inclusions or porosity, within the material or on its surface, where strain will concentrate, leading to a local reduction in cross-sectional area. For strain less than the ultimate tensile strain, the increase of the work-hardening rate in this region will be greater than the area reduction rate, thereby making this region harder to deform than others, so that the instability is removed, i.e. the material increases in homogeneity before reaching the ultimate strain. Beyond this point, however, the work hardening rate decreases, such that a region with smaller area is weaker than nearby regions; reduction in area therefore concentrates in this region, and the neck becomes more and more pronounced until fracture. After the neck has formed in the material, further plastic deformation is concentrated in the neck while the remainder of the material undergoes elastic contraction owing to the decrease in tensile force.
The stress–strain curve for a ductile material can be approximated using the Ramberg–Osgood equation. This equation is straightforward to implement, and only requires the material's yield strength, ultimate strength, elastic modulus, and percent elongation.
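A minimal Python sketch of one common parameterization of the Ramberg–Osgood relation follows, with the plastic term anchored at a 0.2% offset yield; the elastic modulus, yield strength, and hardening exponent are illustrative values for a mild steel, not taken from the text.

```python
def ramberg_osgood_strain(stress, E, yield_strength, n, offset=0.002):
    """Total strain under the Ramberg-Osgood relation:
    strain = stress/E + offset * (stress/yield_strength)**n,
    where the 0.002 offset anchors the plastic term at the 0.2% proof
    stress and n is the strain-hardening exponent."""
    return stress / E + offset * (stress / yield_strength) ** n

# Illustrative mild-steel values: E = 200 GPa, yield = 250 MPa, n = 5.
for sigma in (100e6, 250e6, 300e6):  # stress in Pa
    eps = ramberg_osgood_strain(sigma, E=200e9, yield_strength=250e6, n=5)
    print(f"{sigma / 1e6:5.0f} MPa -> strain {eps:.4%}")
```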
Toughness
Materials that are both strong and ductile are classified as tough. Toughness is a material property defined as the area under the stress-strain curve.
Toughness can be determined by integrating the stress–strain curve. It is the energy of mechanical deformation per unit volume prior to fracture. The explicit mathematical description is

$$\frac{\text{energy}}{\text{volume}} = \int_0^{\epsilon_f} \sigma \, d\epsilon$$

where $\epsilon$ is the strain, $\epsilon_f$ is the strain upon failure, and $\sigma$ is the stress.
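In practice, toughness is estimated from discrete (strain, stress) test data by numerical integration, for example the trapezoidal rule; the data points in the following minimal Python sketch are made up for illustration.

```python
# Illustrative (strain, stress in MPa) pairs from a tension test, ending
# at the failure strain; real data would have many more points.
strain = [0.0, 0.001, 0.002, 0.05, 0.15, 0.25]
stress = [0.0, 200.0, 350.0, 400.0, 450.0, 380.0]

# Trapezoidal rule: area under the stress-strain curve up to failure.
# With stress in MPa, the area is in MJ per cubic metre.
toughness = sum((stress[i] + stress[i + 1]) / 2.0 * (strain[i + 1] - strain[i])
                for i in range(len(strain) - 1))
print(f"toughness ~ {toughness:.1f} MJ/m^3")  # ~102.4 MJ/m^3
```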
Brittle materials
Brittle materials, which include cast iron, glass, and stone, are characterized by the fact that rupture occurs without any noticeable prior change in the rate of elongation, sometimes they fracture before yielding.
Brittle materials such as concrete or carbon fiber do not have a well-defined yield point, and do not strain-harden. Therefore, the ultimate strength and breaking strength are the same. Typical brittle materials like glass do not show any plastic deformation but fail while the deformation is elastic. One of the characteristics of a brittle failure is that the two broken parts can be reassembled to produce the same shape as the original component as there will not be a neck formation like in the case of ductile materials. A typical stress–strain curve for a brittle material will be linear. For some materials, such as concrete, tensile strength is negligible compared to the compressive strength and it is assumed to be zero for many engineering applications. Glass fibers have a tensile strength greater than that of steel, but bulk glass usually does not. This is because of the stress intensity factor associated with defects in the material. As the size of the sample gets larger, the expected size of the largest defect also grows.
See also
Elastomers
Plane strain compression test
Strength of materials
Stress–strain index
Tensometer
Universal testing machine
References
Elasticity (physics)
Structural analysis | Stress–strain curve | [
"Physics",
"Materials_science",
"Engineering"
] | 1,683 | [
"Structural engineering",
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Structural analysis",
"Mechanical engineering",
"Aerospace engineering",
"Physical properties"
] |
240,693 | https://en.wikipedia.org/wiki/Virtual%20microscope%20project | The Virtual Microscope project is an initiative to make micromorphology and behavior of some small organisms available online. Images are from Antarctica and the Baltic Sea are available at no cost. Images are offered in higher magnification or lower resolution. Varieties of images offer can include scanning electron microscopy, transmission electron microscopy, and are accompanied by related publications for research. The site interface is deliberately kept simple with tutorials offered in several areas. The editorial board consists of professors from several universities worldwide. Its global scope was added after its foundation, and it supervised by Rutgers University.
References
External links
ecoSCOPE
Microscopes | Virtual microscope project | [
"Chemistry",
"Technology",
"Engineering"
] | 123 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
240,843 | https://en.wikipedia.org/wiki/Tumor%20necrosis%20factor | Tumor necrosis factor (TNF), formerly known as TNF-α, is a chemical messenger produced by the immune system that induces inflammation. TNF is produced primarily by activated macrophages, and induces inflammation by binding to its receptors on other cells. It is a member of the tumor necrosis factor superfamily, a family of transmembrane proteins that are cytokines, chemical messengers of the immune system. Excessive production of TNF plays a critical role in several inflammatory diseases, and TNF-blocking drugs are often employed to treat these diseases.
TNF is produced primarily by macrophages but is also produced in several other cell types, such as T cells, B cells, dendritic cells, and mast cells. It is produced rapidly in response to pathogens, cytokines, and environmental stressors. TNF is initially produced as a type II transmembrane protein (tmTNF), which is then cleaved by TNF alpha converting enzyme (TACE) into a soluble form (sTNF) and secreted from the cell. Three TNF molecules assemble together to form an active homotrimer, whereas individual TNF molecules are inert.
When TNF binds to its receptors, tumor necrosis factor receptor 1 (TNFR1) and tumor necrosis factor receptor 2 (TNFR2), a pathway of signals is triggered within the target cell, resulting in an inflammatory response. sTNF can only activate TNFR1, whereas tmTNF can activate both TNFR1 and TNFR2, as well as trigger inflammatory signaling pathways within its own cell. TNF's effects on the immune system include the activation of white blood cells, blood coagulation, secretion of cytokines, and fever. TNF also contributes to homeostasis in the central nervous system.
Inflammatory diseases such as rheumatoid arthritis, psoriasis, and inflammatory bowel disease can be effectively treated by drugs that inhibit TNF from binding to its receptors. TNF is also implicated in the pathology of other diseases including cancer, liver fibrosis, and Alzheimer's, although TNF inhibition has yet to show definitive benefits.
History
In the 1890s, William Coley observed that acute infections could cause tumor regression, leading to his usage of bacterial toxins as a cancer treatment. In 1944, endotoxin was isolated from Coley's bacterial toxins as the substance responsible for the anticancer effect. In particular, endotoxin could cause tumor regression when injected into mice with experimentally induced cancers. In 1975, Carswell et al. discovered that endotoxin did not directly cause tumor regression, but instead induced macrophages to secrete a substance that causes tumors to hemorrhage and necrotize, termed "tumor necrosis factor."
In the 1980s, TNF was purified, sequenced, and cloned in bacteria. Studies on recombinant TNF confirmed the anticancer potential of TNF, but this optimism faded when TNF injections were found to induce endotoxin shock. TNF was also discovered to be the same protein as cachectin, known to cause muscle wasting in mice. These findings demonstrated that TNF could be detrimental in excessive quantities. In 1992, TNF antibodies were found to reduce joint inflammation in mice, revealing TNF's role in inflammatory diseases. This led to the approval of the first anti-TNF therapy for rheumatoid arthritis in 1998.
Nomenclature
In 1985, TNF was found to have significant sequential and functional similarity with lymphotoxin, a previously discovered cytokine. This led to the renaming of TNF to TNF-α and lymphotoxin to TNF-β. However, in 1993, a protein with close similarity to lymphotoxin was discovered, termed lymphotoxin-β. In 1998, at the Seventh International TNF Congress, TNF-β was officially renamed to lymphotoxin-α, while TNF-α was renamed back to TNF. Nevertheless, some papers continue to use the term TNF-α.
Evolution
The TNF and lymphotoxin-α genes are believed to be descended from a common ancestor gene that developed early in vertebrate evolution, before the Agnatha and Gnathostomata split. This ancestor gene was dropped from the Agnatha ancestor but persisted in the Gnathostomata ancestor. During the evolution of gnathostomes, this ancestor gene was duplicated into the TNF and lymphotoxin-α genes. Thus, while the ancestor gene is found across a variety of gnathostome species, only a subset of gnathostome species contain a TNF gene. Some fish species, such as Danio, have been found to contain duplicates of the TNF gene.
The TNF gene is very similar among mammals, with the encoded protein ranging from 233 to 235 amino acids. The TNF proximal promoter region is also highly conserved among mammals, and nearly identical among higher primates. The similarity of the TNF gene among fish is lower, with proteins ranging from 226 to 256 amino acids. Like mammalian TNF, the fish TNF gene has been shown to be stimulated in macrophages by antigens. All TNF genes have a highly conserved C-terminal module known as the TNF homology domain, due to its important role in binding TNF to its receptors.
Gene
Location
The human TNF gene is mapped to chromosome 6p21.3, residing in the class III region of the major histocompatibility complex, where many immune system genes are contained. The class III region is sandwiched between the HLA-DR locus on the centromeric side, and the HLA-B locus on the telomeric side. The TNF gene is 250 kilobases away from the HLA-B locus, and 850 kilobases away from the HLA-DR locus. The TNF gene is located 1,100 kilobases downstream of the lymphotoxin-α gene.
Expression
TNF is produced rapidly in response to many stimuli by multiple cell types. Cell types that express TNF include T cells, B cells, macrophages, mast cells, dendritic cells, and fibroblasts, and stimuli that activate the TNF gene include pathogenic substances, cytokines from other immune cells, and environmental stressors. A few such cytokines include interleukin-1, interleukin-2, interferon-γ, and TNF itself. TNF transcription is activated by a variety of signaling pathways and transcription factors, depending on the cell type and stimulus. TNF transcription does not depend on the synthesis of new proteins, enabling rapid activation of the gene.
TNF gene expression is regulated by a proximal promoter region consisting of approximately 200 base pairs. Most of the binding sites within the proximal promoter region can recognize multiple transcription factors, enabling TNF to be activated by a variety of signaling pathways. As transcription factors bind to the promoter region, they also bind to coactivators, assembling into a large structure known as an enhanceosome. The composition of the enhanceosome depends on ambient factors within the cell, particularly nuclear factor of activated T-cells (NFAT).
TNF expression is also regulated by DNA structure. DNA is coiled around histones, which is loosened by acetylation and condensed by methylation. Proteins that acetylate histones at the TNF promoter, particularly CREB-binding protein in T cells, are often critical for TNF expression. In contrast, several cell types that do not express TNF are highly methylated at the histones of the TNF promoter. Long-range intrachromosomal interactions can also regulate TNF expression. In activated T-cells, the DNA surrounding the TNF promoter circularizes, bringing promoter complexes closer together and enhancing transcription efficiency.
Transcription
The transcribed region contains 4 exons separated by 3 introns, for a total of 2,762 base pairs in the primary transcript and 1,669 base pairs in the mRNA. The mRNA consists of four regions: the 5' untranslated region, which is not included in the TNF protein; the transmembrane portion, which is present in transmembrane TNF but not in soluble TNF; the soluble portion; and the 3' untranslated region. More than 80% of the soluble portion is contained in the last exon, while the transmembrane portion is contained in the first two exons. The 3' untranslated region contains an AU-rich element (ARE) that regulates the translation of TNF. In unstimulated macrophages, various proteins bind to the ARE to destabilize TNF mRNA, suppressing the translation of TNF. Upon activation, TNF translation is unsuppressed.
Protein
TNF is initially produced as a transmembrane protein (tmTNF) consisting of 233 amino acids. tmTNF binds to both TNFR1 and TNFR2, but its activity is primarily mediated by TNFR2. Upon binding to a receptor, tmTNF also activates signaling pathways within its own cell. tmTNF is cleaved by TNF alpha converting enzyme (TACE), which causes the extracellular portion to be secreted. After cleavage, the remaining tmTNF is cleaved again by SPPL2B, causing the intracellular portion to translocate to the nucleus. There, it is believed to regulate cytokine production, such as triggering the expression of interleukin-12.
The secreted extracellular portion, denoted sTNF, consists of 157 amino acids. Unlike tmTNF, sTNF can only bind to TNFR1. The secondary structure of sTNF consists primarily of alternating strands that join into two sheets, known as antiparallel β-sheets. The two sheets are layered on top of each other, forming a wedge shape known as an antiparallel β-sandwich. Remarkably, this structure is similar to those seen on the coats of viruses. The last 9 residues of the C-terminus are locked into the middle strand of the bottom sheet, and are necessary for bioactivity.
Both tmTNF and sTNF are only bioactive as homotrimers, whereas individual monomers are inactive. The rate at which TNF trimers disassemble is constant, whereas the rate at which TNF trimers assemble increases with TNF concentration. This causes TNF to be mostly trimers at high concentrations, whereas TNF is mostly monomers and dimers at low concentrations. The coexistence of TNF dimers and trimers in dynamic equilibrium suggests that TNF might be a morpheein. Small molecules that stabilize TNF dimers and prevent the assembly of TNF trimers present a potential mechanism for inhibiting TNF.
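The concentration dependence described above can be illustrated with a simple mass-action model, 3M ⇌ T with an effective association constant K = [T]/[M]³. The Python sketch below is only a qualitative illustration; the value of K and the concentrations are hypothetical, and the real monomer–dimer–trimer equilibrium is more complicated.

```python
def trimer_fraction(total, K):
    """Fraction of TNF chains in trimers for the toy mass-action model
    3M <-> T with K = [T] / [M]**3 and total chain concentration
    C = [M] + 3[T]. Solves 3*K*m**3 + m - C = 0 for [M] by bisection."""
    lo, hi = 0.0, total
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 3.0 * K * mid ** 3 + mid - total > 0.0:
            hi = mid
        else:
            lo = mid
    monomer = (lo + hi) / 2.0
    return 1.0 - monomer / total

# Hypothetical K and concentrations (arbitrary units): the trimeric share
# rises with total concentration, as described above.
for c in (0.01, 0.1, 1.0, 10.0):
    print(f"C = {c:5.2f} -> trimeric fraction {trimer_fraction(c, K=1.0):.2f}")
```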
Function
TNF is a central mediator of the body's innate immune response. By binding to receptors TNFR1 and TNFR2, TNF can induce either cell survival or cell death in a target cell. The cell survival response includes cell proliferation and the activation of inflammatory signals, while the cell death response can either be apoptosis, the controlled death of the cell, or necroptosis, a less controlled death causing inflammation and interference in surrounding tissue. TNF induces cell survival by default, but cell death can be induced by factors such as disruption of inflammatory pathways by pathogens, co-stimulation with other cytokines, and cross-talk between TNFR1 and TNFR2. Additionally, transmembrane TNF (tmTNF) acts as a reverse signaler, triggering a variety of responses in its own cell depending on cell type and stimulant.
TNFR1 signaling
TNFR1 exists in most cell types and binds to both tmTNF and sTNF. TNFR1 contains a death domain in its cytoplasmic tail, enabling it to trigger cell death. Whether TNFR1 activation triggers cell survival or cell death is mediated by the formation of protein complexes: complex I, which leads to cell survival, and complex II, which leads to cell death. By default, TNFR1 activation triggers cell proliferation and inflammation rather than cell death. These inflammatory pathways contain three cell death checkpoints, each of which is critical in preventing cell death.
Upon activation by TNF, TNFR1 trimerizes and forms complex I by recruiting RIPK1 and TRADD, which recruits TRAF2, cIAP1 and cIAP2, and LUBAC. cIAP1 and cIAP2 are ubiquitin ligases that form K63-linked ubiquitin chains, which recruit TAK1 via TAB2 and TAB3. LUBAC is also a ubiquitin ligase that forms M1-linked ubiquitin chains, which attract IKK via NEMO. TAK1 activates the MAPK pathways, as well as IKK, which in turn activates the canonical NF-κB pathway. The MAPK pathways and the NF-κB pathway activate multiple transcription factors in the nucleus, which result in cell survival, proliferation, and inflammatory response. Complex I is negatively regulated by deubiquitinases such as A20, CYLD, and OTULIN, which destabilize complex I.
Complex II is formed when RIPK1 and/or TRADD disassociate from complex I and bind with FADD to activate caspase 8, leading to cell death. Complex IIa includes TRADD and can activate caspase 8 without RIPK1, while complex IIb does not include TRADD, so it is dependent on RIPK1 for the activation of caspase 8. The pathways of complex I induce three checkpoints that prevent complex II from inducing cell death.
In the first checkpoint, IKK disables RIPK1 via phosphorylation while it is attached to complex I. This disables complex IIb, which is dependent on RIPK1. Since IKK is dependent on the ubiquitination of complex I, conditions that affect ubiquitination, such as inhibition of cIAP1/2 and LUBAC, mutation of the RIPK1 ubiquitin acceptor site, or deficiencies of A20 and OTULIN, can disable this checkpoint. The disabling of the IKK checkpoint activates complex IIb, leading to apoptosis, or pyroptosis by cleaving GSDMD. The disabling of the IKK checkpoint can also indirectly activate complex IIa by disabling the NF-κB pathway, which controls the second checkpoint.
In the second checkpoint, the NF-κB pathway promotes the expression of pro-survival genes such as FLIP, which counteracts the activation of caspase 8 in complex IIa. This checkpoint can be disabled by translation inhibitors such as cycloheximide, as well as by the disabling of the IKK complex, which controls the NF-κB pathway. The disabling of this checkpoint activates complex IIa, leading to apoptosis.
In the third checkpoint, non-lethal caspase 8 is activated by TNFR1 signalling, which binds to complex IIb and cleaves RIPK1, disabling it. It is unknown why this form of caspase 8 does not cause cell death. The disabling of this checkpoint, via inactivation of caspase 8, causes RIPK1 from complex IIb to bind to RIPK3 and MLKL, forming complex IIc, also referred to as the necrosome. The necrosome then causes necroptosis.
TNFR2 signaling
Unlike TNFR1, TNFR2 is expressed in limited cell types, including endothelial cells, fibroblasts, and subsets of neurons and immune cells. TNFR2 is only fully activated by tmTNF, while activation by sTNF is partially inhibited. Unlike TNFR1, TNFR2 does not possess a death domain, so it is incapable of directly inducing cell death. Thus, TNFR2 activation most often leads to cell survival. Cell survival can either lead to an inflammatory response, via canonical NF-κB activation, or cell proliferation, via non-canonical NF-κB activation, depending on intracellular conditions and the signaling process of TNFR1. TNFR2 can also indirectly cause cell death by disrupting the cell death checkpoints of TNFR1.
Upon binding to tmTNF, TNFR2 trimerizes and directly recruits TRAF2, as well as TRAF1 or TRAF3. TRAF2 is central to the TNFR2 signaling complex and recruits cIAP1/2. If there is an accumulation of NIK within the cell, TRAF2/3 and cIAP1/2 may be formed as a complex with inactive NIK. When TRAF2/3 binds to TNFR2, the attached NIK is activated, which in turn activates IKKα. This allows p100 and RelB to be processed into a heterodimer which activates the non-canonical NF-κB pathway, leading to cell proliferation. The expression of p100 and RelB is potentiated by the activation of the canonical NF-κB pathway by TNFR1. Thus, TNFR2 non-canonical NF-κB activation is dependent on the canonical NF-κB activation by TNFR1, as well as the accumulation of NIK within the cell.
TNFR2 can also activate the canonical NF-κB pathway, though this is less common than non-canonical NF-κB activation. The details of TNFR2's activation of the canonical NF-κB pathway are unknown. Presumably, TAK1 and IKK are recruited by the TRAF2 / TRAF1/3 / cIAP1/2 signalling complex, which in turn activates the canonical NF-κB pathway.
TNFR2 can indirectly induce cell death by degrading cIAP1/2 as part of the non-canonical NF-κB pathway. The degradation of cIAP1/2 affects the ubiquitination of the TNFR1 signalling complex, which inhibits the function of IKK. This disables the IKK cell death checkpoint in TNFR1, inducing cell death.
Reverse signalling
tmTNF can act as a receptor, activating pathways within its own cell upon binding to TNFR1 or TNFR2. tmTNF reverse signalling can induce apoptosis, apoptosis resistance, inflammation, or inflammation resistance depending on the ligand and cell type.
In tumor cells, such as B lymphoma cells, tmTNF reverse signalling has been shown to increase NF-κB activity, enhancing cell survival and apoptosis resistance. In natural killer cells, tmTNF reverse signalling increases cytotoxic activity by increasing the expression of perforin, granzyme B, Fas ligand, and TNF. In T cells, the activation of the JNK pathway by tmTNF reverse signalling can lead to cell cycle inhibition and apoptosis.
In monocytes, tmTNF has been shown to play a dual role in mediating the monocyte's inflammatory response to sTNF. If tmTNF reverse signalling occurs before a monocyte is activated by sTNF, then the monocyte's inflammatory response to sTNF is enhanced. If tmTNF reverse signalling occurs after a monocyte is activated by sTNF, then the inflammatory response is reduced. Meanwhile, tmTNF reverse signalling reduces a monocyte's inflammatory response to endotoxin. This effect is caused by tmTNF activating the JNK and p38 pathways, which induces TGF-β production, which then interferes with the signalling pathway of endotoxin.
Immune response
The innate immune system is the immune system's first line of defense, responding rapidly and nonspecifically to invading pathogens. It is activated when pathogen-associated molecular patterns (PAMPs), such as endotoxins and double-stranded viral RNA, bind to the pattern recognition receptors (PRRs) of immune cells, causing them to secrete immune-regulating cytokines. These cytokines, such as IL-1, IL-6, IL-8, and TNF, are primarily secreted by immune cells that engulf bacteria, such as macrophages and dendritic cells. They mainly act on white blood cells, as well as on endothelial cells in blood vessels to promote an early inflammatory response.
TNF is the principal cytokine for regulating acute inflammation, though many of its functions are shared with other cytokines, especially IL-1. By binding to TNF receptors, TNF can perform functions including stimulating endothelial cells to induce coagulation, which obstructs blood flow to prevent the spread of microbes; stimulating endothelial cells and macrophages to secrete chemokines that attract white blood cells; stimulating the secretion of other cytokines such as IL-1; activating neutrophils and macrophages; stimulating the liver to produce acute phase proteins, such as C-reactive protein; inducing catabolism of muscles and fat to produce energy; and stimulating scar tissue formation, also known as fibrosis. In addition to inducing the secretion of cytokines, TNF itself can be induced by cytokines, enabling a cascade of inflammatory signals. Excessive amounts of TNF can cause septic shock.
Much of TNF's functions are mediated through inflammatory signalling pathways, such as MAPK and NF-κB. Many pathogens attempt to prevent an immune response by hijacking cells and disrupting their inflammatory pathways. In response to this, the TNFR1 signalling pathway has cell death pathways that are inhibited by the activities of the inflammatory pathways. If a cell's inflammatory pathways are disrupted, the cell death pathways are uninhibited, triggering cell death. This prevents the pathogen from replicating within the cell, as well as alerting the immune system.
Additionally, TNF induces fever to help the body fight infections. TNF can induce fever by triggering the release of cytokines interleukin-1 and interleukin-6, or through other mediators like PLA2. TNF or its mediators can reach the hypothalamus either through circulation in the bloodstream or through secretion by macrophages and endothelial cells near the hypothalamus. TNF can also induce fever by stimulating the primary vagal terminals in the liver, which signals to neurons to secrete norepinephrine. All of these pathways culminate in the synthesis of prostaglandins, which interact with the OVLT in the hypothalamus to raise the target temperature of the body.
Central nervous system
TNF is expressed in various cells in the central nervous system, including glial cells, microglia, astrocytes, and neurons, and plays a critical role in maintaining homeostasis.
Through TNFR1 signalling, TNF can increase the surface expression of AMPA receptors and NMDA receptors in neurons, strengthening synaptic transmission. TNF also decreases the surface expression of GABAA receptors, reducing the activity of inhibitory synapses. TNF can also modulate the release of glutamate, an excitatory neurotransmitter, and S100B, a zinc-binding protein, by astrocytes. The modulation of excitation and inhibition of neurons by TNF indicates that TNF plays a role in synaptic scaling and plasticity.
Through TNFR2 signalling, TNF promotes the proliferation and maturation of oligodendrocytes, which produce protective myelin sheaths around nerve cells. On the other hand, TNF becomes cytotoxic to oligodendrocyte progenitor cells when the cells are in contact with astrocytes.
Clinical significance
Autoimmunity
Excessive production of TNF plays a key role in the pathology of autoimmune diseases, such as rheumatoid arthritis, inflammatory bowel disease, psoriatic arthritis, psoriasis, and noninfectious uveitis. In these diseases, TNF is erroneously secreted by immune cells in response to environmental factors or genetic mutations. TNF then triggers an inflammatory response, damaging normal tissue. TNF blockers, which prevent TNF from binding to its receptors, are often used to treat these diseases.
TNF induces inflammation both by activating inflammatory pathways, as well as by triggering cell death. Cell death triggers inflammation by exposing the components of dying cells to neighboring cells, as well as by compromising barrier integrity in the skin and intestine, allowing microbes to infiltrate the tissue. TNF is believed to trigger cell death in inflammatory diseases due to elevated levels of interfering cytokines, elevated levels of TNFR2 signalling, or genetic mutations. Drugs that target proteins involved in TNF-induced cell death, such as RIPK1, are being evaluated for their efficacy against autoinflammatory diseases.
Cancer
TNF was initially discovered as an agent that kills tumors, particularly sarcomas. However, TNF is now known to play a dual role in cancer, both as a promoter and inhibitor, due to its ability to induce either proliferation or death in tumor cells. The exact mechanisms determining which role TNF plays in cancer are unclear. In general, TNF is considered to be a cancer promoter.
In some cancers, TNF has been shown to play an inhibitory role, primarily when injected locally, repeatedly, and at high concentrations. Due to TNF's adverse side effects, potential TNF cancer treatments seek to maximize cytotoxicity to tumors while minimizing exposure to the entire body. Some treatments increase cytotoxicity by inhibiting the cell survival pathways of tumors before treatment with TNF. Other treatments localize TNF activity using antibody-TNF fusions, also known as immunocytokines. Local TNF treatment has been shown to induce tumor regression, though they rarely induce complete remission. Body-wide administration of TNF has shown low efficacy and high side effects.
In many cancers, TNF is believed to play a supportive role. High TNF expression levels are associated with more advanced cancers, and TNF expression is found in tumor cells from the early stages of disease. TNF expression can lead to the recruitment of white blood cells that promote metastasis, as well as direct activation of pathways that promote tumor survival, invasion, and metastasis. TNF-blockers such as infliximab and etanercept did not induce a response in most advanced or metastatic cancers, but some studies have shown disease stabilization.
Infections
TNF plays a critical role in the innate immune response to infections. Accordingly, the use of TNF blockers is associated with increased risks of infection, such as with Varicella-zoster virus, Epstein–Barr virus, and Cytomegalovirus.
Conversely, TNF plays a role in the progression of HIV by inducing apoptosis of T cells in HIV-infected people. TNF blockade has reportedly led to clinical improvement in HIV without worsening the infection, though data are limited.
Sepsis
TNF is believed to be an important contributor to sepsis due to its ability to upregulate the innate immune system and blood coagulation. In animals, the injection of TNF can produce heart, lung, kidney, and liver dysfunction similar to sepsis. However, in humans suffering from sepsis, TNF is not consistently elevated.
Although TNF blockers showed efficacy in treating sepsis in mice, they showed mixed results in treating sepsis in humans. This is believed to be due to the dual role that TNF plays in the immune system; blocking TNF reduces the severe inflammation that causes sepsis, but also hinders the immune system's ability to resist the infection. It is hypothesized that TNF blockers are more beneficial in cases of severe sepsis, where the probability of death is higher.
Liver fibrosis
TNF is a key player in liver injury and inflammation, but its role in liver fibrosis is controversial. TNF contributes to the activation and survival of hepatic stellate cells (HSCs), believed to be the primary contributors of liver fibrosis. On the other hand, TNF suppresses alpha-1 type-1 collagen expression and HSC proliferation in vitro, which should inhibit liver fibrosis. In general, TNF is considered to promote liver fibrosis by promoting HSC survival. Despite this connection, TNF blockers are not used to treat liver fibrosis. In clinical trials of alcoholic hepatitis, TNF blockers had no significant effect.
Additionally, hepatocyte death, the initial event that drives liver injury and fibrosis, may be induced by TNF, though this connection is uncertain. TNF injection alone does not induce hepatocyte death in vivo. However, when TNF injection is coupled with survival pathway inhibition, such as during hepatitis C virus infection, TNF induces hepatocyte death and acute liver failure. The remnants of dead hepatocytes are consumed by HSCs and Kupffer cells, which then secrete fibrosis-promoting factors, such as TGF-β, as well as promoting further hepatocyte death.
Insulin resistance
TNF promotes insulin resistance by inhibiting insulin receptor substrate 1 (IRS1). Under normal circumstances, IRS1, upon activation by insulin, undergoes tyrosine phosphorylation and increases glucose uptake in the cell. This process is disrupted when TNF induces the serine phosphorylation of IRS1, converting IRS1 into an insulin inhibitor. TNF-induced insulin resistance is common in cases of obesity and can lead to Type II Diabetes. TNF has been found to be upregulated in the adipose tissue of humans and animals with obesity, though it remains unclear why obesity induces high TNF levels.
Nonalcoholic fatty liver disease
TNF plays a key role in nonalcoholic fatty liver disease (NAFLD), in which fat builds up in the liver, leading to injury, inflammation, and scarring. TNF promotes insulin resistance, which promotes fat build up in the liver. As fat builds up in the liver and surrounding adipose tissue, immune cells may infiltrate the expanding tissue and secrete TNF, causing inflammation. Thus, TNF may serve as a causal link between inflammation, insulin resistance, and fat accumulation in the liver. Clinical studies have shown that TNF levels are correlated with the severity of NAFLD, although some studies have shown otherwise. Pharmacological strategies that downregulate TNF have shown favorable effects on NAFLD, while the efficacy of TNF blockers is yet to be evaluated.
Muscle wasting
Conditions that cause inflammation, such as cancer, can elevate TNF levels, which contributes to muscle wasting. TNF contributes to muscle wasting by activating the NF-κB pathway, which activates the ubiquitin–proteasome pathway to degrade protein, and by inhibiting the activation of satellite cells, which are responsible for protein regeneration. However, TNF blockers have had limited effect on muscle wasting in clinical studies, likely due to the multifactorial nature of muscle wasting.
Exercise
During exercise, the level of IL-6, a TNF inhibitor, rapidly increases, leading to an anti-inflammatory effect. This is followed by a subsequent increase in the levels of IL-10 and soluble TNF receptors, both of which also inhibit TNF. While moderate exercise does not increase TNF levels, strenuous exercise has been shown to increase TNF levels two-fold, causing a pro-inflammatory effect. However, this proinflammatory effect is outweighed by the anti-inflammatory effect of IL-6, which can increase 50-fold. Regular exercise has been shown to reduce base TNF levels in the long term. Thus, exercise is generally considered to inhibit TNF, which contributes to the overall anti-inflammatory effect of exercise.
Neuroinflammation
In the central nervous system, TNF is primarily produced by microglia, a type of macrophage, but also by neurons, endothelial cells, and immune cells. Excessive TNF contributes to neuroinflammation by causing excitotoxic neuronal cell death, increasing glutamate levels, activating microglial cells, and disrupting the blood–brain barrier. As a result, TNF is seen to play an important role in central nervous system disorders associated with neuroinflammation, including neurosarcoidosis, multiple sclerosis, and Neuro-Behçet's disease.
Paradoxically, TNF-blockers can cause demyelination of neurons and worsen multiple sclerosis symptoms. This is believed to be due to the homeostatic role of TNF in the central nervous system, especially on neuron myelination via TNFR2. The selective blockade of TNFR1 has shown positive outcomes in animal models.
TNF-induced neuroinflammation has also been associated with Alzheimer's disease, and is suspected to contribute to the amyloid-β plaques and tau protein hyperphosphorylation found in the brains of Alzheimer's patients. TNF blockers have been associated with reduced risk of developing Alzheimer's. Some studies have shown TNF blockers to slightly improve cognition in Alzheimer's patients, though larger studies are needed. Since TNF blockers cannot pass through the blood–brain barrier, it is believed that reducing TNF levels across the body also reduces TNF levels within the brain.
TRAPS
In TNF receptor associated periodic syndrome (TRAPS), genetic mutations in TNFR1 lead to defective binding of TNFR1 to TNF, as well as defective shedding of TNFR1, a mechanism that attenuates TNFR1 signalling. This causes periodic inflammation, though the exact mechanism is unknown. TNF blockers such as etanercept have shown partial efficacy in reducing symptoms, while other TNF blockers such as adalimumab and infliximab have been shown to worsen symptoms.
Taste perception
Excessive levels of inflammatory cytokines, such as during infection or autoimmunity, have been associated with anorexia and reduced food intake. It is hypothesized that TNF reduces food intake by increasing sensitivity to bitter taste, though the exact mechanisms of this are unknown.
Pharmacology
TNF blockers
TNF blockers bind to TNF to prevent it from activating its receptors. Additionally, TNF blockers that bind to tmTNF may induce apoptosis in TNF-expressing cells, eliminating inflammatory immune cells. TNF blockers can be monoclonal antibodies, such as infliximab, while others are decoy fusion proteins, like etanercept. New TNF blockers are being developed, including small compounds that can specifically target TNF and monoclonal antibodies with lower immunogenicity potential. Rarely, the suppression of TNF can lead to the development of a new form of "paradoxical" autoimmunity, caused by the overexpression of other cytokines.
References
External links
Cytokines
Immunostimulants | Tumor necrosis factor | [
"Chemistry"
] | 7,451 | [
"Cytokines",
"Signal transduction"
] |
240,850 | https://en.wikipedia.org/wiki/Gene%20silencing | Gene silencing is the regulation of gene expression in a cell to prevent the expression of a certain gene. Gene silencing can occur during either transcription or translation and is often used in research. In particular, methods used to silence genes are being increasingly used to produce therapeutics to combat cancer and other diseases, such as infectious diseases and neurodegenerative disorders.
Gene silencing is often considered the same as gene knockdown. When genes are silenced, their expression is reduced. In contrast, when genes are knocked out, they are completely erased from the organism's genome and, thus, have no expression. Gene silencing is considered a gene knockdown mechanism since the methods used to silence genes, such as RNAi, CRISPR, or siRNA, generally reduce the expression of a gene by at least 70% but do not eliminate it. Methods using gene silencing are often considered better than gene knockouts since they allow researchers to study essential genes that are required for the animal models to survive and cannot be removed. In addition, they provide a more complete view of the development of diseases, since diseases are generally associated with genes that have reduced expression.
Types
Transcriptional
Genomic Imprinting
Paramutation
Transposon silencing (or Histone Modifications)
Transgene silencing
Position effect
RNA-directed DNA methylation
Post-transcriptional
RNA interference
RNA silencing
Nonsense mediated decay
Meiotic
Transvection
Meiotic silencing of unpaired DNA
Research methods
Antisense oligonucleotides
Antisense oligonucleotides were discovered in 1978 by Paul Zamecnik and Mary Stephenson. Oligonucleotides, which are short nucleic acid fragments, bind to complementary target mRNA molecules when added to the cell. These molecules can be composed of single-stranded DNA or RNA and are generally 13–25 nucleotides long. The antisense oligonucleotides can affect gene expression in two ways: by using an RNase H-dependent mechanism or by using a steric blocking mechanism. RNase H-dependent oligonucleotides cause the target mRNA molecules to be degraded, while steric-blocker oligonucleotides prevent translation of the mRNA molecule. The majority of antisense drugs function through the RNase H-dependent mechanism, in which RNase H hydrolyzes the RNA strand of the DNA/RNA heteroduplex.
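At its simplest, designing an antisense oligonucleotide amounts to taking the reverse complement of the chosen target region of the mRNA. The minimal Python sketch below illustrates this with a made-up 18-nucleotide target; real oligo design also weighs chemistry, target accessibility, and off-target effects.

```python
# Pairing rules for a DNA oligo against an RNA target: the value listed
# is the oligo base that pairs with the RNA base used as the key.
COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def antisense_oligo(mrna_region):
    """Reverse complement of an mRNA region, returned as a DNA oligo
    written 5'->3' so it pairs antiparallel with the target."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna_region))

# Hypothetical 18-nt target region of an mRNA (5'->3').
target = "AUGGCCUUCAAGGAGCUG"
print(antisense_oligo(target))  # CAGCTCCTTGAAGGCCAT
```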
Ribozymes
Ribozymes are catalytic RNA molecules used to inhibit gene expression. These molecules work by cleaving mRNA molecules, essentially silencing the genes that produced them. Sidney Altman and Thomas Cech first discovered catalytic RNA molecules (RNase P and the group I intron ribozyme, respectively) in the early 1980s, and won the 1989 Nobel Prize in Chemistry for their discovery. Several types of ribozyme motifs exist, including hammerhead, hairpin, hepatitis delta virus, group I, group II, and RNase P ribozymes. Hammerhead, hairpin, and hepatitis delta virus (HDV) ribozyme motifs are generally found in viruses or viroid RNAs. These motifs are able to self-cleave a specific phosphodiester bond on an mRNA molecule. Lower eukaryotes and a few bacteria contain group I and group II ribozymes. These motifs can self-splice by cleaving and joining phosphodiester bonds. The last ribozyme motif, the RNase P ribozyme, is found in Escherichia coli and is known for its ability to cleave the phosphodiester bonds of several tRNA precursors when joined to a protein cofactor.
The general catalytic mechanism used by ribozymes is similar to the mechanism used by protein ribonucleases. These catalytic RNA molecules bind to a specific site and attack the neighboring phosphate in the RNA backbone with their 2' oxygen, which acts as a nucleophile, resulting in the formation of cleaved products with a 2'3'-cyclic phosphate and a 5' hydroxyl terminal end. This catalytic mechanism has been increasingly used by scientists to perform sequence-specific cleavage of target mRNA molecules. In addition, attempts are being made to use ribozymes to produce gene silencing therapeutics, which would silence genes that are responsible for causing diseases.
RNA interference
RNA interference (RNAi) is a natural process used by cells to regulate gene expression. It was discovered in 1998 by Andrew Fire and Craig Mello, who won the Nobel Prize for their discovery in 2006. The process to silence genes first begins with the entrance of a double-stranded RNA (dsRNA) molecule into the cell, which triggers the RNAi pathway. The double-stranded molecule is then cut into small double-stranded fragments by an enzyme called Dicer. These small fragments, which include small interfering RNAs (siRNA) and microRNA (miRNA), are approximately 21–23 nucleotides in length. The fragments integrate into a multi-subunit protein called the RNA-induced silencing complex, which contains Argonaute proteins that are essential components of the RNAi pathway. One strand of the molecule, called the "guide" strand, binds to RISC, while the other strand, known as the "passenger" strand, is degraded. The guide or antisense strand of the fragment that remains bound to RISC directs the sequence-specific silencing of the target mRNA molecule. The genes can be silenced by siRNA molecules that cause the endonucleolytic cleavage of the target mRNA molecules or by miRNA molecules that suppress translation of the mRNA molecule. With the cleavage or translational repression of the mRNA molecules, the genes that form them are rendered essentially inactive. RNAi is thought to have evolved as a cellular defense mechanism against invaders, such as RNA viruses, or to combat the proliferation of transposons within a cell's DNA. Both RNA viruses and transposons can exist as double-stranded RNA and lead to the activation of RNAi. Currently, siRNAs are being widely used to suppress specific gene expression and to assess the function of genes. Companies utilizing this approach include Alnylam, Sanofi, Arrowhead, Discerna, and Persomics, among others.
Three prime untranslated regions and microRNAs
The three prime untranslated regions (3'UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally cause gene silencing. Such 3'-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3'-UTR, a large number of specific miRNAs decrease gene expression of their particular target mRNAs by either inhibiting translation or directly causing degradation of the transcript, using a mechanism similar to RNA interference (see MicroRNA). The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of an mRNA.
The 3'-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind and cause gene silencing. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
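As a rough illustration of how MREs can be located computationally, the sketch below scans a 3'UTR for exact matches to the reverse complement of a miRNA's canonical seed (nucleotides 2–8). Both sequences are hypothetical stand-ins, and genuine target-prediction tools also weigh conservation, site context, and pairing thermodynamics.

```python
# Rough sketch of MRE scanning: find exact matches in a 3'UTR to the reverse
# complement of a miRNA's canonical seed (nucleotides 2-8). Both sequences are
# hypothetical stand-ins chosen for illustration.

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna: str, utr: str) -> list[int]:
    seed = mirna[1:8]  # nucleotides 2-8 (0-indexed slice)
    site = "".join(RNA_COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"   # 22-nt miRNA with a let-7-like layout
utr = "AAACUACCUCAGGGAAACUACCUCA"  # invented 3'UTR fragment
print(seed_sites(mirna, utr))      # [3, 17]: start indices of 7mer seed-match sites
```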
As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to each have an average of about four hundred target mRNAs (causing gene silencing of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3'UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).
The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down regulating DNA repair enzymes.
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.
Applications
Medical research
Gene silencing techniques have been widely used by researchers to study genes associated with disorders. These disorders include cancer, infectious diseases, respiratory diseases, and neurodegenerative disorders. Gene silencing is also currently being used in drug discovery efforts, such as synthetic lethality, high-throughput screening, and miniaturized RNAi screens.
Cancer
RNA interference has been used to silence genes associated with several cancers. In in vitro studies of chronic myelogenous leukemia (CML), siRNA was used to cleave the fusion protein, BCR-ABL, which prevents the drug Gleevec (imatinib) from binding to the cancer cells. Cleaving the fusion protein reduced the amount of transformed hematopoietic cells that spread throughout the body by increasing the sensitivity of the cells to the drug. RNA interference can also be used to target specific mutants. For instance, siRNAs were able to bind specifically to tumor suppressor p53 molecules containing a single point mutation and destroy them, while leaving the wild-type suppressor intact.
Receptors involved in mitogenic pathways that lead to the increased production of cancer cells have also been targeted by siRNA molecules. The chemokine receptor CXCR4, associated with the proliferation of breast cancer, was cleaved by siRNA molecules that reduced the number of divisions commonly observed in the cancer cells. Researchers have also used siRNAs to selectively regulate the expression of cancer-related genes. Antiapoptotic proteins, such as clusterin and survivin, are often expressed in cancer cells. Clusterin- and survivin-targeting siRNAs were used to reduce the number of antiapoptotic proteins and, thus, increase the sensitivity of the cancer cells to chemotherapy treatments. In vivo studies are also being increasingly utilized to study the potential use of siRNA molecules in cancer therapeutics. For instance, mice implanted with colon adenocarcinoma cells were found to survive longer when the cells were pretreated with siRNAs that targeted β-catenin in the cancer cells.
Infectious disease
Viruses
Viral genes and host genes that are required for viruses to replicate or enter the cell, or that play an important role in the life cycle of the virus are often targeted by antiviral therapies. RNAi has been used to target genes in several viral diseases, such as the human immunodeficiency virus (HIV) and hepatitis. In particular, siRNA was used to silence the primary HIV receptor chemokine receptor 5 (CCR5). This prevented the virus from entering the human peripheral blood lymphocytes and the primary hematopoietic stem cells. A similar technique was used to decrease the amount of the detectable virus in hepatitis B and C infected cells. In hepatitis B, siRNA silencing was used to target the surface antigen on the hepatitis B virus and led to a decrease in the number of viral components. In addition, siRNA techniques used in hepatitis C were able to lower the amount of the virus in the cell by 98%.
RNA interference has been in commercial use to control virus diseases of plants for over 20 years (see Plant disease resistance). In 1986–1990, multiple examples of "coat protein-mediated resistance" against plant viruses were published, before RNAi had been discovered. In 1993, work with tobacco etch virus first demonstrated that host organisms can target specific virus or mRNA sequences for degradation, and that this activity is the mechanism behind some examples of virus resistance in transgenic plants. The discovery of small interfering RNAs (the specificity determinant in RNA-mediated gene silencing) also utilized virus-induced post-transcriptional gene silencing in plants. By 1994, transgenic squash varieties had been generated expressing coat protein genes from three different viruses, providing squash hybrids with field-validated multiviral resistance that remain in commercial use at present. Potato lines expressing viral replicase sequences that confer resistance to potato leafroll virus were sold under the trade names NewLeaf Y and NewLeaf Plus, and were widely accepted in commercial production in 1999–2001, until McDonald's Corp. decided not to purchase GM potatoes and Monsanto decided to close their NatureMark potato business. Another frequently cited example of virus resistance mediated by gene silencing involves papaya, where the Hawaiian papaya industry was rescued by virus-resistant GM papayas produced and licensed by university researchers rather than a large corporation. These papayas also remain in use at present, although not without significant public protest, which is notably less evident in medical uses of gene silencing.
Gene silencing techniques have also been used to target other viruses, such as the human papilloma virus, the West Nile virus, and the Tulane virus. The E6 gene in tumor samples retrieved from patients with the human papilloma virus was targeted and found to cause apoptosis in the infected cells. Plasmid siRNA expression vectors used to target the West Nile virus were also able to prevent the replication of viruses in cell lines. In addition, siRNA has been found to be successful in preventing the replication of the Tulane virus, part of the virus family Caliciviridae, by targeting both its structural and non-structural genes. By targeting the NTPase gene, one dose of siRNA 4 hours pre-infection was shown to control Tulane virus replication for 48 hours post-infection, reducing the viral titer by up to 2.6 logarithms. Although the Tulane virus is species-specific and does not affect humans, it has been shown to be closely related to the human norovirus, which is the most common cause of acute gastroenteritis and food-borne disease outbreaks in the United States. Human noroviruses are notorious for being difficult to study in the laboratory, but the Tulane virus offers a model through which to study this family of viruses for the clinical goal of developing therapies that can be used to treat illnesses caused by human norovirus.
Bacteria
Unlike viruses, bacteria are not as susceptible to silencing by siRNA. This is largely due to how bacteria replicate. Bacteria replicate outside of the host cell and do not contain the necessary machinery for RNAi to function. However, bacterial infections can still be suppressed by siRNA by targeting the host genes that are involved in the immune response caused by the infection or by targeting the host genes involved in mediating the entry of bacteria into cells. For instance, siRNA was used to reduce the amount of pro-inflammatory cytokines expressed in the cells of mice treated with lipopolysaccharide (LPS). The reduced expression of the inflammatory cytokine, tumor necrosis factor α (TNFα), in turn, caused a reduction in the septic shock felt by the LPS-treated mice. In addition, siRNA was used to prevent the bacterium Pseudomonas aeruginosa from invading murine lung epithelial cells by knocking down the caveolin-2 (CAV2) gene. Thus, though bacteria cannot be directly targeted by siRNA mechanisms, they can still be affected by siRNA when the components involved in the bacterial infection are targeted.
Respiratory diseases
Ribozymes, antisense oligonucleotides, and more recently RNAi have been used to target mRNA molecules involved in asthma. These experiments have suggested that siRNA may be used to combat other respiratory diseases, such as chronic obstructive pulmonary disease (COPD) and cystic fibrosis. COPD is characterized by goblet cell hyperplasia and mucus hypersecretion. Mucus secretion was found to be reduced when the transforming growth factor (TGF)-α was targeted by siRNA in NCI-H292 human airway epithelial cells. In addition to mucus hypersecretion, chronic inflammation and damaged lung tissue are characteristic of COPD and asthma. The transforming growth factor TGF-β is thought to play a role in these manifestations. As a result, when interferon (IFN)-γ was used to knock down TGF-β, fibrosis of the lungs, caused by damage and scarring to lung tissue, was improved.
Neurodegenerative disorders
Huntington's disease
Huntington's disease (HD) results from a mutation in the huntingtin gene that causes an excess of CAG repeats. The gene then forms a mutated huntingtin protein with polyglutamine repeats near the amino terminus. This disease is incurable and known to cause motor, cognitive, and behavioral deficits. Researchers have been looking to gene silencing as a potential therapeutic for HD.
Gene silencing can be used to treat HD by targeting the mutant huntingtin protein. The mutant huntingtin protein has been targeted through gene silencing that is allele specific using allele specific oligonucleotides. In this method, the antisense oligonucleotides are used to target single nucleotide polymorphisms (SNPs), which are single nucleotide changes in the DNA sequence, since HD patients have been found to share common SNPs that are associated with the mutated huntingtin allele. It has been found that approximately 85% of patients with HD can be covered when three SNPs are targeted. In addition, when antisense oligonucleotides were used to target an HD-associated SNP in mice, there was a 50% decrease in the mutant huntingtin protein.
Non-allele specific gene silencing using siRNA molecules has also been used to silence the mutant huntingtin proteins. Through this approach, instead of targeting SNPs on the mutated protein, all of the normal and mutated huntingtin proteins are targeted. When studied in mice, it was found that siRNA could reduce the normal and mutant huntingtin levels by 75%. At this level, they found that the mice developed improved motor control and a longer survival rate when compared to the controls. Thus, gene silencing methods may prove to be beneficial in treating HD.
Amyotrophic lateral sclerosis
Amyotrophic lateral sclerosis (ALS), also called Lou Gehrig's disease, is a motor neuron disease that affects the brain and spinal cord. The disease causes motor neurons to degenerate, which eventually leads to neuron death and muscular degeneration. Hundreds of mutations in the Cu/Zn superoxide dismutase (SOD1) gene have been found to cause ALS. Gene silencing has been used to knock down the SOD1 mutant that is characteristic of ALS. In particular, siRNA molecules have been successfully used to target the SOD1 mutant gene and reduce its expression through allele-specific gene silencing.
Therapeutics challenges
There are several challenges associated with gene silencing therapies, including delivery and specificity for targeted cells. For instance, for treatment of neurodegenerative disorders, molecules for a prospective gene silencing therapy must be delivered to the brain. The blood–brain barrier makes it difficult to deliver molecules into the brain through the bloodstream by preventing the passage of the majority of molecules that are injected or absorbed into the blood. Thus, researchers have found that they must directly inject the molecules or implant pumps that push them into the brain.
Once inside the brain, however, the molecules must move inside of the targeted cells. In order to efficiently deliver siRNA molecules into the cells, viral vectors can be used. Nevertheless, this method of delivery can also be problematic as it can elicit an immune response against the molecules. In addition to delivery, specificity has also been found to be an issue in gene silencing. Both antisense oligonucleotides and siRNA molecules can potentially bind to the wrong mRNA molecule. Thus, researchers are searching for more efficient methods to deliver and develop specific gene silencing therapeutics that are still safe and effective.
Food
Arctic Apples are a suite of trademarked apples that contain a nonbrowning trait created by using gene silencing to reduce the expression of polyphenol oxidase (PPO). It is the first approved food product to use this technique.
See also
CRISPR
DNA-directed RNA interference
Gene drive
Gene knockdown
PPRHs
References
External links
RNAiAtlas - database of siRNA libraries and their target analysis results.
Science project: Transgenic apple varieties Approaches to preventing outcrossing – possible effects on micro-organisms
Research project: New Cost-effective method for gene silencing
Gene expression
Epigenetics
DNA
RNA
Post-translational modification
Australian inventions | Gene silencing | [
"Chemistry",
"Biology"
] | 4,429 | [
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
240,935 | https://en.wikipedia.org/wiki/SN1%20reaction | The unimolecular nucleophilic substitution (SN1) reaction is a substitution reaction in organic chemistry. The Hughes-Ingold symbol of the mechanism expresses two properties—"SN" stands for "nucleophilic substitution", and the "1" says that the rate-determining step is unimolecular. Thus, the rate equation is often shown as having first-order dependence on the substrate and zero-order dependence on the nucleophile. This relationship holds for situations where the amount of nucleophile is much greater than that of the intermediate; otherwise, the rate equation is more accurately described using steady-state kinetics. The reaction involves a carbocation intermediate and is commonly seen in reactions of secondary or tertiary alkyl halides under weakly nucleophilic, solvolytic conditions or, under strongly acidic conditions, with secondary or tertiary alcohols. With primary alkyl halides, and with secondary alkyl halides in the presence of strong nucleophiles, the alternative SN2 reaction occurs instead. In inorganic chemistry, the SN1 reaction is often known as dissociative substitution. This dissociation pathway is well described by the cis effect. The reaction mechanism was first proposed by Christopher Ingold et al. in 1940. Unlike the SN2 mechanism, this reaction does not depend much on the strength of the nucleophile. The mechanism involves two steps. The first step is the ionization of the alkyl halide, for example in aqueous acetone or ethyl alcohol; this step provides a carbocation as an intermediate.
In the first step of the SN1 mechanism, a planar carbocation is formed. Attack of the nucleophile (second step) may therefore occur from either face, which would be expected to give a racemic product, but complete racemization does not actually take place. This is because the nucleophilic species can attack the carbocation before the departing halide ion has moved sufficiently far away. The negatively charged halide ion shields the carbocation from attack on the front side, so backside attack, which leads to inversion of configuration, is preferred. The actual product therefore consists of a mixture of enantiomers in which the enantiomer with inverted configuration predominates, and complete racemization does not occur.
Mechanism
An example of a reaction taking place with an SN1 reaction mechanism is the hydrolysis of tert-butyl bromide forming tert-butanol:
This SN1 reaction takes place in three steps:
Formation of a tert-butyl carbocation by separation of a leaving group (a bromide anion) from the carbon atom: this step is slow.
Nucleophilic attack: the carbocation reacts with the nucleophile. If the nucleophile is a neutral molecule (i.e. a solvent) a third step is required to complete the reaction. When the solvent is water, the intermediate is an oxonium ion. This reaction step is fast.
Deprotonation: Removal of a proton on the protonated nucleophile by water acting as a base forming the alcohol and a hydronium ion. This reaction step is fast.
Rate law
Although the rate law of the SN1 reaction is often regarded as being first order in alkyl halide and zero order in nucleophile, this is a simplification that holds true only under certain conditions. While it, too, is an approximation, the rate law derived from the steady state approximation (SSA) provides more insight into the kinetic behavior of the SN1 reaction. Consider the following reaction scheme for the mechanism shown above:

$$\text{tBuBr} \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; \text{tBu}^+ + \text{Br}^- \qquad\qquad \text{tBu}^+ + \text{H}_2\text{O} \;\xrightarrow{k_2}\; \text{tBuOH}_2^+ \;\xrightarrow{\text{fast},\,-\text{H}^+}\; \text{tBuOH}$$
Though a relatively stable tertiary carbocation, tert-butyl cation is a high-energy species that is present only at very low concentration and cannot be directly observed under normal conditions. Thus, the SSA can be applied to this species:
(1) Steady state assumption: $\dfrac{d[\text{tBu}^+]}{dt} = 0 = k_1[\text{tBuBr}] - k_{-1}[\text{tBu}^+][\text{Br}^-] - k_2[\text{tBu}^+][\text{H}_2\text{O}]$

(2) Concentration of t-butyl cation, based on steady state assumption: $[\text{tBu}^+] = \dfrac{k_1[\text{tBuBr}]}{k_{-1}[\text{Br}^-] + k_2[\text{H}_2\text{O}]}$

(3) Overall reaction rate, assuming rapid final step: $\dfrac{d[\text{tBuOH}]}{dt} = k_2[\text{tBu}^+][\text{H}_2\text{O}]$

(4) Steady state rate law, by plugging (2) into (3): $\dfrac{d[\text{tBuOH}]}{dt} = \dfrac{k_1 k_2\,[\text{tBuBr}][\text{H}_2\text{O}]}{k_{-1}[\text{Br}^-] + k_2[\text{H}_2\text{O}]}$
Under normal synthetic conditions, the entering nucleophile is more nucleophilic than the leaving group and is present in excess. Moreover, kinetic experiments are often conducted under initial rate conditions (5 to 10% conversion) and without the addition of bromide, so $[\text{Br}^-]$ is negligible. For these reasons, $k_{-1}[\text{Br}^-] \ll k_2[\text{H}_2\text{O}]$ often holds. Under these conditions, the SSA rate law reduces to $\text{rate} = k_1[\text{tBuBr}]$,
the simple first-order rate law described in introductory textbooks. Under these conditions, the concentration of the nucleophile does not affect the rate of the reaction, and changing the nucleophile (e.g. from H2O to MeOH) does not affect the reaction rate, though the product is, of course, different. In this regime, the first step (ionization of the alkyl bromide) is slow, rate-determining, and irreversible, while the second step (nucleophilic addition) is fast and kinetically invisible.
However, under certain conditions, non-first-order reaction kinetics can be observed. In particular, when a large concentration of bromide is present while the concentration of water is limited, the reverse of the first step becomes important kinetically. As the SSA rate law indicates, under these conditions there is a fractional (between zeroth and first order) dependence on [H2O], while there is a negative fractional order dependence on [Br–]. Thus, SN1 reactions are often observed to slow down when an exogenous source of the leaving group (in this case, bromide) is added to the reaction mixture. This is known as the common ion effect and the observation of this effect is evidence for an SN1 mechanism (although the absence of a common ion effect does not rule it out).
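A short numerical sketch of this behavior follows; the rate constants and concentrations are purely illustrative (not measured values), chosen only to show the first-order limit and the common ion effect.

```python
# Numerical sketch of the SSA rate law above. All rate constants and
# concentrations are hypothetical, chosen only to show the first-order limit
# and the common ion effect; they are not measured values.

k1, k_minus1, k2 = 1e-4, 1e6, 1e5  # s^-1, M^-1 s^-1, M^-1 s^-1 (illustrative)

def ssa_rate(rbr: float, water: float, bromide: float) -> float:
    """rate = k1*k2*[tBuBr][H2O] / (k-1*[Br-] + k2*[H2O])"""
    return k1 * k2 * rbr * water / (k_minus1 * bromide + k2 * water)

rbr, water = 0.01, 55.5  # M
print(ssa_rate(rbr, water, bromide=0.0))  # equals k1*[tBuBr]: ionization is rate-limiting
print(k1 * rbr)                           # first-order limit, for comparison
print(ssa_rate(rbr, water, bromide=1.0))  # added Br- slows the rate: common ion effect
```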
Scope
The SN1 mechanism tends to dominate when the central carbon atom is surrounded by bulky groups because such groups sterically hinder the SN2 reaction. Additionally, bulky substituents on the central carbon increase the rate of carbocation formation because of the relief of steric strain that occurs. The resultant carbocation is also stabilized by both inductive stabilization and hyperconjugation from attached alkyl groups. The Hammond–Leffler postulate suggests that this, too, will increase the rate of carbocation formation. The SN1 mechanism therefore dominates in reactions at tertiary alkyl centers.
An example of a reaction proceeding in a SN1 fashion is the synthesis of 2,5-dichloro-2,5-dimethylhexane from the corresponding diol with concentrated hydrochloric acid:
As the degree of substitution at the alpha and beta carbons relative to the leaving group increases, the reaction is diverted from SN2 to SN1.
Stereochemistry
The carbocation intermediate formed in the reaction's rate determining step (RDS) is an sp2 hybridized carbon with trigonal planar molecular geometry. This allows two different ways for the nucleophilic attack, one on either side of the planar molecule. If neither approach is favored, then these two ways occur equally, yielding a racemic mixture of enantiomers if the reaction takes place at a stereocenter. This is illustrated below in the SN1 reaction of S-3-chloro-3-methylhexane with an iodide ion, which yields a racemic mixture of 3-iodo-3-methylhexane:
However, an excess of one stereoisomer can be observed, as the leaving group can remain in proximity to the carbocation intermediate for a short time and block nucleophilic attack. This stands in contrast to the SN2 mechanism, which is a stereospecific mechanism where stereochemistry is always inverted as the nucleophile comes in from the rear side of the leaving group.
Side reactions
Two common side reactions are elimination reactions and carbocation rearrangement. If the reaction is performed under warm or hot conditions (which favor an increase in entropy), E1 elimination is likely to predominate, leading to formation of an alkene. At lower temperatures, SN1 and E1 reactions are competitive reactions and it becomes difficult to favor one over the other. Even if the reaction is performed cold, some alkene may be formed. If an attempt is made to perform an SN1 reaction using a strongly basic nucleophile such as hydroxide or methoxide ion, the alkene will again be formed, this time via an E2 elimination. This will be especially true if the reaction is heated. Finally, if the carbocation intermediate can rearrange to a more stable carbocation, it will give a product derived from the more stable carbocation rather than the simple substitution product.
Solvent effects
Since the SN1 reaction involves formation of an unstable carbocation intermediate in the rate-determining step (RDS), anything that can facilitate this process will speed up the reaction. The normal solvents of choice are both polar (to stabilize ionic intermediates in general) and protic solvents (to solvate the leaving group in particular). Typical polar protic solvents include water and alcohols, which will also act as nucleophiles, and the process is known as solvolysis.
The Y scale correlates solvolysis reaction rates of any solvent (k) with that of a standard solvent (80% v/v ethanol/water) (k0) through

$$\log_{10}\frac{k}{k_0} = mY$$

with m a reactant constant (m = 1 for tert-butyl chloride) and Y a solvent parameter. For example, 100% ethanol gives Y = −2.3, 50% ethanol in water Y = +1.65, and 15% ethanol in water Y = +3.2.
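For illustration, the relation can be rearranged to k/k0 = 10^(mY). The minimal sketch below evaluates this for the Y values quoted above, assuming m = 1 as for tert-butyl chloride.

```python
# Sketch of the Grunwald-Winstein relation rearranged as k/k0 = 10**(m*Y),
# evaluated for the Y values quoted above with m = 1 (tert-butyl chloride).
m = 1.0
for label, Y in [("100% EtOH", -2.3), ("50% EtOH/water", 1.65), ("15% EtOH/water", 3.2)]:
    print(f"{label}: k/k0 = {10 ** (m * Y):.3g}")
```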
See also
Arrow pushing
Nucleophilic acyl substitution
Neighbouring group participation
SN2 reaction
References
External links
Diagrams: Frostburg State University
Exercise: the University of Maine
Nucleophilic substitution reactions
Reaction mechanisms | SN1 reaction | [
"Chemistry"
] | 2,133 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
240,949 | https://en.wikipedia.org/wiki/Markovnikov%27s%20rule | In organic chemistry, Markovnikov's rule or Markownikoff's rule describes the outcome of some addition reactions. The rule was formulated by Russian chemist Vladimir Markovnikov in 1870.
Explanation
The rule states that with the addition of a protic acid HX or other polar reagent to an asymmetric alkene, the acid hydrogen (H) or electropositive part gets attached to the carbon with more hydrogen substituents, and the halide (X) group or electronegative part gets attached to the carbon with more alkyl substituents. This is equivalent to Markovnikov's original formulation, in which the X component is added to the carbon with the fewest hydrogen atoms while the hydrogen atom is added to the carbon with the greatest number of hydrogen atoms.
The same is true when an alkene reacts with water in an addition reaction to form an alcohol that involves carbocation formation. The hydroxyl group (OH) bonds to the carbon that has the greater number of carbon–carbon bonds, while the hydrogen bonds to the carbon on the other end of the double bond, that has more carbon–hydrogen bonds.
The chemical basis for Markovnikov's Rule is the formation of the most stable carbocation during the addition process. Adding the hydrogen ion to one carbon atom in the alkene creates a positive charge on the other carbon, forming a carbocation intermediate. The more substituted the carbocation, the more stable it is, due to induction and hyperconjugation. The major product of the addition reaction will be the one formed from the more stable intermediate. Therefore, the major product of the addition of HX (where X is some atom more electronegative than H) to an alkene has the hydrogen atom in the less substituted position and X in the more substituted position. But the other less substituted, less stable carbocation will still be formed at some concentration and will proceed to be the minor product with the opposite, conjugate attachment of X.
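A toy sketch of this heuristic follows: it assumes alkyl-substituent count as the only proxy for carbocation stability, and all names and the propene example are illustrative, not a cheminformatics tool.

```python
# Toy heuristic, not a cheminformatics tool: protonation of an asymmetric
# alkene is assumed to occur so that the positive charge lands on the carbon
# with more alkyl substituents (the more stable carbocation).

from dataclasses import dataclass

@dataclass
class Sp2Carbon:
    name: str
    n_hydrogens: int  # H atoms on this alkene carbon
    n_alkyl: int      # alkyl substituents on this alkene carbon

def markovnikov_hx_addition(c1: Sp2Carbon, c2: Sp2Carbon) -> str:
    # Adding H to one carbon puts the cation on the other, so compare the
    # substitution of the partner carbon for each choice.
    if c2.n_alkyl >= c1.n_alkyl:
        return f"H adds to {c1.name}; X adds to {c2.name} (cation on {c2.name} is more stable)"
    return f"H adds to {c2.name}; X adds to {c1.name} (cation on {c1.name} is more stable)"

# Propene, CH2=CH-CH3: predicts H on the terminal CH2 and X on the internal
# carbon, i.e. 2-halopropane as the major product.
print(markovnikov_hx_addition(Sp2Carbon("CH2 (terminal)", 2, 0),
                              Sp2Carbon("CH (internal)", 1, 1)))
```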
Anti-Markovnikov reactions
Also called Kharasch effect (named after Morris S. Kharasch), these reactions that do not involve a carbocation intermediate may react through other mechanisms that have regioselectivities not dictated by Markovnikov's rule, such as free radical addition. Such reactions are said to be anti-Markovnikov, since the halogen adds to the less substituted carbon, the opposite of a Markovnikov reaction.
The anti-Markovnikov rule can be illustrated using the addition of hydrogen bromide to isobutylene in the presence of benzoyl peroxide or hydrogen peroxide. The reaction of HBr with substituted alkenes was prototypical in the study of free-radical additions. Early chemists discovered that the reason for the variability in the ratio of Markovnikov to anti-Markovnikov reaction products was the unexpected presence of free-radical-generating substances such as peroxides. The explanation is that the O–O bond in peroxides is relatively weak. With the aid of light, heat, or sometimes even on its own, the O–O bond can split to form two radicals. These radicals can then interact with HBr to produce a Br radical, which then reacts with the double bond. Since the bromine atom is relatively large, it is more likely to encounter and react with the less substituted carbon, since this produces less steric interaction between the carbon and the bromine radical. Furthermore, like a positively charged species, the radical is most stable when the unpaired electron sits at the more substituted position. The radical intermediate is stabilized by hyperconjugation: in the more substituted position, more carbon–hydrogen bonds are aligned with the radical's electron-deficient orbital, so that position is more favorable. The bromine radical therefore adds to the terminal carbon, and the reaction yields the primary addition product rather than the secondary one.
A new method of anti-Markovnikov addition has been described by Hamilton and Nicewicz, who utilize aromatic molecules and light energy from a low-energy diode to turn the alkene into a cation radical.
Anti-Markovnikov behaviour extends to more chemical reactions than additions to alkenes. Anti-Markovnikov behaviour is observed in the hydration of phenylacetylene by auric catalysis, which gives acetophenone; although with a special ruthenium catalyst it provides the other regioisomer 2-phenylacetaldehyde:
Anti-Markovnikov behavior can also manifest itself in certain rearrangement reactions. In a titanium(IV) chloride-catalyzed formal nucleophilic substitution at enantiopure 1 in the scheme below, two products are formed – 2a and 2b. Because the target molecule has two chiral centers – the carbon carrying the chlorine and the carbon carrying the methyl and acetoxyethyl groups – four different stereoisomers could in principle form: 1R,2R (drawn as 2b), 1R,2S, 1S,2R (drawn as 2a), and 1S,2S, so each of the depicted structures can exist in a D- and an L-form.
This product distribution can be rationalized by assuming that loss of the hydroxy group in 1 gives the tertiary carbocation A, which rearranges to the seemingly less stable secondary carbocation B. Chlorine can approach this center from two faces leading to the observed mixture of isomers.
Another notable example of anti-Markovnikov addition is hydroboration.
See also
Kharasch addition
Zaitsev's rule
Hofmann's rule
References
External links
Eponymous chemical rules
Physical organic chemistry
Rules of thumb | Markovnikov's rule | [
"Chemistry"
] | 1,232 | [
"Physical organic chemistry"
] |
240,972 | https://en.wikipedia.org/wiki/White%20hole | In general relativity, a white hole is a hypothetical region of spacetime and singularity that cannot be entered from the outside, although energy-matter, light and information can escape from it. In this sense, it is the reverse of a black hole, from which energy-matter, light and information cannot escape. White holes appear in the theory of eternal black holes. In addition to a black hole region in the future, such a solution of the Einstein field equations has a white hole region in its past. This region does not exist for black holes that have formed through gravitational collapse, however, nor are there any observed physical processes through which a white hole could be formed.
Supermassive black holes (SMBHs) are theoretically predicted to be at the center of every galaxy and may be essential for their formation. Stephen Hawking and others have proposed that these supermassive black holes could spawn supermassive white holes.
Overview
Like black holes, white holes have properties such as mass, charge, and angular momentum. They attract matter like any other mass, but objects falling towards a white hole would never actually reach the white hole's event horizon (though in the case of the maximally extended Schwarzschild solution, discussed below, the white hole event horizon in the past becomes a black hole event horizon in the future, so any object falling towards it will eventually reach the black hole horizon). Imagine a gravitational field without a surface. For an ordinary body, the acceleration due to gravity is greatest at its surface; since a black hole lacks a surface, the acceleration due to gravity keeps increasing as the singularity is approached and never reaches a final value, because there is no surface at which it could peak.
In quantum mechanics, the black hole emits Hawking radiation and so can come to thermal equilibrium with a gas of radiation (though it need not). Because a thermal-equilibrium state is time-reversal-invariant, Stephen Hawking argued that the time reversal of a black hole in thermal equilibrium results in a white hole in thermal equilibrium (each absorbing and emitting energy to equivalent degrees). Consequently, this may imply that black holes and white holes are reciprocal in structure, wherein the Hawking radiation from an ordinary black hole is identified with a white hole's emission of energy and matter. Hawking's semi-classical argument is reproduced in a quantum mechanical AdS/CFT treatment, where a black hole in anti-de Sitter space is described by a thermal gas in a gauge theory, whose time reversal is the same as itself.
History
In 1939, physicists Robert Oppenheimer and Hartland Snyder analyzed gravitational collapse as a solution to Einstein's equations of general relativity, the framework within which white holes would later be formulated. These equations, the foundation of modern physics, describe the curvature of spacetime due to massive objects. Whereas black holes are born from the collapse of stars, white holes represent the theoretical birth of space, time, and potentially even universes. At the center, space and time do not end in a singularity, but continue across a short transition region where the Einstein equations are violated by quantum effects. From this region, space and time emerge with the structure of a white hole interior, a possibility already suggested by John Lighton Synge.
The possibility of the existence of white holes was put forward by cosmologist Igor Novikov in 1964, developed by Nikolai Kardashev. White holes are predicted as part of a solution to the Einstein field equations known as the maximally extended version of the Schwarzschild metric describing an eternal black hole with no charge and no rotation. Here, "maximally extended" implies that spacetime should not have any "edges". For any possible trajectory of a free-falling particle (following a geodesic) in spacetime, it should be possible to continue this path arbitrarily far into the particle's future, unless the trajectory hits a gravitational singularity like the one at the center of the black hole's interior. In order to satisfy this requirement, it turns out that in addition to the black hole interior region that particles enter when they fall through the event horizon from the outside, there must be a separate white hole interior region, which allows us to extrapolate the trajectories of particles that an outside observer sees rising up away from the event horizon. For an observer outside using Schwarzschild coordinates, infalling particles take an infinite time to reach the black hole horizon infinitely far in the future, while outgoing particles that pass the observer have been traveling outward for an infinite time since crossing the white hole horizon infinitely far in the past (however, the particles or other objects experience only a finite proper time between crossing the horizon and passing the outside observer). The black hole/white hole appears "eternal" from the perspective of an outside observer, in the sense that particles traveling outward from the white hole interior region can pass the observer at any time, and particles traveling inward, which will eventually reach the black hole interior region can also pass the observer at any time.
Just as there are two separate interior regions of the maximally extended spacetime, there are also two separate exterior regions, sometimes called two different "universes", with the second universe allowing us to extrapolate some possible particle trajectories in the two interior regions. This means that the interior black-hole region can contain a mix of particles that fell in from either universe (and thus an observer who fell in from one universe might be able to see light that fell in from the other one), and likewise particles from the interior white-hole region can escape into either universe. All four regions can be seen in a spacetime diagram that uses Kruskal–Szekeres coordinates (see figure).
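For reference, the standard construction (a textbook result; sign and labeling conventions vary) defines the Kruskal–Szekeres coordinates $(T, X)$, in units with $c = 1$, by

$$T = \left(\frac{r}{2GM}-1\right)^{1/2} e^{r/4GM}\,\sinh\frac{t}{4GM},\qquad X = \left(\frac{r}{2GM}-1\right)^{1/2} e^{r/4GM}\,\cosh\frac{t}{4GM} \qquad (r > 2GM),$$

$$T = \left(1-\frac{r}{2GM}\right)^{1/2} e^{r/4GM}\,\cosh\frac{t}{4GM},\qquad X = \left(1-\frac{r}{2GM}\right)^{1/2} e^{r/4GM}\,\sinh\frac{t}{4GM} \qquad (r < 2GM).$$

In these coordinates the first exterior universe is region I ($X > |T|$), the black hole interior is region II ($T > |X|$), the second exterior universe is region III ($X < -|T|$), and the white hole interior is region IV ($T < -|X|$); the horizons lie on $T = \pm X$ and the singularity $r = 0$ on $T^2 - X^2 = 1$.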
In this spacetime, it is possible to come up with coordinate systems such that if you pick a hypersurface of constant time (a set of points that all have the same time coordinate, such that every point on the surface has a space-like separation, giving what is called a 'space-like surface') and draw an "embedding diagram" depicting the curvature of space at that time, the embedding diagram will look like a tube connecting the two exterior regions, known as an "Einstein-Rosen bridge" or Schwarzschild wormhole. Depending on where the space-like hypersurface is chosen, the Einstein-Rosen bridge can either connect two black hole event horizons in each universe (with points in the interior of the bridge being part of the black hole region of the spacetime), or two white hole event horizons in each universe (with points in the interior of the bridge being part of the white hole region). It is impossible to use the bridge to cross from one universe to the other, however, because it is impossible to enter a white hole event horizon from the outside, and anyone entering a black hole horizon from either universe will inevitably hit the black hole singularity.
Note that the maximally extended Schwarzschild metric describes an idealized black hole/white hole that exists eternally from the perspective of external observers; a more realistic black hole that forms at some particular time from a collapsing star would require a different metric. When the infalling stellar matter is added to a diagram of a black hole's history, it removes the part of the diagram corresponding to the white hole interior region. But because the equations of general relativity are time-reversible – they exhibit time-reversal symmetry – general relativity must also allow the time-reverse of this type of "realistic" black hole that forms from collapsing matter. The time-reversed case would be a white hole that has existed since the beginning of the universe, and that emits matter until it finally "explodes" and disappears. Despite the fact that such objects are permitted theoretically, they are not taken as seriously as black holes by physicists, since there would be no processes that would naturally lead to their formation; they could exist only if they were built into the initial conditions of the Big Bang. Additionally, it is predicted that such a white hole would be highly "unstable" in the sense that if any small amount of matter fell towards the horizon from the outside, this would prevent the white hole's explosion as seen by distant observers, with the matter emitted from the singularity never able to escape the white hole's gravitational radius.
Properties
Depending on the type of black hole solution considered, there are several types of white holes. In the case of the Schwarzschild black hole mentioned above, a geodesic coming out of a white hole comes from the "gravitational singularity" it contains. In the case of a black hole possessing an electric charge (Reissner–Nordström black hole) or an angular momentum (Kerr black hole), the white hole happens to be the "exit door" of a black hole existing in another universe. Such a black hole – white hole configuration is called a wormhole. In both cases, however, it is not possible to reach the region "in" the white hole, so its behavior – and, in particular, what may come out of it – is completely impossible to predict. In this sense, a white hole is a configuration according to which the evolution of the universe cannot be predicted, because it is not deterministic. A "naked singularity" is another example of a non-deterministic configuration, but it does not have the status of a white hole, because there is no region inaccessible from a given region. In its basic conception, the Big Bang can be seen as a naked singularity, but it does not correspond to a white hole.
Physical relevance
As to its mode of formation, a black hole arises as the residue of a massive star whose core contracts until it collapses. Such a configuration is not static: we start from a massive and extended body which contracts to give a black hole. The black hole therefore does not exist for all eternity, and there is no corresponding white hole.
To be able to exist, a white hole must either arise from a physical process leading to its formation, or be present from the creation of the universe. None of these solutions appears satisfactory: there is no known astrophysical process that can lead to the formation of such a configuration, and imposing it from the creation of the universe amounts to assuming a very specific set of initial conditions which has no concrete motivation.
In view of the enormous quantities of energy radiated by quasars, whose luminosity makes it possible to observe them from several billion light-years away, it had been assumed that they were the seat of exotic physical phenomena such as a white hole, or a phenomenon of continuous creation of matter (see the article on the steady state theory). These ideas are now abandoned, the observed properties of quasars being very well explained by those of an accretion disk at the center of which is a supermassive black hole.
Big Bang/Supermassive White Hole
A view of black holes first proposed in the late 1980s might be interpreted as shedding some light on the nature of classical white holes. Some researchers have proposed that when a black hole forms, a Big Bang may occur at the core/singularity, which would create a new universe that expands outside of the parent universe.
The Einstein–Cartan–Sciama–Kibble theory of gravity extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum (spin) of matter. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular black hole. In the Einstein–Cartan theory, however, the minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity. Instead, the collapsing matter on the other side of the event horizon reaches an enormous but finite density and rebounds, forming a regular Einstein–Rosen bridge. The other side of the bridge becomes a new, growing baby universe. For observers in the baby universe, the parent universe appears as the only white hole. Accordingly, the observable universe is the Einstein–Rosen interior of a black hole existing as one of possibly many inside a larger universe. The Big Bang was a nonsingular Big Bounce at which the observable universe had a finite, minimum scale factor.
Shockwave cosmology, proposed by Joel Smoller and Blake Temple in 2003, has the “big bang” as an explosion inside a black hole, producing the expanding volume of space and matter that includes the observable universe. This black hole eventually becomes a white hole as the matter density reduces with the expansion. A related theory gives an alternative to dark energy.
A 2012 paper argues that the Big Bang itself is a white hole. It further suggests that the emergence of a white hole, which was named a "Small Bang", is spontaneous—all the matter is ejected at a single pulse. Thus, unlike black holes, white holes cannot be continuously observed; rather, their effects can be detected only around the event itself. The paper even proposed identifying a new group of gamma-ray bursts with white holes.
Various hypotheses
Unlike black holes for which there is a well-studied physical process, gravitational collapse (which gives rise to black holes when a star somewhat more massive than the sun exhausts its nuclear "fuel"), there is no clear analogous process that leads reliably to the production of white holes. Some hypotheses have nonetheless been put forward:
White holes as a kind of "exit" from black holes, with the two singularities connected by a wormhole (note that, like white holes, wormholes have not yet been found); when quasars were discovered it was assumed that these were the sought-after white holes, but this assumption has now been discarded.
Another widespread idea is that white holes would be very unstable, would last a very short time and even after forming could collapse and become black holes.
Astronomers Alon Retter and Shlomo Heller suggest that the GRB 060614 anomalous gamma-ray burst that occurred in 2006 was a "white hole".
In 2014, the idea of the Big Bang being produced by a supermassive white hole explosion was explored in the framework of a five-dimensional vacuum by Madriz Aguilar, Moreno and Bellini.
Finally, it has been postulated that white holes could be the temporal inverse of a black hole.
At present, very few scientists believe in the existence of white holes and it is considered only a mathematical exercise with no real-world counterpart.
In popular culture
A white hole appears in the Red Dwarf episode of the same name, wherein the protagonists must find a way to deal with its temporal effects.
A white hole serves as a major source of conflict in the Yu-Gi-Oh! GX anime, as the radiance it exudes is both sentient and evil, known as the Light of Destruction.
A white hole serves as a very important location in the video game Outer Wilds. In this game, falling into the black hole in the center of the planet Brittle Hollow leads to this white hole.
A white hole appears in the animated television series Voltron: Legendary Defender.
See also
Arrow of time
White hole cosmology
Big Bounce
Gravitational singularity
Black hole
Black hole cosmology
Conformal cyclic cosmology
Dark matter
Dark energy
Exotic matter
Naked singularity
Antiparticle
Antimatter
Negative mass
Negative energy
Planck star
Quantum mechanics
Spacetime
Star
Wormhole
Quasar
Q star
Solar System
Multiverse
Many-worlds interpretation
References
External links
Embedding of the inverted Schwarzschild Solution 2d plot White hole in Google
Schwarzschild Wormholes.
Schwarzschild Wormhole animation.
Shockwave cosmology inside a Black Hole
Michio Kaku: Mr Parallel Universe
End of Black Hole Is Starting of Big Bang – Discussed in Newsgroup in 1999
Forward to the Future 1:Trapped in Time!.
Forward to the Future 2:Back to the Past, with Interest....
White holes
Gravity
Concepts in astronomy
General relativity
Hypothetical astronomical objects
Astrophysics | White hole | [
"Physics",
"Astronomy"
] | 3,325 | [
"Black holes",
"Astronomical hypotheses",
"Concepts in astronomy",
"Unsolved problems in physics",
"Astronomical myths",
"Astrophysics",
"General relativity",
"White holes",
"Hypothetical astronomical objects",
"Theory of relativity",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
240,986 | https://en.wikipedia.org/wiki/Oxymercuration%20reaction | In organic chemistry, the oxymercuration reaction is an electrophilic addition reaction that transforms an alkene () into a neutral alcohol. In oxymercuration, the alkene reacts with mercuric acetate () in aqueous solution to yield the addition of an acetoxymercury () group and a hydroxy () group across the double bond. Carbocations are not formed in this process and thus rearrangements are not observed. The reaction follows Markovnikov's rule (the hydroxy group will always be added to the more substituted carbon). The oxymercuration part of the reaction involves anti addition of OH group but the demercuration part of the reaction involves free radical mechanism and is not stereospecific, i.e. H and OH may be syn or anti to each other.
Oxymercuration followed by reductive demercuration is called an oxymercuration–reduction reaction or oxymercuration–demercuration reaction. This reaction, which is almost always done in practice instead of oxymercuration, is treated at the conclusion of the article.
Mechanism
Oxymercuration can be fully described in three steps (the reverse process is called deoxymercuration), which is illustrated in stepwise fashion to the right. In the first step, the nucleophilic double bond attacks the mercury ion, ejecting an acetoxy group. The electron pair on the mercury ion in turn attacks a carbon on the double bond, forming a mercurinium ion in which the mercury atom bears a positive charge. The electrons in the highest occupied molecular orbital of the double bond are donated to mercury's empty 6s orbital and the electrons in mercury's dxz (or dyz) orbital are donated in the lowest unoccupied molecular orbital of the double bond.
In the second step, the nucleophilic water molecule attacks the more substituted carbon, liberating the electrons participating in its bond with mercury. The electrons collapse to the mercury ion and neutralize it. The oxygen in the water molecule now bears a positive charge.
In the third step, a negatively charged acetate ion deprotonates the alkyloxonium ion, forming the waste product HOAc. The two electrons participating in the bond between oxygen and the attacked hydrogen collapse into the oxygen, neutralizing its charge and creating the final alcohol product.
Regioselectivity and stereochemistry
Oxymercuration is very regioselective and is a textbook Markovnikov reaction; ruling out extreme cases, the water nucleophile will always preferentially attack the more substituted carbon, depositing the resultant hydroxy group there. This phenomenon is explained by examining the three resonance structures of the mercuronium ion formed at the end of the step one.
By inspection of these structures, it is seen that the positive charge of the mercury atom will sometimes reside on the more substituted carbon (approximately 4% of the time). This forms a temporary tertiary carbocation, which is a very reactive electrophile. The nucleophile will attack the mercuronium ion at this time. Therefore, the nucleophile attacks the more substituted carbon because it retains a more positive character than the lesser substituted carbon.
Stereochemically, oxymercuration is an anti addition. As illustrated by the second step, the nucleophile cannot attack the carbon from the same face as the mercury ion because of steric hindrance. There is simply insufficient room on that face of the molecule to accommodate both a mercury ion and the attacking nucleophile. Therefore, when free rotation is impossible, the hydroxy and acetoxymercuri groups will always be trans to each other.
Shown below is an example of regioselectivity and stereospecificity of the oxymercuration reaction with substituted cyclohexenes. A bulky group like t-butyl locks the ring in a chair conformation and prevents ring flips. With 4-t-butylcyclohexene, oxymercuration yields two products – where addition across the double bond is always anti – with slight preference towards the acetoxymercury group trans to the t-butyl group, resulting in slightly more cis product. With 1-methyl-4-t-butylcyclohexene, oxymercuration yields only one product – still anti addition across the double bond – where water only attacks the more substituted carbon. The reason for anti addition across the double bond is to maximize orbital overlap of the lone pair of water and the empty orbital of the mercurinium ion on the opposite side of the acetoxymercury group. Regioselectivity is observed to favor water attacking the more substituted carbon, but water does not add syn across the double bond, which implies that the transition state favors water attacking from the opposite side of the acetoxymercury group.
Oxymercuration–reduction
In practice, the mercury adduct product created by the oxymercuration reaction is almost always treated with sodium borohydride (NaBH4) in aqueous base in a reaction called demercuration. In demercuration, the acetoxymercury group is replaced with a hydrogen in a stereochemically insensitive reaction known as reductive elimination. The combination of oxymercuration followed immediately by demercuration is called an oxymercuration–reduction reaction.
Therefore, the oxymercuration–reduction reaction is the net addition of water across the double bond. Any stereochemistry set up by the oxymercuration step is scrambled by the demercuration step, so that the hydrogen and hydroxy group may be cis or trans from each other. Oxymercuration–reduction is a popular laboratory technique to achieve alkene hydration with Markovnikov selectivity while avoiding carbocation intermediates and thus the rearrangement which can lead to complex product mixtures.
Other applications
Oxymercuration is not limited to an alkene reacting with water to add hydroxyl and mercury groups. The carbon–mercury structure can undergo spontaneous replacement of the mercury by hydrogen, rather than persisting until a separate reduction step. In this manner, the effect is for mercury to be a Lewis acid catalyst. For example, using an alkyne instead of an alkene yields an enol, which tautomerizes into a ketone. Using an alcohol instead of water yields an ether (see also Hofmann-Sand reaction). In both cases, Markovnikov's rule is observed.
Using a vinyl ether in the presence of an alcohol allows the replacement of one alkoxy group (RO–) by another by way of an acetal intermediate. An allyl alcohol and a vinyl ether under oxymercuration conditions can give R–CH=CH–CH2–O–CH=CH2, which is suitable for a Claisen rearrangement.
See also
Hydroboration–oxidation reaction
Mukaiyama hydration
References
Addition reactions
Carbon-heteroatom bond forming reactions | Oxymercuration reaction | [
"Chemistry"
] | 1,494 | [
"Carbon-heteroatom bond forming reactions",
"Organic reactions"
] |
241,017 | https://en.wikipedia.org/wiki/Adrenergic%20receptor | The adrenergic receptors or adrenoceptors are a class of G protein-coupled receptors that are targets of many catecholamines like norepinephrine (noradrenaline) and epinephrine (adrenaline) produced by the body, but also many medications like beta blockers, beta-2 (β2) antagonists and alpha-2 (α2) agonists, which are used to treat high blood pressure and asthma, for example.
Many cells have these receptors, and the binding of a catecholamine to the receptor will generally stimulate the sympathetic nervous system (SNS). The SNS is responsible for the fight-or-flight response, which is triggered by experiences such as exercise or fear-causing situations. This response dilates pupils, increases heart rate, mobilizes energy, and diverts blood flow from non-essential organs to skeletal muscle. These effects together tend to increase physical performance momentarily.
History
By the turn of the 20th century, it was agreed that the stimulation of sympathetic nerves could cause different effects on body tissues, depending on the conditions of stimulation (such as the presence or absence of some toxin). Over the first half of the 20th century, two main proposals were made to explain this phenomenon:
There were (at least) two different types of neurotransmitters released from sympathetic nerve terminals, or
There were (at least) two different types of detector mechanisms for a single neurotransmitter.
The first hypothesis was championed by Walter Bradford Cannon and Arturo Rosenblueth, who interpreted many experiments to then propose that there were two neurotransmitter substances, which they called sympathin E (for 'excitation') and sympathin I (for 'inhibition').
The second hypothesis found support from 1906 to 1913, when Henry Hallett Dale explored the effects of adrenaline (which he called adrenine at the time), injected into animals, on blood pressure. Usually, adrenaline would increase the blood pressure of these animals. However, if the animal had been exposed to ergotoxine, the blood pressure decreased. He proposed that the ergotoxine caused "selective paralysis of motor myoneural junctions" (i.e. those tending to increase the blood pressure), hence revealing that under normal conditions there was a "mixed response", including a mechanism that would relax smooth muscle and cause a fall in blood pressure. This "mixed response", with the same compound causing either contraction or relaxation, was conceived of as the response of different types of junctions to the same compound.
This line of experiments was developed by several groups, including DT Marsh and colleagues, who in February 1948 showed that a series of compounds structurally related to adrenaline could also show either contracting or relaxing effects, depending on whether or not other toxins were present. This again supported the argument that the muscles had two different mechanisms by which they could respond to the same compound. In June of that year, Raymond Ahlquist, Professor of Pharmacology at Medical College of Georgia, published a paper concerning adrenergic nervous transmission. In it, he explicitly named the different responses as due to what he called α receptors and β receptors, and that the only sympathetic transmitter was adrenaline. While the latter conclusion was subsequently shown to be incorrect (it is now known to be noradrenaline), his receptor nomenclature and concept of two different types of detector mechanisms for a single neurotransmitter remains. In 1954, he was able to incorporate his findings in a textbook, Drill's Pharmacology in Medicine, and thereby promulgate the role played by α and β receptor sites in the adrenaline/noradrenaline cellular mechanism. These concepts would revolutionise advances in pharmacotherapeutic research, allowing the selective design of specific molecules to target medical ailments rather than rely upon traditional research into the efficacy of pre-existing herbal medicines.
Categories
The mechanism of adrenoreceptors. Adrenaline or noradrenaline are receptor ligands to either α1, α2 or β-adrenoreceptors. The α1 couples to Gq, which results in increased intracellular Ca2+ and subsequent smooth muscle contraction. The α2, on the other hand, couples to Gi, which causes a decrease in neurotransmitter release, as well as a decrease of cAMP activity resulting in smooth muscle contraction. The β receptor couples to Gs and increases intracellular cAMP activity, resulting in e.g. heart muscle contraction, smooth muscle relaxation and glycogenolysis.
There are two main groups of adrenoreceptors, α and β, with 9 subtypes in total:
α receptors are subdivided into α1 (a Gq coupled receptor) and α2 (a Gi coupled receptor)
α1 has 3 subtypes: α1A, α1B and α1D
α2 has 3 subtypes: α2A, α2B and α2C
β receptors are subdivided into β1, β2 and β3. All 3 are coupled to Gs proteins, but β2 and β3 also couple to Gi
Both Gi and Gs are linked to adenylyl cyclase. Agonist binding to a Gs-coupled receptor thus causes a rise in the intracellular concentration of the second messenger cAMP, whereas Gi inhibits cAMP production. Downstream effectors of cAMP include cAMP-dependent protein kinase (PKA), which mediates some of the intracellular events following hormone binding.
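The subtype-to-coupling scheme above is, in effect, a small lookup table, and can be encoded directly. The following Python sketch is purely illustrative; the dictionary layout and function name are hypothetical conveniences, not a standard pharmacology API:

<syntaxhighlight lang="python">
# Illustrative mapping of adrenoreceptor classes to their G-protein
# coupling and canonical second-messenger effect (per the text above).
RECEPTOR_COUPLING = {
    "alpha1": {"g_protein": "Gq", "effect": "increased intracellular Ca2+ (PLC/IP3)"},
    "alpha2": {"g_protein": "Gi", "effect": "decreased cAMP, reduced transmitter release"},
    "beta1":  {"g_protein": "Gs", "effect": "increased cAMP (PKA activation)"},
    "beta2":  {"g_protein": "Gs (also Gi)", "effect": "increased cAMP; smooth muscle relaxation"},
    "beta3":  {"g_protein": "Gs (also Gi)", "effect": "increased cAMP; lipolysis"},
}

def second_messenger_response(subtype: str) -> str:
    """Return the canonical coupling for a receptor subtype."""
    info = RECEPTOR_COUPLING[subtype]
    return f"{subtype}: couples to {info['g_protein']} -> {info['effect']}"

if __name__ == "__main__":
    for receptor in RECEPTOR_COUPLING:
        print(second_messenger_response(receptor))
</syntaxhighlight>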
Roles in circulation
Epinephrine (adrenaline) reacts with both α- and β-adrenoreceptors, causing vasoconstriction and vasodilation, respectively. Although α receptors are less sensitive to epinephrine, when activated at pharmacologic doses, they override the vasodilation mediated by β-adrenoreceptors because there are more peripheral α1 receptors than β-adrenoreceptors. The result is that high levels of circulating epinephrine cause vasoconstriction. However, the opposite is true in the coronary arteries, where β2 response is greater than that of α1, resulting in overall dilation with increased sympathetic stimulation. At lower levels of circulating epinephrine (physiologic epinephrine secretion), β-adrenoreceptor stimulation dominates since epinephrine has a higher affinity for the β2 adrenoreceptor than the α1 adrenoreceptor, producing vasodilation followed by decrease of peripheral vascular resistance.
Subtypes
Smooth muscle behavior is variable depending on anatomical location. Smooth muscle contraction/relaxation is generalized below. One important note is the differential effects of increased cAMP in smooth muscle compared to cardiac muscle. Increased cAMP will promote relaxation in smooth muscle, while promoting increased contractility and pulse rate in cardiac muscle.
α receptors
α receptors have actions in common, but also individual effects. Common (or still receptor unspecified) actions include:
vasoconstriction
decreased flow of smooth muscle in gastrointestinal tract
Subtype unspecific α agonists (see actions above) can be used to treat rhinitis (they decrease mucus secretion). Subtype unspecific α antagonists can be used to treat pheochromocytoma (they decrease vasoconstriction caused by norepinephrine).
α1 receptor
α1-adrenoreceptors are members of the Gq protein-coupled receptor superfamily. Upon activation, a heterotrimeric G protein, Gq, activates phospholipase C (PLC). The PLC cleaves phosphatidylinositol 4,5-bisphosphate (PIP2), which in turn causes an increase in inositol trisphosphate (IP3) and diacylglycerol (DAG). The former interacts with calcium channels of the endoplasmic and sarcoplasmic reticulum, thus changing the calcium content of the cell. This triggers all other effects, including a prominent slow after-depolarizing current (sADP) in neurons.
Actions of the α1 receptor mainly involve smooth muscle contraction. It causes vasoconstriction in many blood vessels, including those of the skin, gastrointestinal system, kidney (renal artery) and brain. Other areas of smooth muscle contraction are:
ureter
vas deferens
hair (arrector pili muscles)
uterus (when pregnant)
urethral sphincter
urothelium and lamina propria
bronchioles (although minor relative to the relaxing effect of β2 receptor on bronchioles)
blood vessels of the ciliary body (stimulation of the dilator pupillae muscle of the iris causes mydriasis)
Actions also include glycogenolysis and gluconeogenesis from adipose tissue and liver; secretion from sweat glands and Na+ reabsorption from kidney.
α1 antagonists can be used to treat:
hypertension – decrease blood pressure by decreasing peripheral vasoconstriction
benign prostate hyperplasia – relax smooth muscles within the prostate thus easing urination
α2 receptor
The α2 receptor couples to the Gi/o protein. It is a presynaptic receptor, causing negative feedback on, for example, norepinephrine (NE). When NE is released into the synapse, it feeds back on the α2 receptor, causing less NE release from the presynaptic neuron. This decreases the effect of NE. There are also α2 receptors on the nerve terminal membrane of the post-synaptic adrenergic neuron.
Actions of the α2 receptor include:
decreased insulin release from the pancreas
increased glucagon release from the pancreas
contraction of sphincters of the GI-tract
negative feedback in the neuronal synapses - presynaptic inhibition of norepinephrine release in CNS
increased platelet aggregation
decreases peripheral vascular resistance
α2 agonists (see actions above) can be used to treat:
hypertension – decrease blood pressure-raising actions of the sympathetic nervous system
α2 antagonists can be used to treat:
impotence – relax penile smooth muscles and ease blood flow
depression – enhance mood by increasing norepinephrine secretion
β receptors
Subtype unspecific β agonists can be used to treat:
heart failure – increase cardiac output acutely in an emergency
circulatory shock – increase cardiac output thus redistributing blood volume
anaphylaxis – bronchodilation
Subtype unspecific β antagonists (beta blockers) can be used to treat:
heart arrhythmia – decrease the output of sinus node thus stabilizing heart function
coronary artery disease – reduce heart rate and hence increase oxygen supply
heart failure – prevent sudden death related to this condition, which is often caused by ischemias or arrhythmias
hyperthyroidism – reduce peripheral sympathetic hyper-responsiveness
migraine – reduce number of attacks
stage fright – reduce tachycardia and tremor
glaucoma – reduce intraocular pressure
β1 receptor
Actions of the β1 receptor include:
increase cardiac output by increasing heart rate (positive chronotropic effect), conduction velocity (positive dromotropic effect), stroke volume (by enhancing contractility – positive inotropic effect), and rate of relaxation of the myocardium, by increasing the calcium ion sequestration rate (positive lusitropic effect); the faster relaxation helps the heart sustain higher rates
increase renin secretion from juxtaglomerular cells of the kidney
increase ghrelin secretion from the stomach
β2 receptor
Actions of the β2 receptor include:
smooth muscle relaxation throughout many areas of the body, e.g. in bronchi (bronchodilation, see salbutamol), GI tract (decreased motility), veins (vasodilation of blood vessels), especially those to skeletal muscle (although this vasodilator effect of norepinephrine is relatively minor and overwhelmed by α adrenoceptor-mediated vasoconstriction)
lipolysis in adipose tissue
anabolism in skeletal muscle
uptake of potassium into cells
relax non-pregnant uterus
relax detrusor urinae muscle of bladder wall
dilate arteries to skeletal muscle
glycogenolysis and gluconeogenesis
stimulates insulin secretion
contract sphincters of GI tract
thickened secretions from salivary glands
inhibit histamine-release from mast cells
involved in brain - immune communication
β2 agonists (see actions above) can be used to treat:
asthma and COPD – reduce bronchial smooth muscle contraction thus dilating the bronchus
hyperkalemia – increase cellular potassium intake
preterm birth – reduce uterine smooth muscle contractions
β3 receptor
Actions of the β3 receptor include:
increase of lipolysis in adipose tissue
relax the bladder
β3 agonists could theoretically be used as weight-loss drugs, but are limited by the side effect of tremors.
See also
Beta adrenergic receptor kinase
Beta adrenergic receptor kinase-2
Acetylcholine receptor (Cholinergic receptor)
Nicotinic acetylcholine receptor
Muscarinic acetylcholine receptor
Notes
References
Further reading
External links
Alpha receptors illustrated
The Adrenergic Receptors
Adrenoceptors - IUPHAR/BPS guide to pharmacology
Basic Neurochemistry: α- and β-Adrenergic Receptors
Theory of receptor activation
Desensitization of β1 receptors
G protein-coupled receptors | Adrenergic receptor | [
"Chemistry"
] | 2,855 | [
"G protein-coupled receptors",
"Signal transduction"
] |
241,026 | https://en.wikipedia.org/wiki/Up%20quark | The up quark or u quark (symbol: u) is the lightest of all quarks, a type of elementary particle, and a significant constituent of matter. It, along with the down quark, forms the neutrons (one up quark, two down quarks) and protons (two up quarks, one down quark) of atomic nuclei. It is part of the first generation of matter, has an electric charge of + e and a bare mass of . Like all quarks, the up quark is an elementary fermion with spin , and experiences all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. The antiparticle of the up quark is the up antiquark (sometimes called antiup quark or simply antiup), which differs from it only in that some of its properties, such as charge have equal magnitude but opposite sign.
Its existence (along with that of the down and strange quarks) was postulated in 1964 by Murray Gell-Mann and George Zweig to explain the Eightfold Way classification scheme of hadrons. The up quark was first observed by experiments at the Stanford Linear Accelerator Center in 1968.
History
In the beginnings of particle physics (first half of the 20th century), hadrons such as protons, neutrons and pions were thought to be elementary particles. However, as new hadrons were discovered, the 'particle zoo' grew from a few particles in the early 1930s and 1940s to several dozens of them in the 1950s. The relationships between each of them were unclear until 1961, when Murray Gell-Mann and Yuval Ne'eman (independently of each other) proposed a hadron classification scheme called the Eightfold Way, or in more technical terms, SU(3) flavor symmetry.
This classification scheme organized the hadrons into isospin multiplets, but the physical basis behind it was still unclear. In 1964, Gell-Mann and George Zweig (independently of each other) proposed the quark model, then consisting only of up, down, and strange quarks. However, while the quark model explained the Eightfold Way, no direct evidence of the existence of quarks was found until 1968 at the Stanford Linear Accelerator Center. Deep inelastic scattering experiments indicated that protons had substructure, and that protons made of three more-fundamental particles explained the data (thus confirming the quark model).
At first people were reluctant to describe the three bodies as quarks, instead preferring Richard Feynman's parton description, but over time the quark theory became accepted (see November Revolution).
Mass
Despite being extremely common, the bare mass of the up quark is not well determined, but probably lies between 1.8 and 2.8 MeV/c². Lattice QCD calculations give a more precise value of about 2.2 MeV/c².
When found in mesons (particles made of one quark and one antiquark) or baryons (particles made of three quarks), the 'effective mass' (or 'dressed' mass) of quarks becomes greater because of the binding energy caused by the gluon field between each quark (see mass–energy equivalence). The bare mass of the up quark is so small that it cannot be straightforwardly calculated, because relativistic effects have to be taken into account.
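The dominance of binding energy over bare mass can be illustrated with rough numbers. In the Python sketch below, the down-quark mass (~4.7 MeV/c²) is an approximate value supplied here for illustration, not a figure from this article:

<syntaxhighlight lang="python">
# Rough comparison of the proton's mass with the summed bare masses of
# its valence quarks (uud). Values in MeV/c^2 are approximate.
M_UP = 2.2        # bare up-quark mass (approximate, per the text above)
M_DOWN = 4.7      # bare down-quark mass (assumed here for illustration)
M_PROTON = 938.3  # proton mass

bare_total = 2 * M_UP + M_DOWN   # two up quarks + one down quark
binding_fraction = 1 - bare_total / M_PROTON

print(f"sum of bare valence-quark masses: {bare_total:.1f} MeV/c^2")
print(f"fraction of proton mass from QCD binding energy: {binding_fraction:.1%}")
# Roughly 99% of the proton's mass comes from the gluon field,
# not from the bare quark masses (mass-energy equivalence).
</syntaxhighlight>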
See also
Down quark
Isospin
Quark model
Quantum Mechanics
References
Further reading
Quarks
Elementary particles | Up quark | [
"Physics"
] | 723 | [
"Elementary particles",
"Subatomic particles",
"Matter"
] |
241,029 | https://en.wikipedia.org/wiki/Top%20quark | The top quark, sometimes also referred to as the truth quark, (symbol: t) is the most massive of all observed elementary particles. It derives its mass from its coupling to the Higgs field. This coupling is very close to unity; in the Standard Model of particle physics, it is the largest (strongest) coupling at the scale of the weak interactions and above. The top quark was discovered in 1995 by the CDF and DØ experiments at Fermilab.
Like all other quarks, the top quark is a fermion with spin-1/2 and participates in all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. It has an electric charge of +2/3 e. It has a mass of about 173 GeV/c², which is close to the mass of a rhenium atom. The antiparticle of the top quark is the top antiquark (symbol: t̄, sometimes called antitop quark or simply antitop), which differs from it only in that some of its properties have equal magnitude but opposite sign.
The top quark interacts with gluons of the strong interaction and is typically produced in hadron colliders via this interaction. However, once produced, the top (or antitop) can decay only through the weak force. It decays to a W boson and either a bottom quark (most frequently), a strange quark, or, on the rarest of occasions, a down quark.
The Standard Model determines the top quark's mean lifetime to be roughly 5×10⁻²⁵ s. This is about a twentieth of the timescale for strong interactions, and therefore it does not form hadrons, giving physicists a unique opportunity to study a "bare" quark (all other quarks hadronize, meaning that they combine with other quarks to form hadrons and can only be observed as such).
Because the top quark is so massive, its properties allowed indirect determination of the mass of the Higgs boson (see below). As such, the top quark's properties are extensively studied as a means to discriminate between competing theories of new physics beyond the Standard Model. The top quark is the only quark that has been directly observed due to its decay time being shorter than the hadronization time.
History
In 1973, Makoto Kobayashi and Toshihide Maskawa predicted the existence of a third generation of quarks to explain observed CP violations in kaon decay. The names top and bottom were introduced by Haim Harari in 1975, to match the names of the first generation of quarks (up and down) reflecting the fact that the two were the "up" and "down" component of a weak isospin doublet.
The proposal of Kobayashi and Maskawa heavily relied on the GIM mechanism put forward by Sheldon Glashow, John Iliopoulos and Luciano Maiani, which predicted the existence of the then still unobserved charm quark. (Direct evidence for the existence of quarks, including the other second generation quark, the strange quark, was obtained in 1968; strange particles were discovered back in 1947.) When in November 1974 teams at Brookhaven National Laboratory (BNL) and the Stanford Linear Accelerator Center (SLAC) simultaneously announced the discovery of the J/ψ meson, it was soon after identified as a bound state of the missing charm quark with its antiquark. This discovery allowed the GIM mechanism to become part of the Standard Model. With the acceptance of the GIM mechanism, Kobayashi and Maskawa's prediction also gained in credibility. Their case was further strengthened by the discovery of the tau by Martin Lewis Perl's team at SLAC between 1974 and 1978. The tau announced a third generation of leptons, breaking the new symmetry between leptons and quarks introduced by the GIM mechanism. Restoration of the symmetry implied the existence of a fifth and sixth quark.
It was in fact not long until a fifth quark, the bottom, was discovered by the E288 experiment team, led by Leon Lederman at Fermilab in 1977. This strongly suggested that there must also be a sixth quark, the top, to complete the pair. It was known that this quark would be heavier than the bottom, requiring more energy to create in particle collisions, but the general expectation was that the sixth quark would soon be found. However, it took another 18 years before the existence of the top was confirmed.
Early searches for the top quark at SLAC and DESY (in Hamburg) came up empty-handed. When, in the early 1980s, the Super Proton Synchrotron (SPS) at CERN discovered the W boson and the Z boson, it was again felt that the discovery of the top was imminent. As the SPS gained competition from the Tevatron at Fermilab there was still no sign of the missing particle, and it was announced by the group at CERN that the top mass must be at least . After a race between CERN and Fermilab to discover the top, the accelerator at CERN reached its limits without creating a single top, pushing the lower bound on its mass up to .
The Tevatron was (until the start of LHC operation at CERN in 2009) the only hadron collider powerful enough to produce top quarks. In order to be able to confirm a future discovery, a second detector, the DØ detector, was added to the complex (in addition to the Collider Detector at Fermilab (CDF) already present). In October 1992, the two groups found their first hint of the top, with a single creation event that appeared to contain the top. In the following years, more evidence was collected and on 22 April 1994, the CDF group submitted their article presenting tentative evidence for the existence of a top quark with a mass of about . In the meantime, DØ had found no more evidence than the suggestive event in 1992. A year later, on 2 March 1995, after having gathered more evidence and reanalyzed the DØ data (which had been searched for a much lighter top), the two groups jointly reported the discovery of the top at a mass of .
In the years leading up to the top-quark discovery, it was realized that certain precision measurements of the electroweak vector boson masses and couplings are very sensitive to the value of the top-quark mass. These effects become much larger for higher values of the top mass and therefore could indirectly reveal the top quark even if it could not be directly detected in any experiment at the time. The largest effect from the top-quark mass was on the T parameter, and by 1994 the precision of these indirect measurements had led to a prediction of the top-quark mass to be between 145 and 185 GeV/c². It is the development of techniques that ultimately allowed such precision calculations that led to Gerardus 't Hooft and Martinus Veltman winning the Nobel Prize in physics in 1999.
Properties
At the final Tevatron energy of 1.96 TeV, top–antitop pairs were produced with a cross section of about 7 picobarns (pb). The Standard Model prediction (at next-to-leading order with m_t = 175 GeV/c²) is 6.7–7.5 pb.
The W bosons from top quark decays carry polarization from the parent particle, hence pose themselves as a unique probe to top polarization.
In the Standard Model, the top quark is predicted to have a spin quantum number of 1/2 and electric charge +2/3 e. A first measurement of the top quark charge has been published, resulting in some confidence that the top quark charge is indeed +2/3 e.
Production
Because top quarks are very massive, large amounts of energy are needed to create one. The only way to achieve such high energies is through high-energy collisions. These occur naturally in the Earth's upper atmosphere as cosmic rays collide with particles in the air, or can be created in a particle accelerator. In 2011, after the Tevatron ceased operations, the Large Hadron Collider at CERN became the only accelerator that generates a beam of sufficient energy to produce top quarks, with a center-of-mass energy of 7 TeV. There are multiple processes that can lead to the production of top quarks, but they can be conceptually divided in two categories: top-pair production, and single-top production.
Top-quark pairs
The most common is production of a top–antitop pair via strong interactions. In a collision, a highly energetic gluon is created, which subsequently decays into a top and antitop. This process was responsible for the majority of the top events at Tevatron and was the process observed when the top was first discovered in 1995. It is also possible to produce pairs of top–antitop through the decay of an intermediate photon or Z-boson. However, these processes are predicted to be much rarer and have a virtually identical experimental signature in a hadron collider like Tevatron.
Single top quarks
The production of single top quarks via weak interaction is a distinctly different process. This can happen in several ways (called channels): Either an intermediate W-boson decays into a top and antibottom quark ("s-channel") or a bottom quark (probably created in a pair through the decay of a gluon) transforms to a top quark by exchanging a W boson with an up or down quark ("t-channel"). A single top quark can also be produced in association with a W boson, requiring an initial-state bottom quark ("tW-channel"). The first evidence for these processes was published by the DØ collaboration in December 2006, and in March 2009 the CDF and DØ collaborations released twin articles with the definitive observation of these processes. The main significance of measuring these production processes is that their frequency is directly proportional to |V_tb|², the squared magnitude of the relevant component of the CKM matrix.
Decay
The only known way the top quark can decay is through the weak interaction, producing a W boson and a bottom quark.
Because of its enormous mass, the top quark is extremely short-lived, with a predicted lifetime of only about 5×10⁻²⁵ s. As a result, top quarks do not have time before they decay to form hadrons as other quarks do.
The absence of a hadron surrounding the top quark provides physicists with the unique opportunity to study the behavior of a "bare" quark.
In particular, it is possible to directly determine the branching ratio:
R = Γ(t → Wb) / Γ(t → Wq) = |V_tb|² / (|V_tb|² + |V_ts|² + |V_td|²),  q = b, s, d.
The best current determination of this ratio is close to unity. Since this ratio is equal to |V_tb|² according to the Standard Model (assuming the CKM matrix is unitary), this gives another way of determining the CKM element V_tb, or in combination with the determination of |V_tb| from single top production provides tests for the assumption that the CKM matrix is unitary.
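Numerically, extracting |V_tb| from such a measurement is a one-liner under the unitarity assumption described above, since R then reduces to |V_tb|². The Python sketch below uses a hypothetical example value of R, not a quoted measurement:

<syntaxhighlight lang="python">
import math

def vtb_from_ratio(r: float) -> float:
    """Extract |Vtb| from R = |Vtb|^2 / (|Vtb|^2 + |Vts|^2 + |Vtd|^2),
    assuming CKM unitarity so that the denominator equals 1."""
    if not 0.0 <= r <= 1.0:
        raise ValueError("a branching ratio must lie in [0, 1]")
    return math.sqrt(r)

# Hypothetical measurement used purely for illustration:
r_measured = 0.96
print(f"|Vtb| = {vtb_from_ratio(r_measured):.3f}")
</syntaxhighlight>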
The Standard Model also allows more exotic decays, but only at one loop level, meaning that they are extremely rare. In particular, it is conceivable that a top quark might decay into another up-type quark (an up or a charm) by emitting a photon or a Z-boson. However, searches for these exotic decay modes have produced no evidence that they occur, in accordance with expectations of the Standard Model. The branching ratios for these decays have been determined to be less than 1.8 in 10000 for photonic decay and less than 5 in 10000 for Z boson decay at 95% confidence.
Mass and coupling to the Higgs boson
The Standard Model generates fermion masses through their couplings to the Higgs boson. This Higgs boson acts as a field that fills space. Fermions interact with this field in proportion to their individual coupling constants y_i, which generate their masses. A low-mass particle such as the electron has a minuscule coupling (y_e ≈ 3×10⁻⁶), while the top quark has the largest coupling to the Higgs (y_t ≈ 1).
In the Standard Model, all of the quark and lepton Higgs–Yukawa couplings are small compared to the top-quark Yukawa coupling. This hierarchy in the fermion masses remains a profound and open problem in theoretical physics. Higgs–Yukawa couplings are not fixed constants of nature, as their values vary slowly with the energy scale (distance scale) at which they are measured. These dynamics of Higgs–Yukawa couplings, called "running coupling constants", are due to a quantum effect called the renormalization group.
The Higgs–Yukawa couplings of the up, down, charm, strange and bottom quarks are hypothesized to have small values at the extremely high energy scale of grand unification, about 10^15 GeV. They increase in value at lower energy scales, at which the quark masses are generated by the Higgs. The slight growth is due to corrections from the QCD coupling. The corrections from the Yukawa couplings are negligible for the lower-mass quarks.
One of the prevailing views in particle physics is that the size of the top-quark Higgs–Yukawa coupling is determined by a unique nonlinear property of the renormalization group equation that describes the running of the large Higgs–Yukawa coupling of the top quark. If a quark Higgs–Yukawa coupling has a large value at very high energies, its Yukawa corrections will evolve downward in mass scale and cancel against the QCD corrections. This is known as a (quasi-) infrared fixed point, which was first predicted by B. Pendleton and G. G. Ross, and by Christopher T. Hill. No matter what the initial starting value of the coupling is, if it is sufficiently large it will reach this fixed-point value. The corresponding quark mass is then predicted.
The top-quark Yukawa coupling lies very near the infrared fixed point of the Standard Model. The renormalization group equation is:
μ dy/dμ ≈ (y / 16π²) [ (9/2) y² − 8 g₃² − (9/4) g₂² − (17/12) g₁² ],
where g₃ is the color gauge coupling, g₂ is the weak isospin gauge coupling, and g₁ is the weak hypercharge gauge coupling. This equation describes how the Yukawa coupling y changes with the energy scale μ. Solutions to this equation for large initial values y cause the right-hand side of the equation to quickly approach zero, locking y to the QCD coupling g₃.
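The funneling toward the fixed point can be illustrated numerically. The Python sketch below keeps only the QCD term of the equation above, adds the standard one-loop running of g₃ with six flavors, and Euler-integrates downward from a grand-unification-like scale; the initial values and the truncation to the QCD term are simplifying assumptions made for illustration:

<syntaxhighlight lang="python">
import numpy as np

def run_down(y0, g3_0=0.56, mu_high=1e15, mu_low=173.0, steps=20000):
    """Euler-integrate the one-loop RG equations from mu_high down to mu_low.

    dy/dt  = y * (9/2 * y**2 - 8 * g3**2) / (16 * pi**2)   (QCD term only)
    dg3/dt = -7 * g3**3 / (16 * pi**2),   with t = ln(mu).
    """
    t_high, t_low = np.log(mu_high), np.log(mu_low)
    dt = (t_low - t_high) / steps          # negative: running downward
    y, g3 = y0, g3_0
    for _ in range(steps):
        y += dt * y * (4.5 * y**2 - 8.0 * g3**2) / (16 * np.pi**2)
        g3 += dt * (-7.0) * g3**3 / (16 * np.pi**2)
    return y

for y0 in (0.5, 1.0, 2.0, 5.0):
    print(f"y(GUT) = {y0:4.1f}  ->  y(m_t) ~ {run_down(y0):.3f}")
# Widely different high-scale starting values are funneled toward a
# common low-scale value: the quasi-infrared fixed point described above.
</syntaxhighlight>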
The value of the top quark fixed point is fairly precisely determined in the Standard Model, leading to a top-quark mass of 220 GeV. This is about 25% larger than the observed top mass and may be hinting at new physics at higher energy scales.
The quasi-infrared fixed point subsequently became the basis of top quark condensation and topcolor theories of electroweak symmetry breaking, in which the Higgs boson is composed of a pair of top and antitop quarks. The predicted top-quark mass comes into improved agreement with the fixed point if there are additional Higgs scalars beyond the standard model and therefore it may be hinting at a rich spectroscopy of new Higgs fields at energy scales that can be probed with the LHC and its upgrades.
See also
CDF experiment
Quark model
Top quark condensate
Topcolor
Topness
Footnotes
References
Further reading
External links
Elementary particles
Quarks
Standard Model | Top quark | [
"Physics"
] | 3,157 | [
"Standard Model",
"Matter",
"Elementary particles",
"Particle physics",
"Subatomic particles"
] |
241,223 | https://en.wikipedia.org/wiki/Poisson%27s%20ratio | In materials science and solid mechanics, Poisson's ratio (symbol: (nu)) is a measure of the Poisson effect, the deformation (expansion or contraction) of a material in directions perpendicular to the specific direction of loading. The value of Poisson's ratio is the negative of the ratio of transverse strain to axial strain. For small values of these changes, is the amount of transversal elongation divided by the amount of axial compression. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. For soft materials, such as rubber, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is near 0.5. For open-cell polymer foams, Poisson's ratio is near zero, since the cells tend to collapse in compression. Many typical solids have Poisson's ratios in the range of 0.2 to 0.3. The ratio is named after the French mathematician and physicist Siméon Poisson.
Origin
Poisson's ratio is a measure of the Poisson effect, the phenomenon in which a material tends to expand in directions perpendicular to the direction of compression. Conversely, if the material is stretched rather than compressed, it usually tends to contract in the directions transverse to the direction of stretching. It is a common observation that when a rubber band is stretched, it becomes noticeably thinner. Again, the Poisson ratio will be the ratio of relative contraction to relative expansion and will have the same value as above. In certain rare cases, a material will actually shrink in the transverse direction when compressed (or expand when stretched), which will yield a negative value of the Poisson ratio.
The Poisson's ratio of a stable, isotropic, linear elastic material must be between −1.0 and +0.5 because of the requirement for Young's modulus, the shear modulus and bulk modulus to have positive values. Most materials have Poisson's ratio values ranging between 0.0 and 0.5. A perfectly incompressible isotropic material deformed elastically at small strains would have a Poisson's ratio of exactly 0.5. Most steels and rigid polymers when used within their design limits (before yield) exhibit values of about 0.3, increasing to 0.5 for post-yield deformation, which occurs largely at constant volume. Rubber has a Poisson ratio of nearly 0.5. Cork's Poisson ratio is close to 0, showing very little lateral expansion when compressed, and glass is between 0.18 and 0.30. Some materials, e.g. some polymer foams, origami folds, and certain cells, can exhibit negative Poisson's ratio, and are referred to as auxetic materials. If these auxetic materials are stretched in one direction, they become thicker in the perpendicular direction. In contrast, some anisotropic materials, such as carbon nanotubes, zigzag-based folded sheet materials, and honeycomb auxetic metamaterials, to name a few, can exhibit one or more Poisson's ratios above 0.5 in certain directions.
Assuming that the material is stretched or compressed in only one direction (the x axis in the diagram below):
ν = −dε_trans/dε_axial = −dε_y/dε_x = −dε_z/dε_x
where
ν is the resulting Poisson's ratio,
ε_trans is transverse strain,
ε_axial is axial strain,
and positive strain indicates extension and negative strain indicates contraction.
Poisson's ratio from geometry changes
Length change
For a cube stretched in the x-direction (see Figure 1) with a length increase of ΔL in the x-direction, and a length decrease of ΔL′ in the y- and z-directions, the infinitesimal diagonal strains are given by
dε_x = dx/x,  dε_y = dy/y,  dε_z = dz/z.
If Poisson's ratio is constant through deformation, integrating these expressions and using the definition of Poisson's ratio gives
−ν ∫ from L to L+ΔL of dx/x = ∫ from L to L−ΔL′ of dy/y.
Solving and exponentiating, the relationship between ΔL and ΔL′ is then
(1 + ΔL/L)^(−ν) = 1 − ΔL′/L.
For very small values of ΔL and ΔL′, the first-order approximation yields:
ν ≈ ΔL′/ΔL.
Volumetric change
The relative change of volume ΔV/V of a cube due to the stretch of the material can now be calculated. Since V = L³ and
V + ΔV = (L + ΔL)(L − ΔL′)²,
one can derive
ΔV/V = (1 + ΔL/L)(1 − ΔL′/L)² − 1.
Using the above derived relationship between ΔL and ΔL′:
ΔV/V = (1 + ΔL/L)^(1−2ν) − 1,
and for very small values of ΔL and ΔL′, the first-order approximation yields:
ΔV/V ≈ (1 − 2ν) ΔL/L.
For isotropic materials we can use Lamé's relation
ν = 1/2 − E/(6K),
where K is the bulk modulus and E is Young's modulus.
Width change
If a rod with diameter (or width, or thickness) d and length L is subject to tension so that its length will change by ΔL, then its diameter d will change by:
Δd = −d ν ΔL/L.
The above formula is true only in the case of small deformations; if deformations are large then the following (more precise) formula can be used:
Δd = d [ (1 + ΔL/L)^(−ν) − 1 ]
where
d is original diameter,
Δd is rod diameter change,
ν is Poisson's ratio,
L is original length, before stretch,
ΔL is the change of length.
The value Δd is negative for stretching, because the diameter decreases with increase of length.
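The two formulas above are easy to check numerically. A short Python sketch with illustrative values (a steel-like ν of 0.3; the function name is ad hoc):

<syntaxhighlight lang="python">
def diameter_change(d, L, dL, nu):
    """Change in diameter of a rod stretched by dL, per the formulas above.

    Returns (small-strain approximation, finite-strain value).
    """
    approx = -d * nu * dL / L                    # valid for small strains
    exact = d * ((1.0 + dL / L) ** (-nu) - 1.0)  # the more precise formula
    return approx, exact

# Example: a 10 mm rod, 1 m long, stretched by 1 mm (all lengths in mm).
print(diameter_change(d=10.0, L=1000.0, dL=1.0, nu=0.3))
# -> (-0.003, -0.002995...): both give about 3 um of diameter shrinkage.
</syntaxhighlight>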
Characteristic materials
Isotropic
For a linear isotropic material subjected only to compressive (i.e. normal) forces, the deformation of a material in the direction of one axis will produce a deformation of the material along the other axes in three dimensions. Thus it is possible to generalize Hooke's Law (for compressive forces) into three dimensions:
ε_x = (1/E)[σ_x − ν(σ_y + σ_z)]
ε_y = (1/E)[σ_y − ν(σ_x + σ_z)]
ε_z = (1/E)[σ_z − ν(σ_x + σ_y)]
where:
ε_x, ε_y and ε_z are strain in the direction of the x, y and z axes,
σ_x, σ_y and σ_z are stress in the direction of the x, y and z axes,
E is Young's modulus (the same in all directions for isotropic materials),
ν is Poisson's ratio (the same in all directions for isotropic materials).
These equations can all be synthesized in the following:
ε_i = (1/E)[σ_i − ν(σ_j + σ_k)],  i, j, k ∈ {x, y, z}.
In the most general case shear stresses will act as well as normal stresses, and the full generalization of Hooke's law is given by:
ε_ij = (1/E)[(1 + ν) σ_ij − ν δ_ij Σ_k σ_kk]
where δ_ij is the Kronecker delta. The Einstein summation convention is usually adopted:
σ_kk ≡ Σ_k σ_kk = σ_xx + σ_yy + σ_zz
to write the equation simply as:
ε_ij = (1/E)[(1 + ν) σ_ij − ν δ_ij σ_kk].
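The synthesized tensor form translates directly into code. Below is a minimal NumPy sketch with steel-like illustrative constants (the function name is an assumption, not a library API):

<syntaxhighlight lang="python">
import numpy as np

def strain_from_stress(sigma, E, nu):
    """Generalized Hooke's law for an isotropic material:
    eps_ij = ((1 + nu) * sigma_ij - nu * delta_ij * trace(sigma)) / E
    """
    sigma = np.asarray(sigma, dtype=float)
    return ((1.0 + nu) * sigma - nu * np.trace(sigma) * np.eye(3)) / E

# Uniaxial stress along x with steel-like constants (stresses in MPa).
sigma = np.diag([100.0, 0.0, 0.0])
eps = strain_from_stress(sigma, E=200_000.0, nu=0.3)
print(np.round(eps, 6))
# Diagonal: [5.0e-4, -1.5e-4, -1.5e-4]; the transverse strains are
# -nu times the axial strain, recovering the definition of nu.
</syntaxhighlight>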
Anisotropic
For anisotropic materials, the Poisson ratio depends on the direction of extension and transverse deformation
Here ν(n, m) is Poisson's ratio, E(n) is Young's modulus in the direction n, n is a unit vector directed along the direction of extension, and m is a unit vector directed perpendicular to the direction of extension. Poisson's ratio has a different number of special directions depending on the type of anisotropy.
Orthotropic
Orthotropic materials have three mutually perpendicular planes of symmetry in their material properties. An example is wood, which is most stiff (and strong) along the grain, and less so in the other directions.
Then Hooke's law can be expressed in matrix form; written out in components, the compliance relations are
ε_xx = σ_xx/E_x − ν_yx σ_yy/E_y − ν_zx σ_zz/E_z
ε_yy = −ν_xy σ_xx/E_x + σ_yy/E_y − ν_zy σ_zz/E_z
ε_zz = −ν_xz σ_xx/E_x − ν_yz σ_yy/E_y + σ_zz/E_z
γ_yz = σ_yz/G_yz,  γ_zx = σ_zx/G_zx,  γ_xy = σ_xy/G_xy
where
E_i is the Young's modulus along axis i,
G_ij is the shear modulus in direction j on the plane whose normal is in direction i,
ν_ij is the Poisson ratio that corresponds to a contraction in direction j when an extension is applied in direction i.
The Poisson ratio of an orthotropic material is different in each direction (x, y and z). However, the symmetry of the stress and strain tensors implies that not all the six Poisson's ratios in the equation are independent. There are only nine independent material properties: three elastic moduli, three shear moduli, and three Poisson's ratios. The remaining three Poisson's ratios can be obtained from the relations
ν_yx/E_y = ν_xy/E_x,  ν_zx/E_z = ν_xz/E_x,  ν_zy/E_z = ν_yz/E_y.
From the above relations we can see that if E_x > E_y then ν_xy > ν_yx. The larger ratio (in this case ν_xy) is called the major Poisson ratio, while the smaller one (in this case ν_yx) is called the minor Poisson ratio. We can find similar relations between the other Poisson ratios.
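The reciprocal relations are easy to apply numerically. In the Python sketch below, the wood-like moduli are invented purely for illustration:

<syntaxhighlight lang="python">
def minor_poisson(nu_major, E_from, E_to):
    """Symmetry of the compliance matrix: nu_ij / E_i = nu_ji / E_j.

    Given nu_ij (extension along i, contraction along j), the reciprocal
    ratio is nu_ji = nu_ij * E_j / E_i.
    """
    return nu_major * E_to / E_from

# Illustrative wood-like moduli (GPa), stiff along the grain (x);
# the numbers only demonstrate that E_x > E_y implies nu_xy > nu_yx.
E_x, E_y = 11.0, 0.9
nu_xy = 0.38                      # major Poisson's ratio
nu_yx = minor_poisson(nu_xy, E_from=E_x, E_to=E_y)
print(f"nu_yx = {nu_yx:.4f}")     # ~0.031, much smaller than nu_xy
</syntaxhighlight>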
Transversely isotropic
Transversely isotropic materials have a plane of isotropy in which the elastic properties are isotropic. If we assume that this plane of isotropy is the y–z plane, then Hooke's law takes the same form as in the orthotropic case above, where we have used the y–z plane of isotropy to reduce the number of constants, that is,
E_y = E_z,  ν_xy = ν_xz,  ν_yx = ν_zx,  G_zx = G_xy.
The symmetry of the stress and strain tensors implies that
ν_xy/E_x = ν_yx/E_y,  ν_yz = ν_zy.
This leaves us with six independent constants E_x, E_y, G_xy, G_yz, ν_xy, ν_yz. However, transverse isotropy gives rise to a further constraint between G_yz and E_y, ν_yz, which is
G_yz = E_y / (2(1 + ν_yz)).
Therefore, there are five independent elastic material properties, two of which are Poisson's ratios. For the assumed plane of symmetry, the larger of ν_xy and ν_yx is the major Poisson ratio. The other major and minor Poisson ratios are equal.
Poisson's ratio values for different materials
{| class="wikitable sortable" style="border-collapse: collapse"
|- bgcolor="#cccccc"
! Material
! Poisson's ratio
|-
| rubber
| 0.4999
|-
| gold
| 0.42–0.44
|-
| saturated clay
| 0.40–0.49
|-
| magnesium
| 0.252–0.289
|-
| titanium
| 0.265–0.34
|-
| copper
| 0.33
|-
| aluminium alloy
| 0.32
|-
| clay
| 0.30–0.45
|-
| stainless steel
| 0.30–0.31
|-
| steel
| 0.27–0.30
|-
| cast iron
| 0.21–0.26
|-
| sand
| 0.20–0.455
|-
| concrete
| 0.1–0.2
|-
| glass
| 0.18–0.3
|-
| metallic glasses
| 0.276–0.409
|-
| foam
| 0.10–0.50
|-
| cork
| 0.0
|}
{| class="wikitable sortable" style="border-collapse: collapse"
|- bgcolor="#cccccc"
! Material !! Plane of symmetry !! colspan="6" | Directional Poisson's ratios ν_ij
|-
| Nomex honeycomb core
| , ribbon in direction
|0.49
|0.69
|0.01
|2.75
|3.88
|0.01
|-
|glass fiber epoxy resin
|
|0.29
|0.32
|0.06
|0.06
|0.32
|}
Negative Poisson's ratio materials
Some materials known as auxetic materials display a negative Poisson's ratio. When subjected to positive strain in a longitudinal axis, the transverse strain in the material will actually be positive (i.e. it would increase the cross sectional area). For these materials, it is usually due to uniquely oriented, hinged molecular bonds. In order for these bonds to stretch in the longitudinal direction, the hinges must ‘open’ in the transverse direction, effectively exhibiting a positive strain.
This can also be done in a structured way and lead to new aspects in material design as for mechanical metamaterials.
Studies have shown that certain solid wood types display a negative Poisson's ratio exclusively during a compression creep test. Initially, the compression creep test shows positive Poisson's ratios, which gradually decrease until reaching negative values. Consequently, this also shows that Poisson's ratio for wood is time-dependent during constant loading, meaning that the strains in the axial and transverse directions do not increase at the same rate.
Media with engineered microstructure may exhibit negative Poisson's ratio. In a simple case auxeticity is obtained removing material and creating a periodic porous media. Lattices can reach lower values of Poisson's ratio, which can be indefinitely close to the limiting value −1 in the isotropic case.
More than three hundred crystalline materials have negative Poisson's ratio. For example, Li, Na, K, Cu, Rb, Ag, Fe, Ni, Co, Cs, Au, Be, Ca, Zn, Sr, Sb, MoS2 and others.
Poisson function
At finite strains, the relationship between the transverse and axial strains ε_trans and ε_axial is typically not well described by the Poisson ratio. In fact, the Poisson ratio is often considered a function of the applied strain in the large-strain regime. In such instances, the Poisson ratio is replaced by the Poisson function, for which there are several competing definitions. Defining the transverse stretch λ_trans = ε_trans + 1 and axial stretch λ_axial = ε_axial + 1, where the transverse stretch is a function of the axial stretch, the most common are the Hencky, Biot, Green, and Almansi functions:
ν^Hencky = −ln(λ_trans) / ln(λ_axial)
ν^Biot = (1 − λ_trans) / (λ_axial − 1)
ν^Green = (1 − λ_trans²) / (λ_axial² − 1)
ν^Almansi = (λ_trans^(−2) − 1) / (1 − λ_axial^(−2))
Applications of Poisson's effect
One area in which Poisson's effect has a considerable influence is in pressurized pipe flow. When the air or liquid inside a pipe is highly pressurized it exerts a uniform force on the inside of the pipe, resulting in a hoop stress within the pipe material. Due to Poisson's effect, this hoop stress will cause the pipe to increase in diameter and slightly decrease in length. The decrease in length, in particular, can have a noticeable effect upon the pipe joints, as the effect will accumulate for each section of pipe joined in series. A restrained joint may be pulled apart or otherwise prone to failure.
Another area of application for Poisson's effect is in the realm of structural geology. Rocks, like most materials, are subject to Poisson's effect while under stress. In a geological timescale, excessive erosion or sedimentation of Earth's crust can either create or remove large vertical stresses upon the underlying rock. This rock will expand or contract in the vertical direction as a direct result of the applied stress, and it will also deform in the horizontal direction as a result of Poisson's effect. This change in strain in the horizontal direction can affect or form joints and dormant stresses in the rock.
Although cork was historically chosen to seal wine bottles for other reasons (including its inert nature, impermeability, flexibility, sealing ability, and resilience), cork's Poisson's ratio of zero provides another advantage. As the cork is inserted into the bottle, the upper part which is not yet inserted does not expand in diameter as it is compressed axially. The force needed to insert a cork into a bottle arises only from the friction between the cork and the bottle due to the radial compression of the cork. If the stopper were made of rubber, for example (with a Poisson's ratio of about +0.5), there would be a relatively large additional force required to overcome the radial expansion of the upper part of the rubber stopper.
Most car mechanics are aware that it is hard to pull a rubber hose (such as a coolant hose) off a metal pipe stub, as the tension of pulling causes the diameter of the hose to shrink, gripping the stub tightly. (This is the same effect as shown in a Chinese finger trap.) Hoses can instead more easily be pushed off stubs by using a wide flat blade.
See also
Linear elasticity
Hooke's law
Impulse excitation technique
Orthotropic material
Shear modulus
Young's modulus
Coefficient of thermal expansion
References
External links
Meaning of Poisson's ratio
Negative Poisson's ratio materials
More on negative Poisson's ratio materials (auxetic)
Elasticity (physics)
Mechanical quantities
Dimensionless numbers of physics
Materials science
Ratios
Solid mechanics | Poisson's ratio | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,072 | [
"Physical phenomena",
"Solid mechanics",
"Applied and interdisciplinary physics",
"Physical quantities",
"Mechanical quantities",
"Elasticity (physics)",
"Deformation (mechanics)",
"Quantity",
"Materials science",
"Arithmetic",
"Mechanics",
"nan",
"Physical properties",
"Ratios"
] |
241,431 | https://en.wikipedia.org/wiki/Chirp%20Z-transform | The chirp Z-transform (CZT) is a generalization of the discrete Fourier transform (DFT). While the DFT samples the Z plane at uniformly-spaced points along the unit circle, the chirp Z-transform samples along spiral arcs in the Z-plane, corresponding to straight lines in the S plane. The DFT, real DFT, and zoom DFT can be calculated as special cases of the CZT.
Specifically, the chirp Z transform calculates the Z transform at a finite number of points z_k along a logarithmic spiral contour, defined as:
z_k = A · W^(−k),  k = 0, 1, …, M − 1,
where A is the complex starting point, W is the complex ratio between points, and M is the number of points to calculate.
Like the DFT, the chirp Z-transform can be computed in O(n log n) operations where n = max(M, N).
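Before turning to the fast algorithm, the definition can be checked directly. The short, deliberately naive Python sketch below (O(N·M), using NumPy; the function name and test are illustrative) evaluates the CZT from its definition and recovers the ordinary DFT as the special case A = 1, W = e^(−2πi/N), M = N:

<syntaxhighlight lang="python">
import numpy as np

def czt_direct(x, M, W, A):
    """Direct O(N*M) evaluation of the chirp Z-transform:
    X_k = sum_n x[n] * z_k**(-n),  with  z_k = A * W**(-k).
    """
    x = np.asarray(x, dtype=complex)
    n = np.arange(len(x))
    return np.array([np.sum(x * (A * W ** (-k)) ** (-n)) for k in range(M)])

# The DFT is the special case A = 1, W = exp(-2j*pi/N), M = N:
x = np.random.rand(8)
N = len(x)
X = czt_direct(x, M=N, W=np.exp(-2j * np.pi / N), A=1.0)
print(np.allclose(X, np.fft.fft(x)))   # True
</syntaxhighlight>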
An O(N log N) algorithm for the inverse chirp Z-transform (ICZT) was described in 2003, and in 2019.
Bluestein's algorithm
Bluestein's algorithm expresses the CZT as a convolution and implements it efficiently using FFT/IFFT.
As the DFT is a special case of the CZT, this allows the efficient calculation of discrete Fourier transform (DFT) of arbitrary sizes, including prime sizes. (The other algorithm for FFTs of prime sizes, Rader's algorithm, also works by rewriting the DFT as a convolution.) It was conceived in 1968 by Leo Bluestein. Bluestein's algorithm can be used to compute more general transforms than the DFT, based on the (unilateral) z-transform (Rabiner et al., 1969).
Recall that the DFT is defined by the formula
X_k = Σ_{n=0}^{N−1} x_n e^(−(2πi/N)·nk),  k = 0, 1, …, N − 1.
If we replace the product nk in the exponent by the identity
nk = −(k − n)²/2 + n²/2 + k²/2,
we thus obtain:
X_k = e^(−(πi/N)k²) Σ_{n=0}^{N−1} [x_n e^(−(πi/N)n²)] e^((πi/N)(k−n)²),  k = 0, 1, …, N − 1.
This summation is precisely a convolution of the two sequences a_n and b_n defined by:
a_n = x_n e^(−(πi/N)n²),
b_n = e^((πi/N)n²),
with the output of the convolution multiplied by N phase factors b_k*. That is:
X_k = b_k* Σ_{n=0}^{N−1} a_n b_{k−n},  k = 0, 1, …, N − 1.
This convolution, in turn, can be performed with a pair of FFTs (plus the pre-computed FFT of complex chirp bn) via the convolution theorem. The key point is that these FFTs are not of the same length N: such a convolution can be computed exactly from FFTs only by zero-padding it to a length greater than or equal to 2N–1. In particular, one can pad to a power of two or some other highly composite size, for which the FFT can be efficiently performed by e.g. the Cooley–Tukey algorithm in O(N log N) time. Thus, Bluestein's algorithm provides an O(N log N) way to compute prime-size DFTs, albeit several times slower than the Cooley–Tukey algorithm for composite sizes.
The use of zero-padding for the convolution in Bluestein's algorithm deserves some additional comment. Suppose we zero-pad to a length M ≥ 2N–1. This means that an is extended to an array An of length M, where An = an for 0 ≤ n < N and An = 0 otherwise—the usual meaning of "zero-padding". However, because of the bk–n term in the convolution, both positive and negative values of n are required for bn (noting that b–n = bn). The periodic boundaries implied by the DFT of the zero-padded array mean that –n is equivalent to M–n. Thus, bn is extended to an array Bn of length M, where B0 = b0, Bn = BM–n = bn for 0 < n < N, and Bn = 0 otherwise. A and B are then FFTed, multiplied pointwise, and inverse FFTed to obtain the convolution of a and b, according to the usual convolution theorem.
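A minimal Python implementation of this construction is sketched below for the DFT special case. It follows the zero-padding and wrap-around layout just described; the helper name and the choice of padding to the next power of two are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

def bluestein_dft(x):
    """DFT of arbitrary length N via Bluestein's algorithm.

    Follows the construction above: a_n = x_n * conj(b_n),
    b_n = exp(i*pi*n^2/N), convolved via zero-padded FFTs of
    length M >= 2N - 1.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    b = np.exp(1j * np.pi * n**2 / N)        # complex chirp
    a = x * np.conj(b)
    M = 1 << (2 * N - 1).bit_length()        # power of two >= 2N - 1
    A = np.zeros(M, dtype=complex)
    A[:N] = a                                # zero-padded a_n
    B = np.zeros(M, dtype=complex)
    B[:N] = b                                # B_n = b_n for 0 <= n < N
    B[M - N + 1:] = b[1:][::-1]              # B_{M-n} = b_n (b_{-n} = b_n)
    conv = np.fft.ifft(np.fft.fft(A) * np.fft.fft(B))
    return np.conj(b) * conv[:N]             # multiply by phase factors b_k*

x = np.random.rand(7) + 1j * np.random.rand(7)      # prime length N = 7
print(np.allclose(bluestein_dft(x), np.fft.fft(x)))  # True
</syntaxhighlight>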
Let us also be more precise about what type of convolution is required in Bluestein's algorithm for the DFT. If the sequence bn were periodic in n with period N, then it would be a cyclic convolution of length N, and the zero-padding would be for computational convenience only. However, this is not generally the case:
b_{n+N} = e^((πi/N)(n+N)²) = b_n e^(πi(2n+N)) = (−1)^N b_n.
Therefore, for N even the convolution is cyclic, but in this case N is composite and one would normally use a more efficient FFT algorithm such as Cooley–Tukey. For N odd, however, then bn is antiperiodic and we technically have a negacyclic convolution of length N. Such distinctions disappear when one zero-pads an to a length of at least 2N−1 as described above, however. It is perhaps easiest, therefore, to think of it as a subset of the outputs of a simple linear convolution (i.e. no conceptual "extensions" of the data, periodic or otherwise).
z-transforms
Bluestein's algorithm can also be used to compute a more general transform based on the (unilateral) z-transform (Rabiner et al., 1969). In particular, it can compute any transform of the form:
X_k = Σ_{n=0}^{N−1} x_n z^(nk),  k = 0, 1, …, M − 1,
for an arbitrary complex number z and for differing numbers N and M of inputs and outputs. Given Bluestein's algorithm, such a transform can be used, for example, to obtain a more finely spaced interpolation of some portion of the spectrum (although the frequency resolution is still limited by the total sampling time, similar to a Zoom FFT), enhance arbitrary poles in transfer-function analyses, etc.
The algorithm was dubbed the chirp z-transform algorithm because, for the Fourier-transform case (|z| = 1), the sequence bn from above is a complex sinusoid of linearly increasing frequency, which is called a (linear) chirp in radar systems.
See also
Fractional Fourier transform
References
General
Leo I. Bluestein, "A linear filtering approach to the computation of the discrete Fourier transform," Northeast Electronics Research and Engineering Meeting Record 10, 218-219 (1968).
Lawrence R. Rabiner, Ronald W. Schafer, and Charles M. Rader, "The chirp z-transform algorithm and its application," Bell Syst. Tech. J. 48, 1249-1292 (1969). Also published in: Rabiner, Shafer, and Rader, "The chirp z-transform algorithm," IEEE Trans. Audio Electroacoustics 17 (2), 86–92 (1969).
D. H. Bailey and P. N. Swarztrauber, "The fractional Fourier transform and applications," SIAM Review 33, 389-404 (1991). (Note that this terminology for the z-transform is nonstandard: a fractional Fourier transform conventionally refers to an entirely different, continuous transform.)
Lawrence Rabiner, "The chirp z-transform algorithm—a lesson in serendipity," IEEE Signal Processing Magazine 21, 118-119 (March 2004). (Historical commentary.)
Vladimir Sukhoy and Alexander Stoytchev: "Generalizing the inverse FFT off the unit circle", Sci Rep 9 (Oct 2019). Open access.
Vladimir Sukhoy and Alexander Stoytchev: "Numerical error analysis of the ICZT algorithm for chirp contours on the unit circle", Sci Rep 10, 4852 (2020).
External links
A DSP algorithm for frequency analysis - the Chirp-Z Transform (CZT)
Solving a 50-year-old puzzle in signal processing, part two
Fourier analysis
Time–frequency analysis
Integral transforms | Chirp Z-transform | [
"Physics"
] | 1,604 | [
"Frequency-domain analysis",
"Spectrum (physical sciences)",
"Time–frequency analysis"
] |
241,510 | https://en.wikipedia.org/wiki/Phi%20phenomenon | The term phi phenomenon is used in a narrow sense for an apparent motion that is observed if two nearby optical stimuli are presented in alternation with a relatively high frequency. In contrast to beta movement, seen at lower frequencies, the stimuli themselves do not appear to move. Instead, a diffuse, amorphous shadowlike something seems to jump in front of the stimuli and occlude them temporarily. This shadow seems to have nearly the color of the background. Max Wertheimer first described this form of apparent movement in his habilitation thesis, published 1912, marking the birth of Gestalt psychology.
In a broader sense, particularly if the plural form phi phenomena is used, it applies also to all apparent movements that can be seen if two nearby optical stimuli are presented in alternation. This includes especially beta movement, which has been regarded as the illusion of motion in cinema and animation, although it can be argued that beta movement indicates long-range apparent motion rather than the short-range apparent motion seen in film. Actually, Wertheimer applied the term "φ-phenomenon" to all apparent movements described in his thesis when he introduced the term in 1912, the objectless movement he called "pure φ". Nevertheless, some commentators assert that he reserved the Greek letter φ for pure, objectless movement.
Experimental demonstration
Wertheimer's classic experiments used two light lines or curves repeatedly presented one after the other using a tachistoscope. If certain, relatively short, intervals between stimuli were used, and the distance between the stimuli was suitable, then his subjects (who happened to be his colleagues Wolfgang Köhler and Kurt Koffka) reported seeing pure "objectless" motion.
However, it turns out to be difficult to demonstrate phi stably and convincingly. To facilitate demonstrating the phenomenon, 21st-century psychologists designed a more vivid experimental arrangement using more than two stimuli. In this demonstration, called "Magni-phi," identical disks are arranged in a circle and, in a rapid sequence, one of the disks is hidden in clockwise or counter-clockwise order. This makes it easier to observe the kind of shadow-like movement Wertheimer discovered. The Magni-phi demonstration is robust to changes of parameters such as timing, size, intensity, number of disks, and viewing distance.
Furthermore, the phenomenon may be observed more reliably even with only two elements if a negative interstimulus interval (ISI) is used (that is, if the periods during which the two elements are visible overlap slightly). In that case, the viewer may see the two objects as stationary and unconsciously suppose that the reappearance of the stimulus on one side means that the object previously displayed in that position has reappeared, and not, as observed with beta movement, that the object from the opposite side has just moved to a new position. The crucial factor for this perception is the shortness of the discontinuity of the stimulus on each side. This is supported by the observation that two parameters have to be chosen properly to produce the pure phi phenomenon: first, the absolute duration of the gap on each side must not exceed about 150 ms, and second, the duration of the gap must not exceed 40% of the stimulus period.
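These two timing conditions are easy to encode. A tiny Python sketch follows; the function and parameter names are hypothetical, introduced only to restate the constraints reported above:

<syntaxhighlight lang="python">
def allows_pure_phi(gap_ms, period_ms):
    """Check the two timing conditions reported for pure phi:
    the per-side stimulus gap must not exceed about 150 ms and
    must not exceed 40% of the stimulus period.
    """
    return gap_ms <= 150.0 and gap_ms <= 0.40 * period_ms

print(allows_pure_phi(gap_ms=100, period_ms=400))  # True
print(allows_pure_phi(gap_ms=200, period_ms=600))  # False: gap > 150 ms
print(allows_pure_phi(gap_ms=120, period_ms=250))  # False: gap > 40% of period
</syntaxhighlight>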
History of research
In his 1912 thesis, Wertheimer introduced the symbol φ (phi) in the following way:
Besides the "optimal movement" (later called beta movement) and partial movements of both objects, Wertheimer described a phenomenon he called "pure movement." Concerning this, he summarized the descriptions of his test subjects as follows:
Wertheimer attributed much importance to these observations because, in his opinion, they proved that movement could be perceived directly and was not necessarily deduced from the separate sensation of two optical stimuli in slightly different places at slightly different times. This aspect of his thesis was an important trigger in launching Gestalt psychology.
Starting in the mid-20th century, confusion arose in the scientific literature as to exactly what the phi phenomenon was. One reason could be that the anglophone scientists had difficulties understanding Wertheimer's thesis, which was published in German. Wertheimer's writing style is also idiosyncratic. Furthermore, Wertheimer's thesis does not specify precisely under which parameters "pure movement" was observed. Moreover, it is difficult to reproduce the phenomenon. Edwin Boring's influential history of the psychology of sensation and perception, first published in 1942, contributed to this confusion. Boring listed the phenomena Wertheimer had observed and sorted them by the length of the interstimulus interval. However, Boring placed the phi phenomenon in the wrong position, namely as having a relatively long inter stimulus interval. In fact, with such long intervals, subjects do not perceive movement at all; they only observe two objects appearing successively.
This confusion has probably contributed to the "rediscovery" of the phi phenomenon under other names, for example, as "omega motion," "afterimage motion," and "shadow motion."
Reverse phi illusion
As the apparent phi movement is perceived by the human visual system when two stationary, similar optical stimuli presented next to each other are exposed successively at high frequency, there is also a reversed version of this motion: the reverse phi illusion. The reverse phi illusion is the kind of phi phenomenon in which the display fades or dissolves from its positive direction to a displaced negative, so that the apparent motion humans perceive is opposite to the actual physical displacement. The reverse phi illusion is typically produced with black-and-white patterns whose contrast reverses between frames.
It is believed that the reverse phi illusion is at root a brightness effect, occurring when a brightness-reversing picture moves across the retina. It can be explained by a visual receptive field model in which visual stimuli are summated spatially (a process that is the reverse of spatial differentiation). This spatial summation blurs the contour to a small extent, and thus changes the brightness perceived. Four predictions of this receptive field model have been confirmed. First, foveal reverse phi should break down when the displacement is greater than the width of foveal receptive fields. Second, the reverse phi illusion exists in the peripheral retina for greater displacements than in the fovea, since receptive fields are larger in the peripheral retina. Third, the spatial summation by the receptive fields can be increased by visually blurring the reversed phi stimulus, projecting it on a screen with a defocused lens. Fourth, the amount of reversed phi illusion should increase as the displacement between positive and negative pictures decreases.
Indeed, the visual system processes forward and reversed phi phenomena in the same way. It perceives phi movement between individual points of corresponding brightness in successive frames, and phi movement is determined on a local, point-for-point basis mediated by brightness, rather than on a global basis.
Neural mechanism underlying sensitivity to reversed phi phenomenon
T4 and T5 motion detector cells are necessary and sufficient for reversed phi behavior, and there are no other pathways that produce turning responses to reversed phi motion.
Tangential cells show a partial voltage response when stimulated with reversed phi motion.
These responses are consistent with the Hassenstein–Reichardt detector model (see below).
There are substantial responses to reversed phi in T4 dendrites, and marginal responses in T5 dendrites.
Phi phenomenon and beta movement
The phi phenomenon has long been confused with beta movement; however, the founder of the Gestalt school of psychology, Max Wertheimer, distinguished between them in 1912. While the phi phenomenon and beta movement can, in a broader sense, be placed in the same category, they are in fact quite distinct.
Firstly, the difference is at the neuroanatomical level. Visual information is processed in two pathways: one processes position and motion, and the other processes form and color. An object that is moving or changing position is likely to stimulate both pathways and result in a percept of beta movement, whereas an object that changes position too rapidly may instead result in a percept of pure movement such as the phi phenomenon.
Secondly, the phi phenomenon and beta movement also differ perceptually. In the phi phenomenon, two stimuli A and B are presented successively and the observer perceives motion passing over A and B; in beta movement, with the same two stimuli presented in succession, the observer perceives an object actually moving from position A to position B.
The difference also lies at the cognitive level, in how the visual system interprets movement, based on the assumption that the visual system solves an inverse problem of perceptual interpretation. For neighboring stimuli produced by an object, the visual system has to infer the object, since the neighboring stimuli do not give a complete picture of reality. There is more than one way for the visual system to interpret them, so it must place constraints on the multiple interpretations in order to arrive at a unique and authentic one. The principles the visual system employs to set these constraints often relate to simplicity and likelihood.
Hassenstein–Reichardt detector model
The Hassenstein–Reichardt detector model is considered to be the first mathematical model to propose that the visual system estimates motion by detecting a temporal cross-correlation of light intensities from two neighboring points: in short, a theoretical neural circuit for how the visual system tracks motion. This model can explain and predict the phi phenomenon and its reversed version. It consists of two locations with two visual inputs; if an input is detected at one location, the signal is sent to the other location. The two visual inputs are asymmetrically filtered in time, and the visual contrast at one location is multiplied with the time-delayed contrast from the other location. Finally, the two multiplication results are subtracted to obtain an output.

Therefore, two positive or two negative signals generate a positive output, while one positive and one negative input produce a negative output. This corresponds mathematically to the sign rule for multiplication.

For the phi phenomenon, a motion detector detects a change in light intensity at one point on the retina, and the visual system then computes the correlation of that change with a change in light intensity at a neighboring point on the retina, after a short delay.
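The correlate-and-subtract operation described above is straightforward to simulate. The sketch below is a minimal, illustrative implementation of a single Hassenstein–Reichardt unit in Python; the one-sample delay, the specific stimulus, and all variable names are assumptions chosen for clarity, not code from any published model.

```python
import numpy as np

def hassenstein_reichardt(left, right, delay=1):
    """Minimal Hassenstein-Reichardt correlator for two input channels.

    left, right : 1-D arrays of light intensity over time at two
                  neighboring retinal locations.
    delay       : temporal delay (in samples, >= 1) applied to each
                  channel before cross-multiplication.

    Returns the detector output over time: positive values indicate
    apparent motion from left to right, negative values the reverse.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    # Delay each channel by shifting it in time (zero-padded at the start).
    left_delayed = np.concatenate([np.zeros(delay), left[:-delay]])
    right_delayed = np.concatenate([np.zeros(delay), right[:-delay]])
    # Multiply each delayed channel with the undelayed opposite channel,
    # then subtract the two mirror-symmetric half-detector products.
    return left_delayed * right - right_delayed * left

# A bright spot appears at the left location, then at the right location:
left_stim = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
right_stim = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(hassenstein_reichardt(left_stim, right_stim))   # positive peak
# Reversing the contrast of the second flash flips the sign of the
# output: this is exactly the reverse phi prediction.
print(hassenstein_reichardt(left_stim, -right_stim))  # negative peak
```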
Reichardt model
The Reichardt model is a more complex form of the simplest Hassenstein–Reichardt detector model and is considered a pairwise model with a common quadratic nonlinearity. Because Fourier methods are linear, the Reichardt model introduces a multiplicative nonlinearity when visual responses to luminance changes at different element locations are combined. In this model, one photoreceptor input is delayed by a filter and then compared, by multiplication, with the input from a neighboring location. The inputs are filtered twice in a mirror-symmetrical manner, once before the multiplication and once after it, which yields a second-order motion estimate. This generalized Reichardt model allows arbitrary filters both before the multiplicative nonlinearity and after it. The phi phenomenon is often regarded as first-order motion, but according to this model reversed phi can be both first-order and second-order.
See also
Color phi phenomenon
Motion perception
External links
Beta movement and Phi phenomenon.
References
Concepts in film theory
Optical illusions
Visual perception
1912 introductions | Phi phenomenon | [
"Physics"
] | 2,318 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
241,649 | https://en.wikipedia.org/wiki/Cochlear%20implant | A cochlear implant (CI) is a surgically implanted neuroprosthesis that provides a person who has moderate-to-profound sensorineural hearing loss with sound perception. With the help of therapy, cochlear implants may allow for improved speech understanding in both quiet and noisy environments. A CI bypasses acoustic hearing by direct electrical stimulation of the auditory nerve. Through everyday listening and auditory training, cochlear implants allow both children and adults to learn to interpret those signals as speech and sound.
The implant has two main components. The outside component is generally worn behind the ear, but could also be attached to clothing, for example, in young children. This component, the sound processor, contains microphones, electronics that include digital signal processor (DSP) chips, battery, and a coil that transmits a signal to the implant across the skin. The inside component, the actual implant, has a coil to receive signals, electronics, and an array of electrodes which is placed into the cochlea, which stimulate the cochlear nerve.
The surgical procedure is performed under general anesthesia. Surgical risks are minimal and most individuals will undergo outpatient surgery and go home the same day. However, some individuals will experience dizziness, and on rare occasions, tinnitus or facial nerve bruising.
From the early days of implants in the 1970s and the 1980s, speech perception via an implant has steadily increased. More than 200,000 people in the United States had received a CI through 2019. Many users of modern implants gain reasonable to good hearing and speech perception skills post-implantation, especially when combined with lipreading. One of the challenges that remain with these implants is that hearing and speech understanding skills after implantation show a wide range of variation across individual implant users. Factors such as age of implantation, parental involvement and education level, duration and cause of hearing loss, how the implant is situated in the cochlea, the overall health of the cochlear nerve, and individual capabilities of re-learning are considered to contribute to this variation.
History
André Djourno and Charles Eyriès invented the original cochlear implant in 1957. Their design distributed stimulation using a single channel.
William House also invented a cochlear implant in 1961. In 1964, Blair Simmons and Robert L. White implanted a single-channel electrode in a patient's cochlea at Stanford University. However, research indicated that these single-channel cochlear implants were of limited usefulness because they cannot stimulate different areas of the cochlea at different times to allow differentiation between low and mid to high frequencies as required for detecting speech.
The next step in the development of the CI was its clinical trial in a cohort of patients. In the late 1960s, Robin Michelson and his colleague Melvin Bartz constructed a cochlear device from biocompatible materials that could be implanted in human patients. The system was implanted in four patients, and the report of the hearing results represented a watershed for clinically applicable cochlear implants. Robin Michelson, Robert Schindler, and Michael Merzenich at the University of California, San Francisco, conducted these experiments in 1970 and 1971. Michelson, a clinical pioneer, and Merzenich, a talented basic scientist with a solid foundation in neurophysiology, were integral elements of the UCSF cochlear implant team. Michelson was recognized for implanting a single-channel device into a congenitally deaf woman, who demonstrated auditory sensations from stimulation as well as pitch perception for stimulus frequencies below 600 Hz. This patient, however, had no word recognition. His pioneering work was presented, but not well received, at the 1971 annual meeting of the American Otological Society (Michelson, 1971; Merzenich et al., 1973). In 1973, the first international conference on the "electrical stimulation of the acoustic nerve as a treatment for profound sensorineural deafness in man" was organized in San Francisco.
NASA engineer Adam Kissiah started working in the mid-1970s on what would become the modern cochlear implant. Kissiah used his knowledge learned while working as an electronics instrumentation engineer for NASA. This work took place over three years, when Kissiah would spend his lunch breaks and evenings in Kennedy Space Center's technical library, studying the impact of engineering principles on the inner ear. In 1977, NASA helped Kissiah obtain a patent for the cochlear implant; Kissiah later sold the patent rights.
The modern multi-channel cochlear implant was independently developed and commercialized by two separate teams—one led by Graeme Clark in Australia and another by Ingeborg Hochmair and her future husband, Erwin Hochmair in Austria, with the Hochmairs' device first implanted in a person in December 1977 and Clark's in August 1978.
Parts
Cochlear implants bypass most of the peripheral auditory system which receives sound and converts that sound into movements of hair cells in the cochlea; the deflection of stereocilia causes an influx of potassium ions into the hair cells, and the depolarisation in turn stimulates calcium influx, which increases release of the neurotransmitter glutamate. Excitation of the cochlear nerve by the neurotransmitter sends signals to the brain, which creates the experience of sound. With an implant, instead, the devices pick up sound and digitize it, convert that digitized sound into electrical signals, and transmit those signals to electrodes embedded in the cochlea. The electrodes electrically stimulate the cochlear nerve, causing it to send signals to the brain.
There are several systems available, but generally they have the following components:
External:
one or more microphones that pick up sound from the environment
a speech processor which selectively filters sound to prioritize audible speech
a transmitter that sends power and the processed sound signals across the skin to the internal device by radio frequency transmission
Internal:
a receiver/stimulator, which receives signals from the speech processor and converts them into electric impulses
an electrode array embedded in the cochlea
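As a rough illustration of what the speech processor does, the sketch below implements a highly simplified filterbank with envelope extraction, loosely in the spirit of continuous-interleaved-sampling strategies: the signal is split into frequency bands and the slow envelope of each band would modulate the pulses sent to one electrode. The band edges, filter orders, cutoff frequency, and all names are illustrative assumptions; this does not reproduce any manufacturer's actual coding strategy.

```python
import numpy as np
from scipy import signal

def envelope_channels(audio, fs, band_edges):
    """Split audio into frequency bands and return each band's envelope.

    audio      : 1-D array of sound samples.
    fs         : sampling rate in Hz.
    band_edges : ascending list of band-edge frequencies in Hz; adjacent
                 pairs define the analysis bands (one per electrode).
    """
    envelopes = []
    for low, high in zip(band_edges[:-1], band_edges[1:]):
        # Band-pass filter selects one frequency region of the input.
        b, a = signal.butter(4, [low, high], btype="bandpass", fs=fs)
        band = signal.filtfilt(b, a, audio)
        # Rectify and low-pass filter to extract the slow envelope,
        # which would modulate the pulse train on one electrode.
        b_lp, a_lp = signal.butter(2, 200.0, btype="lowpass", fs=fs)
        envelopes.append(signal.filtfilt(b_lp, a_lp, np.abs(band)))
    return np.array(envelopes)

# Illustrative use: a 1 kHz tone analysed with four hypothetical bands.
fs = 16_000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)
env = envelope_channels(tone, fs, [300, 700, 1500, 3000, 6000])
print(env.shape)  # (4, 1600): one envelope per simulated electrode
```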
A totally implantable cochlear implant (TICI) is currently in development. This new type of cochlear implant incorporates all the current external components of an audio processor into the internal implant. The lack of external components makes the implant invisible from the outside and also means it is less likely to be damaged or broken.
Assistive listening devices
Most modern cochlear implants can be used with a range of assistive listening devices (ALDs), which help people to hear better in challenging listening situations. These situations could include talking on the phone, watching TV or listening to a speaker or teacher. With an ALD, the sound from devices including mobile phones or from an external microphone is sent to the audio processor directly, rather than being picked up by the audio processor's microphone. This direct transmission improves the sound quality for the user, making it easier to talk on the phone or stream music.
ALDs come in many forms, such as pens and specialist battery pack covers. Modern ALDs are usually able to receive sound from any Bluetooth device, including phones and computers, before transmitting it wirelessly to the audio processor. Most cochlear implants are also compatible with older ALD technology, such as a telecoil.
Surgical procedure
Surgical techniques
Implantation of children and adults can be done safely with few surgical complications and most individuals will undergo outpatient surgery and go home the same day.
Occasionally, the very young, the very old, or patients with a significant number of medical diseases at once may remain for overnight observation in the hospital. The procedure can be performed in an ambulatory surgery center in healthy individuals.
The surgical procedure most often used to implant the device is called mastoidectomy with facial recess approach (MFRA).
The procedure is usually done under general anesthesia. Complications of the procedure are rare, but include mastoiditis, otitis media (acute or with effusion), shifting of the implanted device requiring a second procedure, damage to the facial nerve, damage to the chorda tympani, and wound infections.
Cochlear implantation surgery is considered a clean procedure with an infection rate of less than 3%. Guidelines suggest that routine prophylactic antibiotics are not required. However, the potential cost of a postoperative infection is high (including the possibility of implant loss); therefore, a single preoperative intravenous injection of antibiotics is recommended.
The rate of complications is about 12% for minor complications and 3% for major complications; major complications include infections, facial paralysis, and device failure.
Although up to 20 new cases of post-CI bacterial meningitis occur annually worldwide, data demonstrates a reducing incidence. To avoid the risk of bacterial meningitis, the CDC recommends that adults and children undergoing CI receive age-appropriate vaccines that generate antibodies to Streptococcus pneumoniae.
The rate of transient facial nerve palsy is estimated to be approximately 1%. Device failure requiring reimplantation is estimated to occur 2.5–6% of the time. Up to one-third of people experience disequilibrium, vertigo, or vestibular weakness lasting more than one week after the procedure; in people under 70 these symptoms generally resolve over weeks to months, but in people over 70 the problems tend to persist.
In the past, cochlear implants were only approved for people who were deaf in both ears; a cochlear implant had been used experimentally in some people who had acquired deafness in one ear after they had learned how to speak, and none who were deaf in one ear from birth; clinical studies had been too small to draw generalizations.
Alternative surgical technique
Other approaches, such as going through the suprameatal triangle, are used. A systematic literature review published in 2016 found that studies comparing the two approaches were generally small, not randomized, and retrospective so were not useful for making generalizations; it is not known which approach is safer or more effective.
Endoscopic cochlear implantation
With the increased utilization of endoscopic ear surgery as popularized by professor Tarabichi, there have been multiple published reports on the use of endoscopic technique in cochlear implant surgery. However, this has been motivated by marketing and there is clear indication of increased morbidity associated with this technique as reported by the pioneer of endoscopic ear surgery.
Complications of cochlear implant surgery
As cochlear implant surgical techniques have advanced over the last four decades, the global complication rate for CI surgery in both children and adults has decreased from more than 35% in 1991 to less than 10% at present. The risk of postoperative facial nerve injury has also decreased over the last several decades to less than 1%, most of which demonstrated complete return of function within six months. The rate of permanent paralysis is approximately 1 per 1,000 surgeries and likely less than that in experienced CI centers.
The majority of complications following CI surgery are minor, requiring only conservative medical management or prolongation of the hospital stay. Less than 5% of all complications are major, resulting in surgical intervention or readmission to the hospital. Reported rates of revision cochlear implant surgery vary in adults and children from 3.8% to 8%, with the most common indications being device failure, infection, and migration of the implant or electrode. Disequilibrium and vertigo after CI surgery can occur, but the symptoms tend to be mild and short-lived. CI rarely results in significant or persistent adverse effects on the vestibular system when hearing conservation surgical techniques are practiced. Moreover, gait and postural stability may actually improve post-implantation.
Outcomes
Cochlear implant outcomes can be measured using speech recognition ability and functional improvements measured using patient reported outcome measures. While the degree of improvement after cochlear implantation may vary, the majority of patients who receive cochlear implants demonstrate a significant improvement in speech recognition ability compared to their preoperative condition.
Multiple meta-analyses of the literature from 2018 showed that CI users have large improvements in quality of life after cochlear implantation. This improvement occurs in many different facets of life that extends beyond communication including improved ability to engage in social activities; decreased mental effort from listening; and improved environmental sound awareness. Deaf adolescents with cochlear implants attending mainstream educational settings report high levels of scholastic self-esteem, friendship self-esteem, and global self-esteem. They also tend to hold mostly positive attitudes towards their cochlear implants, and as a part of their identity, a majority either do "not really think about" their hearing loss, or are "proud of it." Though advancements in cochlear implant technology have helped patients in their understanding of language, users are still unable to understand suprasegmental portions of language, which includes pitch.
A study by Johns Hopkins University determined that for a three-year-old child who receives them, cochlear implants can save $30,000 to $50,000 in special-education costs for elementary and secondary schools as the child is more likely to be mainstreamed in school and thus use fewer support services than similarly deaf children.
Efficacy
A 2019 study found that bilateral cochlear implantation was widely regarded as the most beneficial hearing intervention for acceptable candidates, although it is more likely to be performed and reimbursed in children than adults. The study also found that the efficacy of bilateral implantation could be improved by enhancing communication between the two implants and by developing sound coding strategies specifically for bilateral users.
Early research reviews found that the ability to communicate in spoken language was better the earlier the implantation was performed. The reviews also found that, overall, while cochlear implants provide open-set speech understanding for the majority of implanted profoundly hearing-impaired children, it was not possible to accurately predict the specific outcome of the given implanted child. Research since then has reported long-term socio-economic benefits for children as well as audiological outcomes including improved sound localization and speech perception. A consensus statement from the European Bilateral Pediatric Cochlear Implant Forum also confirmed the importance of bilateral cochlear implantation in children. In adults, new research shows that bilateral implantation can improve quality of life and speech intelligibility in quiet and noise.
A 2015 review examined whether CI implantation to treat people with bilateral hearing loss had any effect on tinnitus. This review found the quality of evidence to be poor and the results variable: overall total tinnitus suppression rates for patients who had tinnitus prior to surgery varied from 8% to 45% of people who received CI; a decrease of tinnitus was seen in 25% to 72% of people; for 0% to 36% of people there was no change; an increase of tinnitus occurred in 0% to 25% of patients; and in 0 to 10% of cases, people who did not have tinnitus before the procedure developed it. Further research found that the electrical stimulation of the CI is at least partly responsible for the general reduction in symptoms. A 2019 study found that although tinnitus suppression in patients with CIs is multifactorial, simply having the CI switched on without any audiological input (while standing alone in a soundproof booth) reduced the symptoms of tinnitus. This would suggest that it is the electrical stimulation that explains the decrease in tinnitus symptoms for many patients, and not only the increased access to sound.
A 2015 literature review on the use of CI for people with auditory neuropathy spectrum disorder found that, as of that date, description and diagnosis of the condition was too heterogeneous to make clear claims about whether CI is a safe and effective way to manage it.
The data for cochlear implant outcomes in older adults differs. A 2016 research study found that age at implantation was highly correlated with post-operative speech understanding performance for various test measures. In this study, people who were implanted at age 65 or older performed significantly worse on speech perception testing in quiet and in noisy conditions compared to younger CI users. Other studies have shown different outcomes, with some reporting that adults implanted at the age of 65 and older showed audiological and speech discrimination outcomes similar to younger adults. While cochlear implants demonstrate substantial benefit across all age groups, results will depend on cognitive factors that are ultimately highly age dependent. However, studies have documented the benefit of cochlear implants in octogenarians.
The effects of aging on central auditory processing abilities are thought to play an important role in impacting an individual's speech perception with a cochlear implant. The Lancet reported that untreated hearing loss in adults is the number one modifiable risk factor for dementia. In 2017, a study also reported that adults using a cochlear implant had significantly improved cognitive outcomes including working memory, reaction time, and cognitive flexibility compared to people who were waiting to receive a cochlear implant.
Prolonged duration of deafness is another factor that is thought to have a negative impact on overall speech understanding outcomes for CI users. However, a study found no statistical difference in the speech understanding abilities of CI patients over 65 who had been hearing impaired for 30 years or more prior to implantation. In general, outcomes for CI patients are dependent upon the individual's level of motivation, expectations, exposure to speech stimuli and consistent participation in aural rehabilitation programs.
A 2016 systematic review of CI for people with unilateral hearing loss (UHL) found that of the studies conducted and published, none were randomized, only one evaluated a control group, and no study was blinded. After eliminating multiple uses of the same subjects, the authors found that 137 people with UHL had received a CI. While acknowledging the weakness of the data, the authors found that CI in people with UHL improves sound localization compared with other treatments in people who lost hearing after they learned to speak; in the one study that examined this, CI did improve sound localization in people with UHL who lost hearing before learning to speak. It appeared to improve speech perception and to reduce tinnitus.
In terms of quality of life, several studies have shown that cochlear implants are beneficial in many aspects of quality of life, including communication improvements and positive effects on social, emotional, psychological and physical well-being. A 2017 narrative review also concluded that the quality of life scores of children using cochlear implants were comparable to those of children without hearing loss. Studies involving adults of all ages reported significant improvement in QoL after implantation when compared to adults with hearing aids. This was often independent of audiological performance.
Society and culture
Usage
By one estimate, approximately 188,000 individuals had been fitted with cochlear implants; the same publication cited approximately 324,000 cochlear implant devices as having been surgically implanted. In the U.S., roughly 58,000 devices had been implanted in adults and 38,000 in children. The Ear Foundation in the United Kingdom estimates the number of cochlear implant recipients in the world to be about 600,000. The American Cochlear Implant Alliance estimates that 217,000 people received CIs in the United States through the end of 2019.
Cost and insurance
Cochlear implantation includes the medical device as well as related services and procedures including pre-operative testing, the surgery, and aftercare that includes audiology and speech language pathology services. These are provided over time by a team of clinicians with specialized training. All of these services, as well as the cochlear implant device and related peripherals, are part of the medical intervention and are typically covered by health insurance in the United States and many areas of the world. These medical services and procedures include candidacy evaluation, hospital services inclusive of supplies and medications used during surgery, surgeon and other physicians such as anesthesiologists, the cochlear implant device and system kit, and programming and (re)habilitation following the surgery. In many countries around the world, the cost of cochlear implantation and aftercare is covered by health insurance. However, financial factors impact the evaluation selection process. Children with public health insurance or no health insurance are less likely to receive the implant before 2 years old.
In the US, as cochlear implants have become more commonplace and accepted as a valuable and cost effective health intervention, insurance coverage has expanded to include private insurance, Medicare, Tricare, the VA System, other federal health plans, and Medicaid. In September 2022 the Centers for Medicare & Medicaid Services expanded coverage of cochlear implants for appropriate candidates under Medicare. Candidates must demonstrate limited benefit with appropriately fit hearing aids but with criteria now defined by test scores of less than or equal to 60% correct in the best-aided listening condition on recorded tests of open-set sentence recognition. Just as there is with any medical procedure, there are typically co-pays which vary depending upon the insurance plan.
In the United Kingdom, the NHS covers cochlear implants in full, as does Medicare in Australia, and the Department of Health in Ireland, Seguridad Social in Spain, Sistema Sanitario Nazionale in Italy, Sécurité Sociale in France and Israel, and the Ministry of Health or ACC (depending on the cause of deafness) in New Zealand. In Germany and Austria, the cost is covered by most health insurance organizations.
Public health
6.1% of the world population live with hearing loss, and it is predicted that by 2050, more than 900 million people around the globe will have a disabling hearing loss. According to a WHO report, unaddressed hearing loss costs the world 980 billion dollars annually. Particularly hard hit are the healthcare and educational sectors, as well as societal costs. 53% of these costs are attributable to low- and middle-income countries.
The WHO reports that cochlear implants have been shown to be a cost-effective way to mitigate the challenges of hearing loss. In a low-to-middle-income setting, every dollar invested in unilateral cochlear implants has a return on investment of 1.46 dollars. This rises to a return on investment of 4.09 dollars in an upper-middle-income setting. A study in Colombia assessed the lifetime investments made in 68 children who received cochlear implants at an early age. Taking into account the cost of the device and any other medical costs, follow-up, speech therapy, batteries and travel, each child required an average investment of US$99,000 over the course of their life (assuming a life span of 78 years for women and 72 years for men). The study concluded that for every dollar invested in rehabilitation of a child with a cochlear implant, there was a return on investment of US$2.07.
Manufacturers
As of 2021, four cochlear implant devices approved for use in the United States are manufactured by Cochlear Limited, the Advanced Bionics division of Sonova, MED-EL, and Oticon Medical.
In Europe, Africa, Asia, South America, and Canada, an additional device manufactured by Neurelec (later acquired by Oticon Medical) was available. A device made by Nurotron (China) was also available in some parts of the world. Each manufacturer has adapted some of the successful innovations of the other companies to its own devices. There is no consensus that any one of these implants is superior to the others. Users of all devices report a wide range of performance after implantation.
Criticism and controversy
Much of the strongest objection to cochlear implants has come from within the deaf community, some of whom are pre-lingually deaf people whose first language is a sign language. Some in the deaf community call cochlear implants audist and an affront to their culture, which, as they view it, is a minority threatened by the hearing majority. This is an old problem for the deaf community, going back as far as the 18th century with the argument of manualism vs. oralism. This is consistent with medicalisation and the standardisation of the "normal" body in the 19th century when differences between normal and abnormal began to be debated. It is important to consider the sociocultural context, particularly in regards to the deaf community, which has its own unique language and culture. This accounts for the cochlear implant being seen as an affront to their culture, as many do not believe that deafness is something that needs to be cured. However, it has also been argued that this does not necessarily have to be the case: the cochlear implant can act as a tool deaf people can use to access the "hearing world" without losing their deaf identity.
Cochlear implants for congenitally deaf children are most effective when implanted at a young age. Children who have had confirmed severe hearing loss can receive the implant as young as 9 months old. Evidence shows that deaf children of deaf parents (or with fluent signers as daily caregivers) learn signed language as effectively as hearing peers. Some deaf-community advocates recommend that all deaf children should learn sign language from birth, but more than 90% of deaf children are born to hearing parents. Since it takes years to become fluent in sign language, deaf children who grow up without amplification such as hearing aids or cochlear implants will not have daily access to fluent language models in households without fluent signers.
Critics of cochlear implants from deaf cultures also assert that the cochlear implant and the subsequent therapy often become the focus of the child's identity at the expense of language acquisition and ease of communication in sign language and deaf identity. They believe that measuring a child's success only by their mastery of speech will lead to a poor self-image as "disabled" (because the implants do not produce normal hearing) rather than having the healthy self-concept of a proudly deaf person. However, these assertions are not supported by research. The first children to receive cochlear implants as infants are only in their 20s (as of 2020), and anecdotal evidence points to a high level of satisfaction in this cohort, most of whom don't consider their deafness their primary identity.
Children with cochlear implants are most likely to be educated with listening and spoken language, without sign language and are often not educated with other deaf children who use sign language. Cochlear implants have been one of the technological and social factors implicated in the decline of sign languages in the developed world. Some Deaf activists have labeled the widespread implantation of children as a cultural genocide.
As the trend for cochlear implants in children grows, deaf-community advocates have tried to counter the "either or" formulation of oralism vs. manualism with a "both and" or "bilingual-bicultural" approach; some schools are now successfully integrating cochlear implants with sign language in their educational programs. However, there is disagreement among researchers about the effectiveness of methods using both sign and speech as compared to sign or speech alone.
Another point of controversy raised by advocates is that there are racial disparities in the cochlear implantation evaluation process. Data taken from 2010–2020 at one academic tertiary care institution showed that 68.5% of patients referred for evaluation were White, 18.5% were Black, and 12.3% were Asian; however, the institution's main service area was 46.9% White, 42.3% Black, and 7.7% Asian. It was also shown that the Black patients who were referred for evaluation had greater hearing loss than the White patients who were referred. Based on this study, Black patients receive cochlear implants at a disproportionately lower rate than White patients.
Notable recipients (partial list)
Jack Ashley, Baron Ashley of Stoke - British MP
Michael Chorost - American writer
Dorinda Cox - Australian politician
Lou Ferrigno - bodybuilder and actor
Tasha Ghouri - English TV personality
Daisy Kent - American TV personality
Elena LaQuatra - WTAE-TV news anchor
Mia le Roux - Miss South Africa 2024
Rush Limbaugh - American conservative radio host
Millicent Simmonds - American actress
Heather Whitestone - 1995 Miss America
Malala Yousafzai - Nobel peace prize recipient
See also
3D printing
Auditory brainstem response
Auditory brainstem implant
Bone-anchored hearing aid
Bone conduction
Brain implant
Ear trumpet
Electric Acoustic Stimulation
Electrophonic hearing
Neuroprosthetics
Noise health effects
Visual prosthesis
References
Further reading
Biderman, Beverly. Wired for Sound: A Journey into Hearing Rev. 2016 Briar Hill Publishing
External links
Cochlear Implants information from the National Institutes of Health
Cochlear Implants information from the U.S. Food & Drug Administration
1979 introductions
1979 in science
Artificial organs
Audiology
Australian inventions
Bionics
Cybernetics
Implants (medicine)
Medical devices
NASA spin-off technologies
Neuroprosthetics
Otology
Otorhinolaryngology
Ear surgery
Auditory system | Cochlear implant | [
"Engineering",
"Biology"
] | 6,073 | [
"Bionics",
"Medical technology",
"Medical devices",
"Artificial organs"
] |
242,001 | https://en.wikipedia.org/wiki/Nuclear%20chemistry | Nuclear chemistry is the sub-field of chemistry dealing with radioactivity, nuclear processes, and transformations in the nuclei of atoms, such as nuclear transmutation and nuclear properties.
It is the chemistry of radioactive elements such as the actinides, radium and radon together with the chemistry associated with equipment (such as nuclear reactors) which are designed to perform nuclear processes. This includes the corrosion of surfaces and the behavior under conditions of both normal and abnormal operation (such as during an accident). An important area is the behavior of objects and materials after being placed into a nuclear waste storage or disposal site.
It includes the study of the chemical effects resulting from the absorption of radiation within living animals, plants, and other materials. The radiation chemistry controls much of radiation biology as radiation has an effect on living things at the molecular scale. To explain it another way, the radiation alters the biochemicals within an organism, the alteration of the bio-molecules then changes the chemistry which occurs within the organism; this change in chemistry then can lead to a biological outcome. As a result, nuclear chemistry greatly assists the understanding of medical treatments (such as cancer radiotherapy) and has enabled these treatments to improve.
It includes the study of the production and use of radioactive sources for a range of processes. These include radiotherapy in medical applications; the use of radioactive tracers within industry, science and the environment, and the use of radiation to modify materials such as polymers.
It also includes the study and use of nuclear processes in non-radioactive areas of human activity. For instance, nuclear magnetic resonance (NMR) spectroscopy is commonly used in synthetic organic chemistry and physical chemistry and for structural analysis in macro-molecular chemistry.
History
After Wilhelm Röntgen discovered X-rays in 1895, many scientists began to work on ionizing radiation. One of these was Henri Becquerel, who investigated the relationship between phosphorescence and the blackening of photographic plates. When Becquerel (working in France) discovered that, with no external source of energy, the uranium generated rays which could blacken (or fog) the photographic plate, radioactivity was discovered. Marie Skłodowska-Curie (working in Paris) and her husband Pierre Curie isolated two new radioactive elements from uranium ore. They used radiometric methods to identify which stream the radioactivity was in after each chemical separation; they separated the uranium ore into each of the different chemical elements that were known at the time, and measured the radioactivity of each fraction. They then attempted to separate these radioactive fractions further, to isolate a smaller fraction with a higher specific activity (radioactivity divided by mass). In this way, they isolated polonium and radium. It was noticed in about 1901 that high doses of radiation could cause an injury in humans. Henri Becquerel had carried a sample of radium in his pocket and as a result he suffered a highly localized dose which resulted in a radiation burn. This injury resulted in the biological properties of radiation being investigated, which in time resulted in the development of medical treatment.
Ernest Rutherford, working in Canada and England, showed that radioactive decay can be described by a simple equation (a first-order linear differential equation, now described as first-order kinetics), implying that a given radioactive substance has a characteristic "half-life" (the time taken for the amount of radioactivity present in a source to diminish by half). He also coined the terms alpha, beta and gamma rays; he converted nitrogen into oxygen; and, most importantly, he supervised the students who conducted the Geiger–Marsden experiment (gold foil experiment), which showed that the 'plum pudding model' of the atom was wrong. In the plum pudding model, proposed by J. J. Thomson in 1904, the atom is composed of electrons surrounded by a 'cloud' of positive charge to balance the electrons' negative charge. To Rutherford, the gold foil experiment implied that the positive charge was confined to a very small nucleus, leading first to the Rutherford model and eventually to the Bohr model of the atom, in which the positive nucleus is surrounded by the negative electrons.
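In modern notation, the first-order kinetics Rutherford described takes a standard textbook form (the following display is a conventional restatement, not a quotation from his papers):

```latex
\frac{\mathrm{d}N}{\mathrm{d}t} = -\lambda N
\quad\Longrightarrow\quad
N(t) = N_0\, e^{-\lambda t},
\qquad
t_{1/2} = \frac{\ln 2}{\lambda}
```

Here N is the number of undecayed nuclei and λ is the decay constant; t₁/₂ is the characteristic half-life defined above.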
In 1934, Marie Curie's daughter (Irène Joliot-Curie) and son-in-law (Frédéric Joliot-Curie) were the first to create artificial radioactivity: they bombarded boron with alpha particles to make the neutron-poor isotope nitrogen-13; this isotope emitted positrons. In addition, they bombarded aluminium and magnesium with neutrons to make new radioisotopes.
In the early 1920s Otto Hahn created a new line of research. Using the "emanation method", which he had recently developed, and the "emanation ability", he founded what became known as "applied radiochemistry" for the researching of general chemical and physical-chemical questions. In 1936 Cornell University Press published a book in English (and later in Russian) titled Applied Radiochemistry, which contained the lectures given by Hahn when he was a visiting professor at Cornell University in Ithaca, New York, in 1933. This important publication had a major influence on almost all nuclear chemists and physicists in the United States, the United Kingdom, France, and the Soviet Union during the 1930s and 1940s, laying the foundation for modern nuclear chemistry.
Hahn and Lise Meitner discovered radioactive isotopes of radium, thorium, protactinium and uranium. He also discovered the phenomena of radioactive recoil and nuclear isomerism, and pioneered rubidium–strontium dating. In 1938, Hahn, Lise Meitner and Fritz Strassmann discovered nuclear fission, for which Hahn received the 1944 Nobel Prize for Chemistry. Nuclear fission was the basis for nuclear reactors and nuclear weapons. Hahn is referred to as the father of nuclear chemistry and godfather of nuclear fission.
Main areas
Radiochemistry is the chemistry of radioactive materials, in which radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable).
For further details please see the page on radiochemistry.
Radiation chemistry
Radiation chemistry is the study of the chemical effects of radiation on matter; this is very different from radiochemistry as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide. Prior to radiation chemistry, it was commonly believed that pure water could not be destroyed.
Initial experiments were focused on understanding the effects of radiation on matter. Using an X-ray generator, Hugo Fricke studied the biological effects of radiation as it became a common treatment option and diagnostic method. Fricke proposed, and subsequently proved, that the energy from X-rays was able to convert water into activated water, allowing it to react with dissolved species.
Chemistry for nuclear power
Radiochemistry, radiation chemistry and nuclear chemical engineering play a very important role for uranium and thorium fuel precursors synthesis, starting from ores of these elements, fuel fabrication, coolant chemistry, fuel reprocessing, radioactive waste treatment and storage, monitoring of radioactive elements release during reactor operation and radioactive geological storage, etc.
Study of nuclear reactions
A combination of radiochemistry and radiation chemistry is used to study nuclear reactions such as fission and fusion. Some early evidence for nuclear fission was the formation of a short-lived radioisotope of barium which was isolated from neutron irradiated uranium (139Ba, with a half-life of 83 minutes and 140Ba, with a half-life of 12.8 days, are major fission products of uranium). At the time, it was thought that this was a new radium isotope, as it was then standard radiochemical practice to use a barium sulfate carrier precipitate to assist in the isolation of radium. More recently, a combination of radiochemical methods and nuclear physics has been used to try to make new 'superheavy' elements; it is thought that islands of relative stability exist where the nuclides have half-lives of years, thus enabling weighable amounts of the new elements to be isolated. For more details of the original discovery of nuclear fission see the work of Otto Hahn.
The nuclear fuel cycle
This is the chemistry associated with any part of the nuclear fuel cycle, including nuclear reprocessing. The fuel cycle includes all the operations involved in producing fuel, from mining, ore processing and enrichment to fuel production (Front-end of the cycle). It also includes the 'in-pile' behavior (use of the fuel in a reactor) before the back end of the cycle. The back end includes the management of the used nuclear fuel in either a spent fuel pool or dry storage, before it is disposed of into an underground waste store or reprocessed.
Normal and abnormal conditions
The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas: one area is concerned with operation under the intended conditions, while the other is concerned with maloperation conditions, where some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring.
Reprocessing
Law
In the United States, it is normal to use fuel once in a power reactor before placing it in a waste store. The long-term plan is currently to place the used civilian reactor fuel in a deep store. This non-reprocessing policy was started in March 1977 because of concerns about nuclear weapons proliferation. President Jimmy Carter issued a Presidential directive which indefinitely suspended the commercial reprocessing and recycling of plutonium in the United States. This directive was likely an attempt by the United States to lead other countries by example, but many other nations continue to reprocess spent nuclear fuels. The Russian government under President Vladimir Putin repealed a law which had banned the import of used nuclear fuel, which makes it possible for Russians to offer a reprocessing service for clients outside Russia (similar to that offered by BNFL).
PUREX chemistry
The current method of choice is to use the PUREX liquid-liquid extraction process which uses a tributyl phosphate/hydrocarbon mixture to extract both uranium and plutonium from nitric acid. This extraction is of the nitrate salts and is classed as being of a solvation mechanism. For example, the extraction of plutonium by an extraction agent (S) in a nitrate medium occurs by the following reaction.
Pu⁴⁺(aq) + 4 NO₃⁻(aq) + 2 S(organic) → [Pu(NO₃)₄S₂](organic)
A complex bond is formed between the metal cation, the nitrates and the tributyl phosphate, and a model compound of a dioxouranium(VI) complex with two nitrate anions and two triethyl phosphate ligands has been characterised by X-ray crystallography.
When the nitric acid concentration is high, extraction into the organic phase is favored; when the nitric acid concentration is low, the extraction is reversed (the organic phase is stripped of the metal). It is normal to dissolve the used fuel in nitric acid; after the removal of the insoluble matter, the uranium and plutonium are extracted from the highly active liquor. The loaded organic phase is then back-extracted to create a medium active liquor which contains mostly uranium and plutonium with only small traces of fission products. This medium active aqueous mixture is then extracted again by tributyl phosphate/hydrocarbon to form a new organic phase, and the metal-bearing organic phase is then stripped of the metals to form an aqueous mixture of only uranium and plutonium. The two stages of extraction are used to improve the purity of the actinide product, because the organic phase used for the first extraction suffers a far greater dose of radiation. The radiation can degrade the tributyl phosphate into dibutyl hydrogen phosphate, which can act as an extraction agent for both the actinides and other metals such as ruthenium. Dibutyl hydrogen phosphate can make the system behave in a more complex manner, as it tends to extract metals by an ion-exchange mechanism (extraction favoured by low acid concentration). To reduce its effect, it is common for the used organic phase to be washed with sodium carbonate solution to remove the acidic degradation products of the tributyl phosphate.
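The acid dependence described above can be illustrated with a toy equilibrium calculation. In the sketch below, the distribution ratio of plutonium between the organic and aqueous phases is modeled from the mass-action expression of the solvation reaction given earlier; the equilibrium constant and free-extractant concentration are invented illustrative values, not measured PUREX data.

```python
def plutonium_distribution_ratio(nitrate, free_tbp, k_eq=5.0):
    """Toy mass-action estimate of the Pu(IV) distribution ratio.

    From Pu4+ + 4 NO3- + 2 S(org) <-> [Pu(NO3)4S2](org), the ratio of
    organic- to aqueous-phase plutonium scales as
        D = K * [NO3-]**4 * [S]**2.
    nitrate  : aqueous nitrate concentration (mol/L).
    free_tbp : free extractant (TBP) concentration in the organic phase.
    k_eq     : illustrative equilibrium constant (not a measured value).
    """
    return k_eq * nitrate**4 * free_tbp**2

# Extraction is favored at high nitric acid concentration ...
print(plutonium_distribution_ratio(nitrate=3.0, free_tbp=1.1))  # large D
# ... and reversed (stripping) at low acid concentration.
print(plutonium_distribution_ratio(nitrate=0.1, free_tbp=1.1))  # small D
```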
New methods being considered for future use
The PUREX process can be modified to make a UREX (URanium EXtraction) process which could be used to save space inside high level nuclear waste disposal sites, such as Yucca Mountain nuclear waste repository, by removing the uranium which makes up the vast majority of the mass and volume of used fuel and recycling it as reprocessed uranium.
The UREX process is a PUREX process which has been modified to prevent the plutonium from being extracted. This can be done by adding a plutonium reductant before the first metal extraction step. In the UREX process, ~99.9% of the uranium and >95% of the technetium are separated from each other and from the other fission products and actinides. The key is the addition of acetohydroxamic acid (AHA) to the extraction and scrub sections of the process. The addition of AHA greatly diminishes the extractability of plutonium and neptunium, providing greater proliferation resistance than the plutonium extraction stage of the PUREX process.
By adding a second extraction agent, octyl(phenyl)-N,N-dibutyl carbamoylmethyl phosphine oxide (CMPO), in combination with tributyl phosphate (TBP), the PUREX process can be turned into the TRUEX (TRansUranic EXtraction) process. TRUEX was invented in the US by Argonne National Laboratory and is designed to remove the transuranic metals (Am/Cm) from waste. The idea is that by lowering the alpha activity of the waste, the majority of the waste can then be disposed of with greater ease. In common with PUREX, this process operates by a solvation mechanism.
As an alternative to TRUEX, an extraction process using a malondiamide has been devised. The DIAMEX (DIAMideEXtraction) process has the advantage of avoiding the formation of organic waste which contains elements other than carbon, hydrogen, nitrogen, and oxygen. Such an organic waste can be burned without the formation of acidic gases which could contribute to acid rain. The DIAMEX process is being worked on in Europe by the French CEA. The process is sufficiently mature that an industrial plant could be constructed with the existing knowledge of the process. In common with PUREX this process operates by a solvation mechanism.
Selective Actinide Extraction (SANEX). As part of the management of minor actinides, it has been proposed that the lanthanides and trivalent minor actinides should be removed from the PUREX raffinate by a process such as DIAMEX or TRUEX. To allow actinides such as americium to be either reused in industrial sources or used as fuel, the lanthanides must be removed. The lanthanides have large neutron cross sections and hence would poison a neutron-driven nuclear reaction. To date, the extraction system for the SANEX process has not been defined, and several different research groups are currently working towards one. For instance, the French CEA is working on a bis-triazinyl pyridine (BTP) based process.
Other systems such as the dithiophosphinic acids are being worked on by some other workers.
The UNEX (UNiversal EXtraction) process was developed in Russia and the Czech Republic; it is designed to remove all of the most troublesome radioisotopes (Sr, Cs and the minor actinides) from the raffinates left after the extraction of uranium and plutonium from used nuclear fuel. The chemistry is based upon the interaction of caesium and strontium with polyethylene oxide (polyethylene glycol) and a cobalt carborane anion (known as chlorinated cobalt dicarbollide). The actinides are extracted by CMPO, and the diluent is a polar aromatic such as nitrobenzene. Other diluents, such as meta-nitrobenzotrifluoride and phenyl trifluoromethyl sulfone, have been suggested as well.
Absorption of fission products on surfaces
Another important area of nuclear chemistry is the study of how fission products interact with surfaces; this is thought to control the rate of release and migration of fission products both from waste containers under normal conditions and from power reactors under accident conditions. Like chromate and molybdate, the 99TcO4− anion can react with steel surfaces to form a corrosion-resistant layer. In this way, these metal-oxo anions act as anodic corrosion inhibitors. The formation of 99TcO2 on steel surfaces is one effect which will retard the release of 99Tc from nuclear waste drums and from nuclear equipment lost before decontamination (e.g. submarine reactors lost at sea). This 99TcO2 layer renders the steel surface passive, inhibiting the anodic corrosion reaction. The radioactive nature of technetium makes this corrosion protection impractical in almost all situations. It has also been shown that 99TcO4− anions react to form a layer on the surface of activated carbon (charcoal) or aluminium. A short review of the biochemical properties of a series of key long-lived radioisotopes can be read online.
99Tc in nuclear waste may exist in chemical forms other than the 99TcO4− anion; these other forms have different chemical properties.
Similarly, the release of iodine-131 in a serious power reactor accident could be retarded by absorption on metal surfaces within the nuclear plant.
Education
Despite the growing use of nuclear medicine, the potential expansion of nuclear power plants, and worries about protection against nuclear threats and the management of the nuclear waste generated in past decades, the number of students opting to specialize in nuclear and radiochemistry has decreased significantly over the past few decades. Now, with many experts in these fields approaching retirement age, action is needed to avoid a workforce gap in these critical fields, for example by building student interest in these careers, expanding the educational capacity of universities and colleges, and providing more specific on-the-job training.
Nuclear and Radiochemistry (NRC) is mostly taught at university level, usually first at the master's and PhD degree level. In Europe, a substantial effort is being made to harmonize and prepare NRC education for the industry's and society's future needs. This effort is being coordinated in a project funded by the Coordinated Action supported by the European Atomic Energy Community's 7th Framework Program. Although NucWik is primarily aimed at teachers, anyone interested in nuclear and radiochemistry is welcome and can find a lot of information and material explaining topics related to NRC.
Spinout areas
Some methods first developed within nuclear chemistry and physics have become so widely used within chemistry and other physical sciences that they may be best thought of as separate from normal nuclear chemistry. For example, the isotope effect is used so extensively to investigate chemical mechanisms and the use of cosmogenic isotopes and long-lived unstable isotopes in geology that it is best to consider much of isotopic chemistry as separate from nuclear chemistry.
Kinetics (use within mechanistic chemistry)
The mechanisms of chemical reactions can be investigated by observing how the kinetics of a reaction is changed by making an isotopic modification of a substrate, known as the kinetic isotope effect. This is now a standard method in organic chemistry. Briefly, replacing normal hydrogen (protons) by deuterium within a molecule causes the molecular vibrational frequency of X-H (for example C-H, N-H and O-H) bonds to decrease, which leads to a decrease in vibrational zero-point energy. This can lead to a decrease in the reaction rate if the rate-determining step involves breaking a bond between hydrogen and another atom. Thus, if the reaction changes in rate when protons are replaced by deuteriums, it is reasonable to assume that the breaking of the bond to hydrogen is part of the step which determines the rate.
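The size of this effect can be estimated from zero-point energies alone. The sketch below computes a rough upper bound for a primary H/D kinetic isotope effect from a typical C–H stretching wavenumber; the chosen wavenumber and the assumption that the stretch's zero-point energy is fully lost in the transition state are textbook simplifications, not results from the sources cited here.

```python
import math

# Physical constants (SI units).
H = 6.626e-34    # Planck constant, J s
C = 2.998e10     # speed of light in cm/s (wavenumbers are per cm)
KB = 1.381e-23   # Boltzmann constant, J/K

def max_primary_kie(nu_h_cm, temperature=298.0):
    """Upper-bound primary H/D kinetic isotope effect from zero-point energy.

    nu_h_cm : X-H stretching wavenumber in cm^-1 (e.g. ~2900 for C-H).
    Assumes the X-D frequency is nu_H / sqrt(2) (from the roughly doubled
    reduced mass) and that the stretch's zero-point energy is fully lost
    in the transition state.
    """
    nu_d_cm = nu_h_cm / math.sqrt(2.0)
    # Difference in zero-point energies: (1/2) h c (nu_H - nu_D).
    delta_zpe = 0.5 * H * C * (nu_h_cm - nu_d_cm)
    return math.exp(delta_zpe / (KB * temperature))

print(round(max_primary_kie(2900.0), 1))  # about 7.8 at room temperature
```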
Uses within geology, biology and forensic science
Cosmogenic isotopes are formed by the interaction of cosmic rays with the nucleus of an atom. These can be used for dating purposes and for use as natural tracers. In addition, by careful measurement of some ratios of stable isotopes it is possible to obtain new insights into the origin of bullets, ages of ice samples, ages of rocks, and the diet of a person can be identified from a hair or other tissue sample. (See Isotope geochemistry and Isotopic signature for further details).
Biology
Within living things, isotopic labels (both radioactive and nonradioactive) can be used to probe how the complex web of reactions which makes up the metabolism of an organism converts one substance to another. For instance a green plant uses light energy to convert water and carbon dioxide into glucose by photosynthesis. If the oxygen in the water is labeled, then the label appears in the oxygen gas formed by the plant and not in the glucose formed in the chloroplasts within the plant cells.
For biochemical and physiological experiments and medical methods, a number of specific isotopes have important applications.
Stable isotopes have the advantage of not delivering a radiation dose to the system being studied; however, a significant excess of them in the organ or organism might still interfere with its functionality, and the availability of sufficient amounts for whole-animal studies is limited for many isotopes. Measurement is also difficult, and usually requires mass spectrometry to determine how much of the isotope is present in particular compounds, and there is no means of localizing measurements within the cell.
2H (deuterium), the stable isotope of hydrogen, is a stable tracer, the concentration of which can be measured by mass spectrometry or NMR. It is incorporated into all cellular structures. Specific deuterated compounds can also be produced.
15N, a stable isotope of nitrogen, has also been used. It is incorporated mainly into proteins.
Radioactive isotopes have the advantages of being detectable in very low quantities, in being easily measured by scintillation counting or other radiochemical methods, and in being localizable to particular regions of a cell, and quantifiable by autoradiography. Many compounds with the radioactive atoms in specific positions can be prepared, and are widely available commercially. In high quantities they require precautions to guard the workers from the effects of radiation—and they can easily contaminate laboratory glassware and other equipment. For some isotopes the half-life is so short that preparation and measurement is difficult.
By organic synthesis it is possible to create a complex molecule with a radioactive label that can be confined to a small area of the molecule. For short-lived isotopes such as 11C, very rapid synthetic methods have been developed to permit the rapid addition of the radioactive isotope to the molecule. For instance a palladium catalysed carbonylation reaction in a microfluidic device has been used to rapidly form amides and it might be possible to use this method to form radioactive imaging agents for PET imaging.
3H (tritium), the radioisotope of hydrogen, is available at very high specific activities, and compounds with this isotope in particular positions are easily prepared by standard chemical reactions such as hydrogenation of unsaturated precursors. The isotope emits very soft beta radiation, and can be detected by scintillation counting.
11C, carbon-11, is usually produced by cyclotron bombardment of 14N with protons; the resulting nuclear reaction is 14N(p,α)11C. Alternatively, boron in the form of boric oxide can be reacted with protons in a (p,n) reaction, or 10B can be reacted with deuterons. By rapid organic synthesis, the 11C compound formed in the cyclotron is converted into the imaging agent, which is then used for PET.
14C, carbon-14 can be made (as above), and it is possible to convert the target material into simple inorganic and organic compounds. In most organic synthesis work it is normal to try to create a product out of two approximately equal sized fragments and to use a convergent route, but when a radioactive label is added, it is normal to try to add the label late in the synthesis in the form of a very small fragment to the molecule to enable the radioactivity to be localised in a single group. Late addition of the label also reduces the number of synthetic stages where radioactive material is used.
18F, fluorine-18 can be made by deuteron bombardment of neon: 20Ne reacts in a (d,4He) reaction, i.e. 20Ne(d,α)18F. It is normal to use neon gas with a trace of stable fluorine (19F2). The 19F2 acts as a carrier which increases the yield of radioactivity from the cyclotron target by reducing the amount of radioactivity lost by absorption on surfaces. However, this reduction in loss comes at the cost of the specific activity of the final product.
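The premium on speed for these short-lived isotopes can be illustrated with a simple decay calculation (half-lives are standard approximate values; the sketch assumes no losses other than radioactive decay during synthesis):

import math

# Fraction of the starting activity of a PET isotope that survives a given
# synthesis time, assuming pure exponential decay.
HALF_LIFE_MIN = {"11C": 20.4, "18F": 109.8}   # approximate half-lives, minutes

def remaining(isotope, t_min):
    return math.exp(-math.log(2) * t_min / HALF_LIFE_MIN[isotope])

for iso in ("11C", "18F"):
    print(iso, f"after 30 min of synthesis: {remaining(iso, 30):.0%} left")
# 11C: ~36% survives; 18F: ~83% -- hence the premium on very fast 11C chemistry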
Nuclear spectroscopy
Nuclear spectroscopy comprises methods that use the nucleus to obtain information about the local structure of matter. Important methods are NMR (see below), Mössbauer spectroscopy and perturbed angular correlation. These methods use the interaction of the hyperfine field with the spin of the nucleus. The field can be magnetic and/or electric and is created by the electrons of the atom and its surrounding neighbours. Thus, these methods investigate the local structure of matter, mainly condensed matter in condensed matter physics and solid-state chemistry.
Nuclear magnetic resonance (NMR)
NMR spectroscopy uses the net spin of nuclei in a substance, and their absorption of energy, to identify molecules. It has become a standard spectroscopic tool within synthetic chemistry. One major use of NMR is to determine the bond connectivity within an organic molecule.
NMR imaging also uses the net spin of nuclei (commonly protons) for imaging. This is widely used for diagnostic purposes in medicine, and can provide detailed images of the inside of a person without inflicting any radiation upon them. In a medical setting, NMR is often known simply as "magnetic resonance" imaging, as the word 'nuclear' has negative connotations for many people.
See also
Important publications in nuclear chemistry
Nuclear physics
Nuclear spectroscopy
References
Further reading
Handbook of Nuclear Chemistry
Comprehensive handbook in six volumes by 130 international experts. Edited by Attila Vértes, Sándor Nagy, Zoltán Klencsár, Rezső G. Lovas, Frank Rösch. Springer, 2011.
Radioactivity Radionuclides Radiation
Textbook by Magill and Galy. Springer, 2005.
Radiochemistry and Nuclear Chemistry, 3rd Ed
Comprehensive textbook by Choppin, Liljenzin and Rydberg. Butterworth-Heinemann, 2001.
Radiochemistry and Nuclear Chemistry, 4th Ed
Comprehensive textbook by Choppin, Liljenzin, Rydberg and Ekberg. Elsevier Inc., 2013.
Radioactivity, Ionizing Radiation and Nuclear Energy
Basic textbook for undergraduates by Jiri Hála and James D Navratil. Konvoj, Brno, 2003.
The Radiochemical Manual
Overview of the production and uses of both open and sealed sources. Edited by BJ Wilson and written by RJ Bayly, JR Catch, JC Charlton, CC Evans, TT Gorsuch, JC Maynard, LC Myerscough, GR Newbery, H Sheard, CBG Taylor and BJ Wilson. Published by The Radiochemical Centre (Amersham) and sold via HMSO, 1966 (second edition).
Chemistry | Nuclear chemistry | [
"Physics",
"Chemistry"
] | 5,798 | [
"Nuclear chemistry",
"nan",
"Nuclear physics"
] |
242,006 | https://en.wikipedia.org/wiki/Reaction%20rate | The reaction rate or rate of reaction is the speed at which a chemical reaction takes place, defined as proportional to the increase in the concentration of a product per unit time and to the decrease in the concentration of a reactant per unit time. Reaction rates can vary dramatically. For example, the oxidative rusting of iron under Earth's atmosphere is a slow reaction that can take many years, but the combustion of cellulose in a fire is a reaction that takes place in fractions of a second. For most reactions, the rate decreases as the reaction proceeds. A reaction's rate can be determined by measuring the changes in concentration over time.
Chemical kinetics is the part of physical chemistry that concerns how rates of chemical reactions are measured and predicted, and how reaction-rate data can be used to deduce probable reaction mechanisms. The concepts of chemical kinetics are applied in many disciplines, such as chemical engineering, enzymology and environmental engineering.
Formal definition
Consider a typical balanced chemical reaction:
a A + b B → p P + q Q
The lowercase letters (a, b, p, and q) represent stoichiometric coefficients, while the capital letters represent the reactants (A and B) and the products (P and Q).
According to IUPAC's Gold Book definition
the reaction rate v for a chemical reaction occurring in a closed system at constant volume, without a build-up of reaction intermediates, is defined as:

v = −(1/a)·d[A]/dt = −(1/b)·d[B]/dt = (1/p)·d[P]/dt = (1/q)·d[Q]/dt

where [X] denotes the concentration of the substance X (X = A, B, P or Q). The reaction rate thus defined has the units of mol/L/s.
The rate of a reaction is always positive. A negative sign is present to indicate that the reactant concentration is decreasing. The IUPAC recommends that the unit of time should always be the second. The rate of reaction differs from the rate of increase of concentration of a product P by a constant factor (the reciprocal of its stoichiometric number) and for a reactant A by minus the reciprocal of the stoichiometric number. The stoichiometric numbers are included so that the defined rate is independent of which reactant or product species is chosen for measurement. For example, if a = 1 and b = 3 then B is consumed three times more rapidly than A, but v = −d[A]/dt = −(1/3)·d[B]/dt is uniquely defined. An additional advantage of this definition is that for an elementary and irreversible reaction, v is equal to the product of the probability of overcoming the transition state activation energy and the number of times per second the transition state is approached by reactant molecules. When so defined, for an elementary and irreversible reaction, v is the rate of successful chemical reaction events leading to the product.
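As a quick numerical illustration of this definition (the data below are made up), the same v is obtained from either species once the stoichiometric numbers are divided out:

# Illustrative sketch: estimating v from sampled concentrations using the
# stoichiometry-normalized definition above.
a, b = 1, 3                       # stoichiometric coefficients of A and B

def rate(series, nu, dt):
    """Finite-difference rate from a concentration time series (mol/L),
    divided by the signed stoichiometric number nu (negative for reactants)."""
    return (series[-1] - series[0]) / ((len(series) - 1) * dt) / nu

A = [1.000, 0.990, 0.980]         # [A] sampled every 1.0 s
B = [3.000, 2.970, 2.940]         # [B] falls three times as fast, since b = 3a
print(rate(A, -a, 1.0))           # 0.01 mol/(L s)
print(rate(B, -b, 1.0))           # 0.01 mol/(L s): v is independent of the species chosen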
The above definition is only valid for a single reaction, in a closed system of constant volume. If water is added to a pot containing salty water, the concentration of salt decreases, although there is no chemical reaction.
For an open system, the full mass balance must be taken into account:

F_A,in − F_A,out + ∫ v dV = dN_A/dt

where
F_A,in is the inflow rate of A in molecules per second;
F_A,out the outflow;
v is the instantaneous reaction rate of A (in number concentration rather than molar) in a given differential volume, integrated over the entire system volume V at a given moment.
When applied to the closed system at constant volume considered previously, this equation reduces to:

v = d[A]/dt,

where the concentration [A] is related to the number of molecules N_A by [A] = N_A/(N_0·V). Here N_0 is the Avogadro constant.
For a single reaction in a closed system of varying volume the so-called rate of conversion can be used, in order to avoid handling concentrations. It is defined as the derivative of the extent of reaction ξ with respect to time:

rate of conversion = dξ/dt = (1/ν_i)·dn_i/dt = (1/ν_i)·(V·dC_i/dt + C_i·dV/dt)

Here ν_i is the stoichiometric coefficient for substance i, equal to −a, −b, p, and q in the typical reaction above. Also V is the volume of reaction and C_i is the concentration of substance i.
When side products or reaction intermediates are formed, the IUPAC recommends the use of the terms the rate of increase of concentration and rate of the decrease of concentration for products and reactants, properly.
Reaction rates may also be defined on a basis that is not the volume of the reactor. When a catalyst is used the reaction rate may be stated on a catalyst weight (mol g−1 s−1) or surface area (mol m−2 s−1) basis. If the basis is a specific catalyst site that may be rigorously counted by a specified method, the rate is given in units of s−1 and is called a turnover frequency.
Influencing factors
Factors that influence the reaction rate are the nature of the reaction, concentration, pressure, reaction order, temperature, solvent, electromagnetic radiation, catalyst, isotopes, surface area, stirring, and diffusion limit. Some reactions are naturally faster than others. The number of reacting species, their physical state (the particles that form solids move much more slowly than those of gases or those in solution), the complexity of the reaction and other factors can greatly influence the rate of a reaction.
Reaction rate increases with concentration, as described by the rate law and explained by collision theory. As reactant concentration increases, the frequency of collision increases. The rate of gaseous reactions increases with pressure, which is, in fact, equivalent to an increase in the concentration of the gas. The reaction rate increases in the direction where there are fewer moles of gas and decreases in the reverse direction. For condensed-phase reactions, the pressure dependence is weak.
The order of the reaction controls how the reactant concentration (or pressure) affects the reaction rate.
Usually conducting a reaction at a higher temperature delivers more energy into the system and increases the reaction rate by causing more collisions between particles, as explained by collision theory. However, the main reason that temperature increases the rate of reaction is that more of the colliding particles will have the necessary activation energy resulting in more successful collisions (when bonds are formed between reactants). The influence of temperature is described by the Arrhenius equation. For example, coal burns in a fireplace in the presence of oxygen, but it does not when it is stored at room temperature. The reaction is spontaneous at low and high temperatures but at room temperature, its rate is so slow that it is negligible. The increase in temperature, as created by a match, allows the reaction to start and then it heats itself because it is exothermic. That is valid for many other fuels, such as methane, butane, and hydrogen.
Reaction rates can be independent of temperature (non-Arrhenius) or decrease with increasing temperature (anti-Arrhenius). Reactions without an activation barrier (for example, some radical reactions), tend to have anti-Arrhenius temperature dependence: the rate constant decreases with increasing temperature.
Many reactions take place in solution and the properties of the solvent affect the reaction rate. The ionic strength also has an effect on the reaction rate.
Electromagnetic radiation is a form of energy. As such, it may speed up the rate or even make a reaction spontaneous as it provides the particles of the reactants with more energy. This energy is in one way or another stored in the reacting particles (it may break bonds, and promote molecules to electronically or vibrationally excited states...) creating intermediate species that react easily. As the intensity of light increases, the particles absorb more energy and hence the rate of reaction increases. For example, when methane reacts with chlorine in the dark, the reaction rate is slow. It can be sped up when the mixture is put under diffused light. In bright sunlight, the reaction is explosive.
The presence of a catalyst increases the reaction rate (in both the forward and reverse reactions) by providing an alternative pathway with a lower activation energy. For example, platinum catalyzes the combustion of hydrogen with oxygen at room temperature.
The kinetic isotope effect consists of a different reaction rate for the same molecule if it has different isotopes, usually hydrogen isotopes, because of the relative mass difference between hydrogen and deuterium.
In reactions on surfaces, which take place, for example, during heterogeneous catalysis, the rate of reaction increases as the surface area does. That is because more particles of the solid are exposed and can be hit by reactant molecules.
Stirring can have a strong effect on the rate of reaction for heterogeneous reactions.
Some reactions are limited by diffusion. All the factors that affect a reaction rate, except for concentration and reaction order, are taken into account in the reaction rate coefficient (the coefficient in the rate equation of the reaction).
Rate equation
For a chemical reaction a A + b B → p P + q Q, the rate equation or rate law is a mathematical expression used in chemical kinetics to link the rate of a reaction to the concentration of each reactant. For a closed system at constant volume, this is often of the form

v = k(T)·[A]^n·[B]^m − k_r(T)·[P]^i·[Q]^j

For reactions that go to completion (which implies a very small k_r), or if only the initial rate is analyzed (with vanishing initial product concentrations), this simplifies to the commonly quoted form

v = k(T)·[A]^n·[B]^m
For gas phase reaction the rate equation is often alternatively expressed in terms of partial pressures.
In these equations k(T) is the reaction rate coefficient or rate constant, although it is not really a constant, because it includes all the parameters that affect reaction rate, except for time and concentration. Of all the parameters influencing reaction rates, temperature is normally the most important one and is accounted for by the Arrhenius equation.
The exponents n and m are called reaction orders and depend on the reaction mechanism. For an elementary (single-step) reaction, the order with respect to each reactant is equal to its stoichiometric coefficient. For complex (multistep) reactions, however, this is often not true and the rate equation is determined by the detailed mechanism, as illustrated below for the reaction of H2 and NO.
For elementary reactions or reaction steps, the order and stoichiometric coefficient are both equal to the molecularity or number of molecules participating. For a unimolecular reaction or step, the rate is proportional to the concentration of molecules of reactant, so the rate law is first order. For a bimolecular reaction or step, the number of collisions is proportional to the product of the two reactant concentrations, or second order. A termolecular step is predicted to be third order, but also very slow as simultaneous collisions of three molecules are rare.
By using the mass balance for the system in which the reaction occurs, an expression for the rate of change in concentration can be derived. For a closed system with constant volume, such an expression can look like

d[P]/dt = p·k(T)·[A]^n·[B]^m
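A minimal numerical sketch of such an expression, integrating an assumed second-order rate law d[A]/dt = −k[A][B] for A + B → P with a forward-Euler step (k and the initial concentrations are illustrative, not from the source):

def integrate(k=0.5, A=1.0, B=2.0, P=0.0, dt=1e-3, t_end=10.0):
    """Forward-Euler integration of the rate law v = k[A][B] for A + B -> P."""
    t = 0.0
    while t < t_end:
        v = k * A * B          # rate law, mol/(L s)
        A -= v * dt
        B -= v * dt
        P += v * dt
        t += dt
    return A, B, P

print(integrate())  # A is nearly exhausted; P approaches 1.0 since A is limiting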
Example of a complex reaction: hydrogen and nitric oxide
For the reaction

2 H2 + 2 NO → N2 + 2 H2O

the observed rate equation (or rate expression) is

v = k·[H2]·[NO]^2
As for many reactions, the experimental rate equation does not simply reflect the stoichiometric coefficients in the overall reaction: It is third order overall: first order in H2 and second order in NO, even though the stoichiometric coefficients of both reactants are equal to 2.
In chemical kinetics, the overall reaction rate is often explained using a mechanism consisting of a number of elementary steps. Not all of these steps affect the rate of reaction; normally the slowest elementary step controls the reaction rate. For this example, a possible mechanism is

1. 2 NO ⇌ N2O2 (fast equilibrium)
2. N2O2 + H2 → N2O + H2O (slow)
3. N2O + H2 → N2 + H2O (fast)

Reactions 1 and 3 are very rapid compared to the second, so the slow reaction 2 is the rate-determining step. This is a bimolecular elementary reaction whose rate is given by the second-order equation

v = k2·[N2O2]·[H2]
where k2 is the rate constant for the second step.
However, N2O2 is an unstable intermediate whose concentration is determined by the fact that the first step is in equilibrium, so that [N2O2] = K1·[NO]^2, where K1 is the equilibrium constant of the first step. Substitution of this equation into the previous equation leads to a rate equation expressed in terms of the original reactants:

v = k2·K1·[H2]·[NO]^2
This agrees with the form of the observed rate equation if it is assumed that k = k2·K1. In practice the rate equation is used to suggest possible mechanisms which predict a rate equation in agreement with experiment.
The second molecule of H2 does not appear in the rate equation because it reacts in the third step, which is a rapid step after the rate-determining step, so that it does not affect the overall reaction rate.
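A small numerical check (with hypothetical values of k2 and K1, since none are given in the source) confirms the orders implied by v = k2·K1·[H2]·[NO]^2:

# Illustrative constants only: k2 in L/(mol s), K1 in L/mol.
k2, K1 = 1.0e3, 1.0e-4

def rate(H2, NO):
    return k2 * K1 * H2 * NO**2

v0 = rate(0.01, 0.01)
print(rate(0.02, 0.01) / v0)  # 2.0 -> doubling [H2] doubles the rate (first order)
print(rate(0.01, 0.02) / v0)  # 4.0 -> doubling [NO] quadruples the rate (second order)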
Temperature dependence
Each reaction rate coefficient k has a temperature dependency, which is usually given by the Arrhenius equation:

k = A·e^(−Ea/(R·T))

where
A is the pre-exponential factor or frequency factor,
e is the exponential function,
Ea is the activation energy,
R is the gas constant,
T is the absolute temperature.

Since at temperature T the molecules have energies given by a Boltzmann distribution, one can expect the number of collisions with energy greater than Ea to be proportional to e^(−Ea/(R·T)).
The values for A and Ea are dependent on the reaction. There are also more complex equations possible, which describe the temperature dependence of other rate constants that do not follow this pattern.
Temperature is a measure of the average kinetic energy of the reactants. As temperature increases, the kinetic energy of the reactants increases. That is, the particles move faster. With the reactants moving faster this allows more collisions to take place at a greater speed, so the chance of reactants forming into products increases, which in turn results in the rate of reaction increasing. A rise of ten degrees Celsius results in approximately twice the reaction rate.
The minimum kinetic energy required for a reaction to occur is called the activation energy and is denoted by or . The transition state or activated complex shown on the diagram is the energy barrier that must be overcome when changing reactants into products. The molecules with an energy greater than this barrier have enough energy to react.
For a successful collision to take place, the collision geometry must be right, meaning the reactant molecules must face the right way so the activated complex can be formed.
A chemical reaction takes place only when the reacting particles collide. However, not all collisions are effective in causing the reaction. Products are formed only when the colliding particles possess a certain minimum energy called the threshold energy. As a rule of thumb, reaction rates for many reactions double for every ten degrees Celsius increase in temperature. For a given reaction, the ratio of its rate constant at a higher temperature to its rate constant at a lower temperature is known as its temperature coefficient, Q. Q10 is commonly used as the ratio of rate constants that are ten degrees Celsius apart.
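For instance, the rule-of-thumb doubling can be checked against the Arrhenius equation for an assumed, typical activation energy (the value below is illustrative):

import math

R = 8.314          # gas constant, J/(mol K)
Ea = 50_000.0      # activation energy, J/mol (assumed, typical magnitude)

def k_rel(T):
    """Rate constant up to the pre-exponential factor A (A cancels in ratios)."""
    return math.exp(-Ea / (R * T))

Q10 = k_rel(308.15) / k_rel(298.15)
print(Q10)   # ~1.9: close to the rule-of-thumb doubling per ten degrees Celsius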
Pressure dependence
The pressure dependence of the rate constant for condensed-phase reactions (that is, when reactants and products are solids or liquid) is usually sufficiently weak in the range of pressures normally encountered in industry that it is neglected in practice.
The pressure dependence of the rate constant is associated with the activation volume. For the reaction proceeding through an activation-state complex:

A + B ⇌ |A⋯B|‡ → P

the activation volume, ΔV‡, is:

ΔV‡ = V‡ − VA − VB

where V denotes the partial molar volume of a species and ‡ (a double dagger) indicates the activation-state complex.

For the above reaction, one can expect the change of the reaction rate constant (based either on mole fraction or on molar concentration) with pressure at constant temperature to be:

(∂ ln kx/∂P)T = −ΔV‡/(R·T)
In practice, the matter can be complicated because the partial molar volumes and the activation volume can themselves be a function of pressure.
Reactions can increase or decrease their rates with pressure, depending on the value of ΔV‡. As an example of the possible magnitude of the pressure effect, some organic reactions were shown to double the reaction rate when the pressure was increased from atmospheric (0.1 MPa) to 50 MPa (which gives ΔV‡ = −0.025 L/mol).
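A rough check of this figure, assuming the activation volume is pressure-independent and taking 25 °C (the source does not state the temperature):

import math

R = 8.314                   # J/(mol K)
T = 298.15                  # K (assumed)
dV_act = -0.025e-3          # activation volume, m^3/mol (-0.025 L/mol)
dP = 50e6 - 0.1e6           # pressure increase, Pa

# Integrating (d ln k / dP)_T = -dV_act/(R T) at constant dV_act:
factor = math.exp(-dV_act * dP / (R * T))
print(factor)   # ~1.7, the same order as the roughly twofold increase quoted above;
                # the exact factor depends on the assumed temperature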
See also
Diffusion-controlled reaction
Dilution (equation)
Isothermal microcalorimetry
Rate of solution
Steady state approximation
Notes
External links
Chemical kinetics, reaction rate, and order (needs flash player)
Reaction kinetics, examples of important rate laws (lecture with audio).
Rates of reaction
Overview of Bimolecular Reactions (Reactions involving two reactants)
pressure dependence Can. J. Chem.
Chemical kinetics
Chemical reaction engineering
Temporal rates | Reaction rate | [
"Physics",
"Chemistry",
"Engineering"
] | 3,217 | [
"Temporal quantities",
"Chemical reaction engineering",
"Physical quantities",
"Chemical engineering",
"Temporal rates",
"Chemical kinetics"
] |
242,282 | https://en.wikipedia.org/wiki/Nucleocosmochronology | Nucleocosmochronology, or nuclear cosmochronology, is a technique used to determine timescales for astrophysical objects and events based on observed ratios of radioactive heavy elements and their decay products. It is similar in many respects to radiometric dating, in which trace radioactive impurities were selectively incorporated into materials when they were formed.
To calculate the age of formation of astronomical objects, the observed ratios of abundances of heavy radioactive and stable nuclides are compared to the primordial ratios predicted by nucleosynthesis theory. Both the radioactive elements and their decay products matter, and important examples include the long-lived radioactive nuclei Th-232, U-235, and U-238, all formed by the r-process. The process has been compared to radiocarbon dating. The ages of the objects are determined by placing constraints on the duration of nucleosynthesis in the galaxy.
Nucleocosmochronology has been employed to determine the age of the Sun ( billion years) and of the Galactic thin disk ( billion years), among other objects. It has also been used to estimate the age of the Milky Way itself by studying Cayrel's Star in the Galactic halo, which due to its low metallicity, is believed to have formed early in the history of the Galaxy.
Limiting factors in its precision are the quality of observations of faint stars and the uncertainty of the primordial abundances of r-process elements.
History
The first use of nuclear cosmochronology was in 1929, by Ernest Rutherford, who, shortly after the discovery that uranium has two naturally occurring radioactive isotopes with different half-lives, attempted to use their ratio to determine when the uranium had been produced. He assumed that both isotopes had been produced in equal abundances at a single moment in time and, applying an argument based on incorrect astrophysical assumptions, derived an age of about 6 billion years. He pioneered the idea that age could be calculated from the ratio of abundances of radioactive parent elements and their stable decay products.
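Rutherford's style of argument is easy to reproduce with modern half-lives; the sketch below assumes equal production of the two isotopes at a single instant and a present-day 235U/238U ratio of about 0.72%:

import math

t_half_235 = 7.04e8    # half-life of 235U, years
t_half_238 = 4.468e9   # half-life of 238U, years
lam235 = math.log(2) / t_half_235
lam238 = math.log(2) / t_half_238

ratio_now = 0.0072     # present-day N(235U)/N(238U)
# If the initial ratio was 1, then ratio_now = exp(-(lam235 - lam238) * t); solve for t.
t = math.log(1.0 / ratio_now) / (lam235 - lam238)
print(f"{t/1e9:.1f} billion years")   # ~5.9, close to Rutherford's ~6 billion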
According to a tribute written by colleagues, a large part of the modern science of nuclear cosmochronology grew out of work by John Reynolds and his students.
Model-independent techniques were developed in 1970.
Technique
It is necessary to know the initial ratios at which nucleosynthesis produces radioactive parent elements relative to the stable elements they decay into, before any decay occurs. These are the abundances the elements would have if the radioactive parents were stable and not producing daughter nuclei. The ratio of the abundance of a radioactive element to the abundance it would have if it were stable is called the remainder. Measurement of the current abundances of elements in objects, combined with nucleosynthesis theory, determines the remainders.
See also
Astrochemistry
Astronomical chronology
Geochronology
Gyrochronology
References
Dating methods
Astrophysics
Nuclear physics | Nucleocosmochronology | [
"Physics",
"Astronomy"
] | 598 | [
"Astronomical sub-disciplines",
"Astrophysics",
"Nuclear physics"
] |
22,103,184 | https://en.wikipedia.org/wiki/GHS%20hazard%20statements | Hazard statements form part of the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). They are intended to form a set of standardized phrases about the hazards of chemical substances and mixtures that can be translated into different languages. As such, they serve the same purpose as the well-known R-phrases, which they are intended to replace.
Hazard statements are one of the key elements for the labelling of containers under the GHS, along with:
an identification of the product
one or more hazard pictograms (where necessary)
a signal word – either Danger or Warning – where necessary
precautionary statements, indicating how the product should be handled to minimize risks to the user (as well as to other people and the general environment)
the identity of the supplier (who might be a manufacturer or importer).
Each hazard statement is designated a code, starting with the letter H and followed by three digits. Statements which correspond to related hazards are grouped together by code number, so the numbering is not consecutive. The code is used for reference purposes, for example to help with translations, but it is the actual phrase which should appear on labels and safety data sheets.
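As a small illustration of this code format (an informal sketch, not part of any official tooling), the pattern of "H" plus three digits, with an optional regional prefix and letter suffix as used later in this article, can be checked mechanically:

import re

# "H" plus three digits; (EU|AU) prefix and a trailing letter (e.g. EUH201A)
# cover the regional variants described below.
CODE_RE = re.compile(r"^(EU|AU)?H\d{3}[A-Za-z]?$")

for code in ("H200", "EUH014", "AUH029", "EUH201A", "R45"):
    print(code, bool(CODE_RE.match(code)))
# R45 fails: R-phrases belong to the older system the H codes replace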
Physical hazards
Health hazards
Environmental hazards
Country-specific hazard statements
European Union
The European Union has implemented the GHS through the CLP Regulation. Nevertheless, the older system based on the Dangerous Substances Directive was used in parallel until June 2015. Some R-phrases which do not have simple equivalents under the GHS have been retained under the CLP Regulation: the numbering mirrors the number of the previous R-phrase.
Physical properties
EUH006: Explosive with or without contact with air, deleted in the fourth adaptation to technical progress of CLP.
EUH014: Reacts violently with water
EUH018: In use may form flammable/explosive vapour-air mixture
EUH019: May form explosive peroxides
EUH044: Risk of explosion if heated under confinement
Health properties
EUH029: Contact with water liberates toxic gas
EUH031: Contact with acids liberates toxic gas
EUH032: Contact with acids liberates very toxic gas
EUH066: Repeated exposure may cause skin dryness or cracking
EUH070: Toxic by eye contact
EUH071: Corrosive to the respiratory tract
EUH380: May cause endocrine disruption in humans
EUH381: Suspected of causing endocrine disruption in humans
Environmental properties
EUH059: Hazardous to the ozone layer, superseded by GHS Class 5.1 in the second adaptation to technical progress of CLP.
EUH430: May cause endocrine disruption in the environment
EUH431: Suspected of causing endocrine disruption in the environment
EUH440: Accumulates in the environment and living organisms including in humans
EUH441: Strongly accumulates in the environment and living organisms including in humans
EUH450: Can cause long-lasting and diffuse contamination of water resources
EUH451: Can cause very long-lasting and diffuse contamination of water resources
Other EU hazard statements
Some other hazard statements intended for use in very specific circumstances have also been retained under the CLP Regulation. In this case, the numbering of the EU specific hazard statements can coincide with GHS hazard statements if the "EU" prefix is not included.
EUH201: Contains lead. Should not be used on surfaces liable to be chewed or sucked by children.
EUH201A: Warning! Contains lead.
EUH202: Cyanoacrylate. Danger. Bonds skin and eyes in seconds. Keep out of the reach of children.
EUH203: Contains chromium(VI). May produce an allergic reaction.
EUH204: Contains isocyanates. May produce an allergic reaction.
EUH205: Contains epoxy constituents. May produce an allergic reaction.
EUH206: Warning! Do not use together with other products. May release dangerous gases (chlorine).
EUH207: Warning! Contains cadmium. Dangerous fumes are formed during use. See information supplied by the manufacturer. Comply with the safety instructions.
EUH208: Contains <name of sensitising substance>. May produce an allergic reaction.
EUH209: Can become highly flammable in use.
EUH209A: Can become flammable in use.
EUH210: Safety data sheet available on request.
EUH211: Warning! Hazardous respirable droplets may be formed when sprayed. Do not breathe spray or mist.
EUH212: Warning! Hazardous respirable dust may be formed when used. Do not breathe dust.
EUH401: To avoid risks to human health and the environment, comply with the instructions for use.
Australia
The GHS was adopted in Australia from 1 January 2012 and becomes mandatory in States and Territories that have adopted the harmonised Work Health and Safety laws (other than Victoria and Western Australia) as of 1 January 2017. The National Code of Practice for the Preparation of Safety Data Sheets for Hazardous Chemicals includes 12 Australian-specific GHS Hazard Statements, as follows:
Physical hazard statements
AUH001: Explosive without moisture
AUH006: Explosive with or without contact with air
AUH014: Reacts violently with water
AUH018: In use, may form a flammable/explosive vapor-air mixture
AUH019: May form explosive peroxides
AUH044: Risk of explosion if heated under confinement
Human health hazard statements
AUH029: Contact with water liberates toxic gas
AUH031: Contact with acids liberates toxic gas
Additional non-GHS hazard statements
AUH032: Contact with acids liberates very toxic gas
AUH066: Repeated exposure may cause skin dryness or cracking
AUH070: Toxic by eye contact
AUH071: Corrosive to the respiratory tract
New Zealand
As of March 2009, the relevant New Zealand regulations under the Hazardous Substances and New Organisms Act 1996 do not specify the exact wording required for hazard statements. However, the New Zealand classification system includes three categories of environmental hazard which are not included in the GHS Rev.2:
Ecotoxicity to soil environment
Ecotoxicity to terrestrial vertebrates
Ecotoxicity to terrestrial invertebrates
These are classes 9.2–9.4 respectively of the New Zealand classification scheme, and are divided into subclasses according to the degree of hazard. Substances in subclass 9.2D ("Substances that are slightly harmful in the soil environment") do not require a hazard statement, while substances in the other subclasses require an indication of the general degree of hazard and general type of hazard.
Notes
References
("GHS Rev.4")
("GHS Rev.2")
(New Zealand)
(New Zealand)
(the "CLP Regulation")
External links
Chemical Hazard & Precautionary Phrases in 23 European Languages, machine-readable and versioned
Hazard statements | GHS hazard statements | [
"Chemistry"
] | 1,448 | [
"Globally Harmonized System"
] |
22,103,504 | https://en.wikipedia.org/wiki/Goss%20zeta%20function | In the field of mathematics, the Goss zeta function, named after David Goss, is an analogue of the Riemann zeta function for function fields. proved that it satisfies an analogue of the Riemann hypothesis. proved results for a higher-dimensional generalization of the Goss zeta function.
References
Zeta and L-functions | Goss zeta function | [
"Mathematics"
] | 69 | [
"Algebra stubs",
"Algebra"
] |
1,206,461 | https://en.wikipedia.org/wiki/ChorusOS | ChorusOS is a microkernel real-time operating system designed as a message passing computing model. ChorusOS began as the Chorus distributed real-time operating system research project at the French Institute for Research in Computer Science and Automation (INRIA) in 1979. During the 1980s, Chorus was one of two earliest microkernels (the other being Mach) and was developed commercially by startup company Chorus Systèmes SA. Over time, development effort shifted away from distribution aspects to real-time for embedded systems.
In 1997, Sun Microsystems acquired Chorus Systèmes for its microkernel technology, which went toward the new JavaOS. Sun (and subsequently Oracle) no longer supports ChorusOS. The founders of Chorus Systèmes started a new company called Jaluna in August 2002. Jaluna then became VirtualLogix, which was in turn acquired by Red Bend in September 2010. VirtualLogix designed embedded systems using Linux and ChorusOS (which it named VirtualLogix C5); it described C5 as a carrier-grade operating system and actively maintained it.
The latest source tree of ChorusOS, an evolution of version 5.0, was released as open-source software by Sun and is available at the Sun Download Center. The Jaluna project has completed these sources and published them online. Jaluna-1 is described there as a real-time Portable Operating System Interface (RT-POSIX) layer based on FreeBSD 4.1, together with the CDE cross-platform software development environment. ChorusOS is supported by popular Secure Sockets Layer and Transport Layer Security (SSL/TLS) libraries such as wolfSSL.
See also
JavaOS
References
Distributed operating systems
French inventions
Microkernel-based operating systems
Microkernels
Real-time operating systems
Sun Microsystems software
ARM operating systems
X86 operating systems | ChorusOS | [
"Technology"
] | 379 | [
"Real-time computing",
"Real-time operating systems"
] |
1,206,615 | https://en.wikipedia.org/wiki/Micro%20pitting | Micro pitting is a fatigue failure of the surface of a material commonly seen in rolling bearings and gears.
It is also known as grey staining, micro spalling or frosting.
Pitting and micropitting
The difference between pitting and micropitting is the size of the pits left by surface fatigue. Pits formed by micropitting are approximately 10–20 μm in depth, and to the unaided eye micropitting appears dull, etched or stained, with patches of gray. Normal pitting creates larger and more visible pits. Micropits originate from local contact between asperities caused by improper lubrication.
Causes
In a normal bearing the surfaces are separated by a layer of oil; this is known as elastohydrodynamic (EHD) lubrication. If the thickness of the EHD film is of the same order of magnitude as the surface roughness, the asperities of the opposing surfaces can interact and cause micro pitting. A thin EHD film may be caused by excess load or temperature, a lower oil viscosity than is required, low speed, or water in the oil. Water in the oil can make micro pitting worse by causing hydrogen embrittlement of the surface. Micro pitting occurs only under poor EHD lubrication conditions, and although it can affect all types of gears, it is particularly troublesome in heavily loaded gears with hardened teeth.
A surface with a deep scratch might break exactly at the scratch if stress is applied. The surface roughness can be pictured as a composite of many very small scratches, so high surface roughness reduces the durability of heavily stressed parts. An areal scan (see surface metrology) gives a better overview of the surface than a measurement along a single profile (profilometer). To quantify the surface roughness, ISO 25178 can be used.
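As a minimal sketch of one such areal parameter, Sa (the arithmetic mean height of ISO 25178) averages the absolute height deviations from the mean plane; the height map below is a stand-in for real instrument data, and form removal is simplified to plain mean subtraction:

def sa(heights):
    """Arithmetic mean height Sa from a 2-D list of surface heights (e.g. in um),
    taking the mean plane as the simple mean of all height samples."""
    n = sum(len(row) for row in heights)
    mean = sum(z for row in heights for z in row) / n
    return sum(abs(z - mean) for row in heights for z in row) / n

scan = [
    [0.10, 0.25, 0.05],
    [0.30, 0.15, 0.20],
    [0.00, 0.35, 0.10],
]
print(f"Sa = {sa(scan):.3f} um")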
See also
Pitting corrosion
Corrosion
References
Corrosion
Materials degradation | Micro pitting | [
"Chemistry",
"Materials_science",
"Engineering"
] | 395 | [
"Materials science stubs",
"Metallurgy",
"Materials science",
"Corrosion",
"Electrochemistry",
"Electrochemistry stubs",
"Materials degradation",
"Physical chemistry stubs",
"Chemical process stubs"
] |