id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
4,248,526 | https://en.wikipedia.org/wiki/Computational%20aeroacoustics | Computational aeroacoustics is a branch of aeroacoustics that aims to analyze the generation of noise by turbulent flows through numerical methods.
History
The origin of computational aeroacoustics can most likely be dated back to the middle of the 1980s, with a publication by Hardin and Lamkin, who claimed that "[...] the field of computational fluid mechanics has been advancing rapidly in the past few years and now offers the hope that "computational aeroacoustics," where noise is computed directly from a first principles determination of continuous velocity and vorticity fields, might be possible, [...]"
In a later publication from 1986 the same authors introduced the abbreviation CAA. The term was initially used for a low-Mach-number approach (expansion of the acoustic perturbation field about an incompressible flow), as described under EIF. In the early 1990s the growing CAA community picked up the term and used it extensively for any kind of numerical method describing the noise radiation from an aeroacoustic source or the propagation of sound waves in an inhomogeneous flow field. Such numerical methods can be far-field integration methods (e.g. FW-H) as well as direct numerical methods optimized for the solution of a mathematical model describing the aerodynamic noise generation and/or propagation. With the rapid development of computational resources this field has undergone spectacular progress during the last three decades.
Methods
Direct numerical simulation (DNS) approach to CAA
The compressible Navier–Stokes equations describe both the flow field and the aerodynamically generated acoustic field, so both may be solved for directly. This requires very high numerical resolution because of the large difference between the length scales of the acoustic variables and those of the flow variables. It is computationally very demanding and unsuitable for any commercial use.
Hybrid approach
In this approach the computational domain is split into different regions, such that the governing acoustic or flow field can be solved with different equations and numerical techniques. This involves using two different numerical solvers: first a dedicated computational fluid dynamics (CFD) tool, and secondly an acoustic solver. The flow field is then used to calculate the acoustical sources. Both steady-state (RANS, SNGR (Stochastic Noise Generation and Radiation), ...) and transient (DNS, LES, DES, URANS, ...) fluid field solutions can be used. These acoustical sources are provided to the second solver, which calculates the acoustical propagation. Acoustic propagation can be calculated using one of the following methods:
Integral methods
Lighthill's analogy
Kirchhoff integral
FW-H
LEE
Pseudospectral
EIF
APE
Integral methods
There are multiple methods based on a known solution of the acoustic wave equation to compute the acoustic far field of a sound source. Because a general solution for wave propagation in free space can be written as an integral over all sources, these solutions are summarized as integral methods. The acoustic sources have to be known from a separate calculation (e.g. a finite element simulation of a moving mechanical system or a CFD simulation of the sources in a moving medium). The integral is taken over all sources at the retarded time (source time), i.e. the time at which the source emitted the signal that arrives now at a given observer position. Common to all integral methods is that they cannot account for changes in the speed of sound or the mean flow speed between the source and the observer position, as they use a theoretical solution of the wave equation. When applying Lighthill's theory to the Navier–Stokes equations of fluid mechanics, one obtains volumetric sources, whereas the other two analogies provide the far-field information based on a surface integral. Acoustic analogies can be very efficient and fast, as the known solution of the wave equation is used; a far-away observer takes no longer to evaluate than a very close one. Common to the application of all analogies is the integration over a large number of contributions, which can lead to additional numerical problems (addition/subtraction of many large numbers with a result close to zero). Furthermore, when applying an integral method, the source domain is usually limited in some way. While in theory the sources outside have to be zero, the application cannot always fulfill this condition. Especially in connection with CFD simulations, this leads to large cut-off errors. These cut-off errors can be minimized by damping the source gradually to zero at the exit of the domain or by adding additional terms to correct this end effect.
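As a minimal illustration of the retarded-time evaluation described above (not taken from any particular solver), the sketch below sums free-space point-monopole contributions $p(\mathbf{x},t) = \dot q(t - r/c_0)/(4\pi r)$ at an observer; the source positions, strength histories, observer location and sound speed are all illustrative assumptions.

```python
# Illustrative sketch: far-field pressure from point monopoles evaluated at the
# retarded time t - r/c0. All numerical values below are assumptions.
import numpy as np

c0 = 343.0                                    # assumed speed of sound [m/s]
t = np.linspace(0.0, 0.05, 5000)              # observer time axis [s]
observer = np.array([10.0, 0.0, 0.0])         # assumed far-field observer position [m]
sources = [                                   # (position [m], strength history q(tau))
    (np.array([0.0, 0.0, 0.0]), lambda tau: 1e-4 * np.sin(2 * np.pi * 500 * tau)),
    (np.array([0.1, 0.0, 0.0]), lambda tau: 5e-5 * np.sin(2 * np.pi * 800 * tau)),
]

def monopole_pressure(t, sources, observer, c0, dt=1e-7):
    """Sum p = q'(t - r/c0) / (4*pi*r) over all point sources."""
    p = np.zeros_like(t)
    for pos, q in sources:
        r = np.linalg.norm(observer - pos)    # source-observer distance
        tau = t - r / c0                      # retarded (source) time
        qdot = (q(tau + dt) - q(tau - dt)) / (2 * dt)  # dq/dtau by central difference
        p += qdot / (4 * np.pi * r)
    return p

p = monopole_pressure(t, sources, observer, c0)
print("peak acoustic pressure at observer: %.3e Pa" % p.max())
```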
Lighthill's analogy
Also called the 'acoustic analogy'. To obtain Lighthill's aeroacoustic analogy, the governing Navier–Stokes equations are rearranged. The left-hand side is a wave operator, applied to the density perturbation or the pressure perturbation, respectively. The right-hand side is then identified as the acoustic sources in a fluid flow. As Lighthill's analogy follows directly from the Navier–Stokes equations without simplification, all sources are present. Some of the sources are then identified as turbulent or laminar noise. The far-field sound pressure is given in terms of a volume integral over the domain containing the sound source. The source term always includes both physical sources and sources that describe the propagation in an inhomogeneous medium.
The wave operator of Lighthill's analogy is limited to constant flow conditions outside the source zone; no variation of density, speed of sound or Mach number is allowed. Different mean flow conditions are identified by the analogy as strong sources of opposite sign once an acoustic wave passes through them: part of the acoustic wave is removed by one source and a new wave is radiated to correct for the different wave speed. This often leads to very large volumes containing strong sources. Several modifications to Lighthill's original theory have been proposed to account for the sound–flow interaction and other effects. To improve on Lighthill's analogy, subsequent analogies consider different quantities inside the wave operator as well as different wave operators. All of them obtain modified source terms, which sometimes allow a clearer view of the "real" sources. The acoustic analogies of Lilley, Pierce, Howe and Möhring are only some examples of aeroacoustic analogies based on Lighthill's ideas. All acoustic analogies require a volume integration over a source term.
The major difficulty with the acoustic analogy, however, is that the sound source is not compact in supersonic flow. Errors can be encountered in calculating the sound field unless the computational domain can be extended in the downstream direction beyond the location where the sound source has completely decayed. Furthermore, an accurate account of the retarded-time effect requires keeping a long record of the time history of the converged solutions of the sound source, which again represents a storage problem. For realistic problems, the required storage can reach the order of 1 terabyte of data.
Kirchhoff integral
Kirchhoff and Helmholtz showed that the radiation of sound from a limited source region can be described by enclosing this source region with a control surface, the so-called Kirchhoff surface. The sound field inside or outside the surface, where no sources are allowed and the wave operator on the left-hand side applies, can then be produced as a superposition of monopoles and dipoles on the surface. The theory follows directly from the wave equation. The source strengths of the monopoles and dipoles on the surface can be calculated if the normal velocity (for the monopoles) and the pressure (for the dipoles) on the surface are known, respectively. A modification of the method even allows the pressure on the surface to be calculated from the normal velocity alone. However, the modification that avoids requiring the acoustic pressure on the surface leads to problems when considering an enclosed volume at its resonant frequencies, which is a major issue in implementations of the method. The Kirchhoff integral method finds application, for instance, in boundary element methods (BEM). A non-zero flow velocity is accounted for by considering a frame of reference moving with the outer flow speed, in which the acoustic wave propagation takes place. Repeated application of the method can account for obstacles: first the sound field on the surface of the obstacle is calculated, and then the obstacle is introduced by adding sources on its surface that cancel the normal velocity on the surface of the obstacle. Variations of the mean flow field (speed of sound, density and velocity) can be taken into account by a similar method (e.g. dual reciprocity BEM).
FW-H
The integration method of Ffowcs Williams and Hawkings is based on Lighthill's acoustic analogy. However, through some mathematical modifications under the assumption of a limited source region enclosed by a control surface (the FW-H surface), the volume integral is avoided; surface integrals over monopole and dipole sources remain. Unlike in the Kirchhoff method, these sources follow directly from the Navier–Stokes equations through Lighthill's analogy. Sources outside the FW-H surface can be accounted for by an additional volume integral over quadrupole sources following from the Lighthill tensor. However, under the same assumptions as Kirchhoff's linear theory, the FW-H method reduces to the Kirchhoff method.
Linearized Euler Equations
Considering small disturbances superimposed on a uniform mean flow of density $\rho_0$, pressure $p_0$ and velocity $u_0$ along the x-axis, the linearized Euler equations for a two-dimensional model are written as:

$$\frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}}{\partial x} + \frac{\partial \mathbf{G}}{\partial y} = \mathbf{H},$$

where

$$\mathbf{U} = \begin{bmatrix} \rho \\ u \\ v \\ p \end{bmatrix}, \qquad
\mathbf{F} = \begin{bmatrix} \rho u_0 + \rho_0 u \\ u_0 u + p/\rho_0 \\ u_0 v \\ u_0 p + \gamma p_0 u \end{bmatrix}, \qquad
\mathbf{G} = \begin{bmatrix} \rho_0 v \\ 0 \\ p/\rho_0 \\ \gamma p_0 v \end{bmatrix},$$

where $\rho$, $u$, $v$ and $p$ are the acoustic field variables, $\gamma$ is the ratio of specific heats (1.4 for air at 20 °C), and the source term $\mathbf{H}$ on the right-hand side represents distributed unsteady sources.
The application of LEE can be found in engine noise studies.
For high Mach number flows in compressible regimes, the acoustic propagation may be influenced by non-linearities and the LEE may no longer be the appropriate mathematical model.
Pseudospectral
A Fourier pseudospectral time-domain method can be applied to wave propagation problems pertinent to computational aeroacoustics. The original algorithm of the Fourier pseudospectral time-domain method works for periodic problems without interaction with physical boundaries. A slip wall boundary condition, combined with a buffer zone technique, has been proposed to solve some non-periodic aeroacoustic problems. Compared to other computational methods, the pseudospectral method is preferred for its high-order accuracy.
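As a minimal sketch of the idea (assuming a periodic domain and the simple one-dimensional advection equation $u_t + c\,u_x = 0$ as a stand-in for linear wave propagation), spatial derivatives are evaluated spectrally via the FFT and time integration uses classical RK4; the grid size, pulse shape and time step are arbitrary assumptions.

```python
# Minimal Fourier pseudospectral time-domain sketch for u_t + c u_x = 0 on a
# periodic domain. The spectral derivative is exact for resolved wavenumbers.
import numpy as np

N, L, c = 256, 2.0 * np.pi, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
u = np.exp(-40.0 * (x - np.pi) ** 2)             # initial Gaussian pulse

def dudx(u):
    """Spectral derivative: transform, multiply by i*k, transform back."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

def rhs(u):
    return -c * dudx(u)

dt, nsteps = 1.0e-3, 2000
for _ in range(nsteps):                          # classical fourth-order Runge-Kutta
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Exact solution: the initial pulse advected by c * nsteps * dt (with wrap-around).
shift = (x - c * nsteps * dt - np.pi + L / 2.0) % L - L / 2.0
print("max error vs. exact advected pulse:", np.abs(u - np.exp(-40.0 * shift ** 2)).max())
```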
EIF
Expansion about Incompressible Flow
APE
Acoustic Perturbation Equations
Refer to the paper "Acoustic Perturbation Equations Based on Flow Decomposition via Source Filtering" by R. Ewert and W. Schröder.
See also
Aeroacoustics
Acoustic theory
References
Sources
Lighthill, M. J., "A General Introduction to Aeroacoustics and Atmospheric Sounds", ICASE Report 92-52, NASA Langley Research Centre, Hampton, VA, 1992
External links
Examples in Aeroacoustics from NASA
Computational Aeroacoustics at the Ecole Centrale de Lyon
Computational Aeroacoustics at the University of Leuven
Computational Aeroacoustics at Technische Universität Berlin
A CAA lecture script of Technische Universität Berlin
Computational fluid dynamics
Acoustics
Aerodynamics
Mechanics
Computational fields of study | Computational aeroacoustics | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,332 | [
"Computational fields of study",
"Computational fluid dynamics",
"Classical mechanics",
"Acoustics",
"Computational physics",
"Aerodynamics",
"Mechanics",
"Computing and society",
"Mechanical engineering",
"Aerospace engineering",
"Fluid dynamics"
] |
14,721,784 | https://en.wikipedia.org/wiki/Coupon%20collector%27s%20problem | In probability theory, the coupon collector's problem refers to mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given product (e.g., breakfast cereals) contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as $\Theta(n \log n)$. For example, when n = 50 it takes about 225 trials on average to collect all 50 coupons.
Solution
Via generating functions
By definition of the Stirling numbers of the second kind, the probability that exactly $t$ draws are needed is

$$\operatorname{P}(T = t) = \frac{n!}{n^{t}} \begin{Bmatrix} t-1 \\ n-1 \end{Bmatrix}.$$

By manipulating the generating function of the Stirling numbers, we can explicitly calculate all moments of $T$; in general, the $k$-th moment can be obtained by repeatedly applying the derivative operator to the generating function.
For example, the 0th moment is $\sum_{t} \operatorname{P}(T = t) = 1$, and the 1st moment is $\operatorname{E}(T)$, which can be explicitly evaluated to $n H_n$, etc.
Calculating the expectation
Let time T be the number of draws needed to collect all n coupons, and let ti be the time to collect the i-th coupon after i − 1 coupons have been collected. Then $T = t_1 + \cdots + t_n$. Think of T and ti as random variables. Observe that the probability of collecting a new coupon is $p_i = \frac{n - i + 1}{n}$. Therefore, $t_i$ has a geometric distribution with expectation $\frac{1}{p_i} = \frac{n}{n - i + 1}$. By the linearity of expectations we have:

$$\operatorname{E}(T) = \operatorname{E}(t_1) + \operatorname{E}(t_2) + \cdots + \operatorname{E}(t_n) = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n \sum_{i=1}^{n} \frac{1}{i} = n H_n.$$
Here $H_n$ is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:

$$\operatorname{E}(T) = n H_n = n \log n + \gamma n + \frac{1}{2} + O(1/n),$$

where $\gamma \approx 0.5772156649$ is the Euler–Mascheroni constant.
Using the Markov inequality to bound the desired probability:

$$\operatorname{P}(T \geq c\, n H_n) \leq \frac{1}{c}.$$
The above can be modified slightly to handle the case when we've already collected some of the coupons. Let k be the number of coupons already collected, then:

$$\operatorname{E}(T_k) = \operatorname{E}(t_{k+1}) + \operatorname{E}(t_{k+2}) + \cdots + \operatorname{E}(t_n) = n \sum_{i=1}^{n-k} \frac{1}{i} = n H_{n-k}.$$

And when $k = 0$ we get the original result $\operatorname{E}(T) = n H_n$.
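As an illustrative check of the expectation just derived (the simulation itself is not part of the original analysis; the sample size and random seed are arbitrary assumptions), the empirical mean for n = 50 can be compared with $nH_n$ and its asymptotic approximation:

```python
# Simulate the coupon collector's problem and compare with n * H_n.
import numpy as np

rng = np.random.default_rng(0)

def draws_to_collect_all(n, rng):
    """Draw coupons uniformly with replacement until all n types have appeared."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(int(rng.integers(n)))
        draws += 1
    return draws

n, trials = 50, 2000
samples = [draws_to_collect_all(n, rng) for _ in range(trials)]
H_n = sum(1.0 / i for i in range(1, n + 1))
print("empirical mean        :", np.mean(samples))                  # roughly 225
print("exact expectation nH_n:", n * H_n)                           # about 224.96
print("n ln n + gamma n + 1/2:", n * np.log(n) + 0.5772156649 * n + 0.5)
```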
Calculating the variance
Using the independence of the random variables $t_i$, we obtain:

$$\operatorname{Var}(T) = \operatorname{Var}(t_1) + \cdots + \operatorname{Var}(t_n) = \sum_{i=1}^{n} \frac{1 - p_i}{p_i^2} \leq \sum_{i=1}^{n} \frac{n^2}{i^2} \leq n^2 \sum_{i=1}^{\infty} \frac{1}{i^2} = \frac{\pi^2}{6} n^2,$$

since $\sum_{i=1}^{\infty} \frac{1}{i^2} = \frac{\pi^2}{6}$ (see Basel problem).
Bound the desired probability using the Chebyshev inequality:

$$\operatorname{P}\left(|T - n H_n| \geq c\, n\right) \leq \frac{\pi^2}{6 c^2}.$$
Tail estimates
A stronger estimate for the upper tail can be obtained as follows. Let $Z_i^r$ denote the event that the $i$-th coupon was not picked in the first $r$ trials. Then

$$\operatorname{P}\left(Z_i^r\right) = \left(1 - \frac{1}{n}\right)^r \leq e^{-r/n}.$$

Thus, for $r = \beta n \log n$, we have $\operatorname{P}\left(Z_i^r\right) \leq e^{-(\beta n \log n)/n} = n^{-\beta}$. Via a union bound over the $n$ coupons, we obtain

$$\operatorname{P}(T > \beta n \log n) \leq \operatorname{P}\left(\bigcup_i Z_i^{\beta n \log n}\right) \leq n \cdot n^{-\beta} = n^{1-\beta}.$$
Extensions and generalizations
Pierre-Simon Laplace, but also Paul Erdős and Alfréd Rényi, proved the limit theorem for the distribution of T. This result is a further extension of the previous bounds:

$$\operatorname{P}(T < n \log n + c n) \to e^{-e^{-c}}, \qquad \text{as } n \to \infty,$$

which is a Gumbel distribution. A simple proof by martingales is in the next section.
Donald J. Newman and Lawrence Shepp gave a generalization of the coupon collector's problem when m copies of each coupon need to be collected. Let Tm be the first time m copies of each coupon are collected. They showed that the expectation in this case satisfies:

$$\operatorname{E}(T_m) = n \log n + (m-1)\, n \log\log n + O(n), \qquad \text{as } n \to \infty.$$
Here m is fixed. When m = 1 we get the earlier formula for the expectation.
A common generalization, also due to Erdős and Rényi, is:

$$\operatorname{P}\left(T_m < n \log n + (m-1)\, n \log\log n + c n\right) \to e^{-e^{-c}/(m-1)!}, \qquad \text{as } n \to \infty.$$
In the general case of a nonuniform probability distribution, according to Philippe Flajolet et al.,

$$\operatorname{E}(T) = \int_0^\infty \left(1 - \prod_{i=1}^{m} \left(1 - e^{-p_i t}\right)\right) dt.$$

This is equal to

$$\operatorname{E}(T) = \sum_{q=0}^{m-1} (-1)^{m-1-q} \sum_{|J| = q} \frac{1}{1 - P_J},$$

where m denotes the number of coupons to be collected and $P_J$ denotes the probability of getting any coupon in the set of coupons J.
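As an illustrative numerical check of the nonuniform-case integral above (the coupon probabilities below are an arbitrary assumption, and SciPy is used only for the quadrature), the formula can be compared against a direct simulation:

```python
# Compare E[T] = integral_0^inf (1 - prod_i (1 - exp(-p_i t))) dt with simulation.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
p = np.array([0.4, 0.3, 0.2, 0.07, 0.03])      # assumed nonuniform coupon probabilities

expectation, _ = quad(lambda t: 1.0 - np.prod(1.0 - np.exp(-p * t)), 0.0, np.inf)

def draws_to_collect_all(p, rng):
    """Draw coupons with probabilities p until every type has been seen."""
    seen, draws = set(), 0
    while len(seen) < len(p):
        seen.add(int(rng.choice(len(p), p=p)))
        draws += 1
    return draws

samples = [draws_to_collect_all(p, rng) for _ in range(20000)]
print("integral formula :", expectation)
print("simulation mean  :", np.mean(samples))
```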
See also
McDonald's Monopoly – an example of the coupon collector's problem that further increases the challenge by making some coupons of the set rarer
Watterson estimator
Birthday problem
Notes
References
.
.
.
.
.
.
.
External links
"Coupon Collector Problem" by Ed Pegg, Jr., the Wolfram Demonstrations Project. Mathematica package.
How Many Singles, Doubles, Triples, Etc., Should The Coupon Collector Expect?, a short note by Doron Zeilberger.
Articles containing proofs
Gambling mathematics
Probability theorems
Probability problems | Coupon collector's problem | [
"Mathematics"
] | 826 | [
"Theorems in probability theory",
"Probability problems",
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems"
] |
14,722,571 | https://en.wikipedia.org/wiki/Analog%20sequencer | An analog sequencer is a music sequencer constructed from analog (analogue) electronics, invented in the first half of the 20th century.
Raymond Scott designed and constructed some of the first electro-mechanical music sequencers in the 1940s, and also invented the first electronic sequencer, using thyratrons and relays. Incidentally, computer music began in 1951 with music sequencing, and its applications later expanded into music composition and sound generation. However, the RCA Mark II Sound Synthesizer of 1957 was still controlled indirectly via a punch-tape system similar to piano rolls, a kind of mechanical sequencer.
Also, in earlier electronic music, artists used sound-on-film technology to generate sound waves as well as control sequences of notes.
At its most basic, an analog sequencer consists of a bank of potentiometers and a "clock" (pulse generator) connected to a sequential switch, which steps through these potentiometers one at a time and then cycles back to the beginning. The output from the above is fed (as a control voltage and gate pulse) to a synthesizer. By "tuning" the potentiometers, a short repetitive rhythmic motif or riff can be set up.
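Purely as an illustration of the stepping behaviour just described (this is a toy software model, not a description of any real hardware; the step voltages and clock rate are arbitrary assumptions):

```python
# Toy model of an 8-step analog sequencer: a clock steps through a bank of
# "potentiometer" control voltages, emitting a CV and gate pulse per step and
# cycling back to the beginning.
import itertools
import time

step_voltages = [1.00, 1.25, 1.50, 1.00, 0.75, 1.25, 1.75, 1.50]   # assumed CV per step
clock_hz = 4.0                                                      # assumed clock rate

def run_sequencer(step_voltages, clock_hz, total_steps=16):
    for i, cv in zip(range(total_steps), itertools.cycle(step_voltages)):
        step = i % len(step_voltages) + 1
        print(f"step {step}: CV = {cv:.2f} V, gate pulse")
        time.sleep(1.0 / clock_hz)              # wait for the next clock pulse

run_sequencer(step_voltages, clock_hz)
```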
The most commonly used analog sequencer was the Moog 960, which was a module of the Moog modular synthesizer. It consisted of three parallel banks of eight potentiometers: the three banks could either steer three different Voltage-controlled oscillators (VCOs) to allow three-note chords in the sequence, or (for example) one row could steer pitch while the second row is patched through to the filter cutoff or VCA volume, and a third steers filter cutoff for a white noise generator (thus creating an extremely primitive electronic drum track).
Under each of the eight steps, a switch offered three options: play this step, skip this step, or loop back to the beginning. To avoid the monotony of endlessly repeated sequences, pioneering electronic musicians like Chris Franke of Tangerine Dream and Michael Hoenig would manipulate these switches in real time during performance, adding and dropping notes and beats from a sequence. Also, the "pitch" row can be patched to two or more oscillators tuned to intervals, and the oscillators mixed in and out one at a time.
Good examples of all these techniques can be heard on the Phaedra, Rubycon, Ricochet, and Encore albums of Tangerine Dream, as well as on Departure from the Northern Wasteland by Michael Hoenig.
By synchronizing two sequencers, and manipulating them individually, swirling polyrhythmic phasing patterns (as introduced in minimalist music by Steve Reich) can be set up. The title track of the Michael Hoenig album (mentioned above) is an excellent example.
An additional module (Moog 962) allowed "daisy-chaining" the three rows to form one longer 24-step sequence. In addition, a switch on the 960 itself let the third (bottom) row be used for note lengths.
The output voltage of the sequencer can be added to the output voltage of a keyboard controller, and the latter used to transpose the sequence on the fly. Klaus Schulze was particularly fond of this technique, which lays the musical foundation for tracks like "Bayreuth Return" from Timewind, "Floating" from Moondawn, and any rhythmic piece from Klaus Schulze's "analog" years. Vangelis and Jean-Michel Jarre likewise availed themselves of this technique.
Unless kept in a temperature-controlled environment and allowed to warm up, pitch stability could be problematic. On the famous opening of Phaedra, the sequencer had drifted out of tune, and one can clearly hear Chris Franke retuning the sequence by ear in real time.
In addition to the 1027 module, which is a conventional 3x10 step sequencer, the ARP 2500 was often equipped with the 1050 Mix-Sequencer module. Unlike contemporary sequencers that only generated voltages, the 1050 could also sequence audio signals. This allowed each step of the sequence to come from a completely different sound source. The 8 positions could run in sequence or be split into two independent four-step sequencers. It's easily identified by its vertical column of 8 square white buttons that light up to indicate the active step(s).
Analog sequencers have, in some respects, been replaced by digital devices and software implementations. However, there is continued interest among modular analog synthesists, who appreciate the real-time control offered by the analog sequencer, as evidenced by the 'Oberkorn' machine by Analog Solutions, amongst others.
Various analog sequencers
See also
Music sequencer
Modular synthesizer
Notes
External links
Silicon sequences, a video clip demonstrating realtime sequence(r) manipulation
Images and specifications of Moog 960 clone
Synthesiser modules
Music sequencers | Analog sequencer | [
"Engineering"
] | 1,009 | [
"Music sequencers",
"Automation"
] |
14,723,619 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%20inhibitor%201C | Cyclin-dependent kinase inhibitor 1C (p57, Kip2), also known as CDKN1C, is a protein which in humans is encoded by the CDKN1C imprinted gene.
Function
Cyclin-dependent kinase inhibitor 1C is a tight-binding inhibitor of several G1 cyclin/Cdk complexes and a negative regulator of cell proliferation. Mutations of CDKN1C are implicated in sporadic cancers and Beckwith-Wiedemann syndrome suggesting that it is a tumor suppressor candidate.
CDKN1C is a tumor suppressor human gene on chromosome 11 (11p15) and belongs to the cip/kip gene family. It encodes a cell cycle inhibitor that binds to G1 cyclin-CDK complexes. Thus p57KIP2 causes arrest of the cell cycle in G1 phase.
CDKN1C was found to lead to cancer cell dormancy; its gene expression is regulated through the activity of glucocorticoid receptors (GRs) through chromatin remodelling mediated by SWI/SNF.
Research Methods
Since it has been identified that mutations in this tumor-suppressing gene can have dramatic effects in a newborn, such as macroglossia, there has been considerable research to determine its genetic significance. CDKN1C is prone to error during the process of gene imprinting, a process that works in concert with DNA methylation. Imprinting makes the gene transcriptionally silent on the paternal side, allowing the maternal copy to be active. If this gene fails to be properly methylated, or acquires a mutation, there will be a lack of cell-cycle suppression, leading to pediatric tumor growth.
Research on this gene has involved different sequencing methods such as Sanger sequencing, a three-step process that involves PCR, gel electrophoresis, and computer analysis to determine DNA sequences. Sequencing can be helpful in identifying base-pair mutations. One study assessing the phenotypic effects of mutations in this gene sequenced a cohort of individuals known to be affected by a mutation in it, and found 37 mutations associated with 38 different pedigrees. This showed that mutations in CDKN1C on chromosome 11 do in fact have phenotypic effects on individuals. These effects are further discussed through the different clinical presentations that can occur.
Clinical significance
A mutation of this gene may lead to loss of control over the cell cycle leading to uncontrolled cellular proliferation. p57KIP2 has been associated with Beckwith-Wiedemann syndrome (BWS) which is characterized by increased risk of tumor formation in childhood. Loss-of-function mutations in this gene have also been shown associated to the IMAGe syndrome (Intrauterine growth restriction, Metaphyseal dysplasia, Adrenal hypoplasia congenita, and Genital anomalies). Complete hydatidiform moles consist only of paternal DNA, and thus the cells lack p57 expression as the gene is paternally imprinted (silenced). Immunohistochemical stains for p57 can aid with the diagnosis of hydatidiform moles.
Interactions
Cyclin-dependent kinase inhibitor 1C has been shown to interact with:
LIMK1,
MYBL2,
MyoD, and
PCNA.
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Beckwith-Wiedemann Syndrome
Cell cycle
Tumor suppressor genes | Cyclin-dependent kinase inhibitor 1C | [
"Biology"
] | 740 | [
"Cell cycle",
"Cellular processes"
] |
14,723,907 | https://en.wikipedia.org/wiki/Calcium-activated%20potassium%20channel%20subunit%20alpha-1 | Calcium-activated potassium channel subunit alpha-1 also known as large conductance calcium-activated potassium channel, subfamily M, alpha member 1 (KCa1.1), or BK channel alpha subunit, is a voltage gated potassium channel encoded by the KCNMA1 gene and characterized by their large conductance of potassium ions (K+) through cell membranes.
Function
BK channels are activated (opened) by changes in membrane electrical potential and/or by increases in concentration of intracellular calcium ion (Ca2+). Opening of BK channels allows K+ to passively flow through the channel, down the electrochemical gradient. Under typical physiological conditions, this results in an efflux of K+ from the cell, which leads to cell membrane hyperpolarization (a decrease in the electrical potential across the cell membrane) and a decrease in cell excitability (a decrease in the probability that the cell will transmit an action potential).
BK channels are essential for the regulation of several key physiological processes including smooth muscle tone and neuronal excitability. They control the contraction of smooth muscle and are involved with the electrical tuning of hair cells in the cochlea. BK channels also contribute to the behavioral effects of ethanol in the worm C. elegans under high concentrations (> 100 mM, or approximately 0.50% BAC). It remains to be determined if BK channels contribute to intoxication in humans.
Structure
BK channels have a tetrameric structure. Each monomer of the channel-forming alpha subunit is the product of the KCNMA1 gene. Modulatory beta subunits (encoded by KCNMB1, KCNMB2, KCNMB3, or KCNMB4) can associate with the tetrameric channel. Alternatively spliced transcript variants encoding different isoforms have been identified.
Each BK channel alpha subunit consists of (from N- to C-terminal):
A unique transmembrane domain (S0) that precedes the 6 transmembrane domains (S1-S6) conserved in all voltage-dependent K+ channels.
A voltage sensing domain (S1-S4).
A K+ channel pore domain (S5, selectivity filter, and S6).
A cytoplasmic C-terminal domain (CTD) consisting of a pair of RCK domains that assemble into an octameric gating ring on the intracellular side of the tetrameric channel. The CTD contains four primary binding sites for Ca2+, called "calcium bowls", encoded within the second RCK domain of each monomer.
Available X-ray structures include:
– Open structure of the BK channel gating ring
– Crystal structure of the human BK gating apparatus
– Structure of the intracellular gating ring from the human high-conductance Ca2+ gated K+ channel (BK Channel)
Pharmacology
BK channels are pharmacological targets for the treatment of stroke. Various pharmaceutical companies developed synthetic molecules activating these channels in order to prevent excessive neurotoxic calcium entry in neurons. But BMS-204352 (MaxiPost) a molecule developed by Bristol-Myers Squibb failed to improve clinical outcome in stroke patients compared to placebo. BK channels have also been found to be activated by exogenous pollutants and endogenous gasotransmitters carbon monoxide and hydrogen sulphide.
BK channels are blocked by tetraethylammonium (TEA), paxilline and iberiotoxin.
Related conditions
Researchers have identified a rare disease in humans caused by mutations in the gene. KCNMA1-linked channelopathy can cause neurological conditions like seizures and movement disorders. An episode of the Diagnosis TV show, based on a column in the New York Times, was about a young girl with a KCNMA1 disorder that caused transient episodes of muscle weakness.
See also
BK channel
Calcium-activated potassium channel
Voltage-gated potassium channel
References
Further reading
External links
Meredith Lab
Ion channels | Calcium-activated potassium channel subunit alpha-1 | [
"Chemistry"
] | 827 | [
"Neurochemistry",
"Ion channels"
] |
14,724,003 | https://en.wikipedia.org/wiki/Placental%20growth%20factor | Placental growth factor (PlGF) is a protein that in humans is encoded by the PGF gene.
Placental growth factor (PGF) is a member of the VEGF (vascular endothelial growth factor) sub-family - a key molecule in angiogenesis and vasculogenesis, in particular during embryogenesis. The main source of PGF during pregnancy is the placental trophoblast. PGF is also expressed in many other tissues, including the villous trophoblast.
The placental growth factor (PGF) gene is a protein-coding gene and a member of the vascular endothelial growth factor (VEGF) family. The PGF gene is expressed only in human umbilical vein endothelial cells (HUVE) and the placenta. PGF is ultimately associated with angiogenesis. Specifically, PGF plays a role in trophoblast growth and differentiation. Trophoblast cells, specifically extravillous trophoblast cells, are responsible for invading the uterine wall and the maternal spiral arteries. The extravillous trophoblast cells produce a blood vessel of larger diameter for the developing fetus that is independent of maternal vasoconstriction. This is essential for increased blood flow and reduced resistance. Proper development of blood vessels in the placenta is crucial for the higher blood requirement of the fetus later in pregnancy.
Under normal physiologic conditions, PGF is also expressed at a low level in other organs including the heart, lung, thyroid, and skeletal muscle.
Isoform tissue specificity
There are three isoforms of this protein: PGF-1, PGF-2, PGF-3. PGF-1 is specifically found in the colon as well as mammary carcinomas, while PGF-2 is only found in early placenta up until the 8th week of development. PGF-2 is the only isoform able to bind to heparin. PGF-3 is found mainly in placental tissues.
Clinical significance
Placental growth factor-expression within human atherosclerotic lesions is associated with plaque inflammation and neovascular growth.
Serum levels of PGF and sFlt-1 (soluble fms-like tyrosine kinase-1, also known as soluble VEGF receptor-1) are altered in women with preeclampsia. Studies show that in both early and late onset preeclampsia, maternal serum levels of sFlt-1 are higher and PGF lower in women presenting with preeclampsia. In addition, placental sFlt-1 levels were significantly increased and PGF decreased in women with preeclampsia as compared to those with uncomplicated pregnancies. This suggests that placental concentrations of sFlt-1 and PGF mirror the maternal serum changes. This is consistent with the view that the placenta is the main source of sFlt-1 and PGF during pregnancy.
PGF is a potential biomarker for preeclampsia, a condition in which blood vessels in the placenta are too narrow, resulting in high blood pressure. As mentioned before, extravillous trophoblast cells invade maternal arteries. Improper differentiation may result in hypo-invasion of these arteries and thus failure to widen enough. Studies have found low levels of PGF in women who were diagnosed with preeclampsia later in their pregnancy.
Associated diseases
Placental insufficiency, otherwise known as uteroplacental vascular insufficiency, results from insufficient blood supply to the placenta. This disease is characterized by an alteration in the PGF gene and its GPCR and ERK signaling pathways. Alterations in the PGF and the PGF receptor mRNA expression prevent the normal development of placental vasculature.
Twin-to-twin transfusion syndrome is another disease associated with the PGF gene. This is a rare disease occurring primarily in identical twins where blood from one twin is transferred to the other. Typically, the twin whose blood is being transferred is born smaller and with anemia while the other twin is born larger with too much blood and at increased risk for heart failure. The PGF gene pathways primarily affected are the TGF-Beta pathway and AKT signaling pathway.
References
Further reading
Placenta
Angiology
Growth factors | Placental growth factor | [
"Chemistry"
] | 901 | [
"Growth factors",
"Signal transduction"
] |
17,565,856 | https://en.wikipedia.org/wiki/Parthasarathy%27s%20theorem | In mathematics – and in particular the study of games on the unit square – Parthasarathy's theorem is a generalization of Von Neumann's minimax theorem. It states that a particular class of games has a mixed value, provided that at least one of the players has a strategy that is restricted to absolutely continuous distributions with respect to the Lebesgue measure (in other words, one of the players is forbidden to use a pure strategy).
The theorem is attributed to Thiruvenkatachari Parthasarathy.
Theorem
Let $X$ and $Y$ stand for the unit interval $[0,1]$; let $\mathcal{M}_X$ denote the set of probability distributions on $X$ (with $\mathcal{M}_Y$ defined similarly); and let $A_X$ denote the set of absolutely continuous distributions on $X$ (with $A_Y$ defined similarly).
Suppose that $k(x,y)$ is bounded on the unit square $0 \leq x, y \leq 1$ and that $k(x,y)$ is continuous except possibly on a finite number of curves of the form $y = \phi_i(x)$ (with $i = 1, \dots, n$), where the $\phi_i$ are continuous functions. For $\mu \in \mathcal{M}_X$ and $\lambda \in \mathcal{M}_Y$, define

$$k(\mu, \lambda) = \int_0^1 \int_0^1 k(x,y)\, d\mu(x)\, d\lambda(y).$$

Then

$$\sup_{\mu \in A_X}\, \inf_{\lambda \in \mathcal{M}_Y} k(\mu, \lambda) = \inf_{\lambda \in \mathcal{M}_Y}\, \sup_{\mu \in A_X} k(\mu, \lambda).$$
This is equivalent to the statement that the game induced by $k$ has a value. Note that one player (WLOG $X$) is forbidden from using a pure strategy.
Parthasarathy goes on to exhibit a game in which

$$\sup_{\mu \in \mathcal{M}_X}\, \inf_{\lambda \in \mathcal{M}_Y} k(\mu, \lambda) \;\neq\; \inf_{\lambda \in \mathcal{M}_Y}\, \sup_{\mu \in \mathcal{M}_X} k(\mu, \lambda),$$

which thus has no value. There is no contradiction because in this case neither player is restricted to absolutely continuous distributions (and the demonstration that the game has no value requires both players to use pure strategies).
References
T. Parthasarathy 1970. On Games over the unit square, SIAM, volume 19, number 2.
Game theory
Theorems in discrete mathematics
Theorems in measure theory | Parthasarathy's theorem | [
"Mathematics"
] | 303 | [
"Theorems in mathematical analysis",
"Mathematical theorems",
"Theorems in measure theory",
"Discrete mathematics",
"Theorems in discrete mathematics",
"Game theory",
"Mathematical problems"
] |
17,568,082 | https://en.wikipedia.org/wiki/Soil-structure%20interaction | Ground–structure interaction (SSI) consists of the interaction between soil (ground) and a structure built upon it. It is primarily an exchange of mutual stress, whereby the movement of the ground-structure system is influenced by both the type of ground and the type of structure. This is especially applicable to areas of seismic activity. Various combinations of soil and structure can either amplify or diminish movement and subsequent damage. A building on stiff ground rather than deformable ground will tend to suffer greater damage. A second interaction effect, tied to mechanical properties of soil, is the sinking of foundations, worsened by a seismic event. This phenomenon is called soil liquefaction.
Most of the civil engineering structures involve some type of structural element with direct contact with ground. When the external forces, such as earthquakes, act on these systems, neither the structural displacements nor the ground displacements, are independent of each other. The process in which the response of the soil influences the motion of the structure and the motion of the structure influences the response of the soil is termed as soil-structure interaction (SSI).
Conventional structural design methods neglect the SSI effects. Neglecting SSI is reasonable for light structures in relatively stiff soil such as low rise buildings and simple rigid retaining walls. The effect of SSI, however, becomes prominent for heavy structures resting on relatively soft soils for example nuclear power plants, high-rise buildings and elevated-highways on soft soil.
Damage sustained in recent earthquakes, such as the 1995 Kobe earthquake, have also highlighted that the seismic behavior of a structure is highly influenced not only by the response of the superstructure, but also by the response of the foundation and the ground as well. Hence, the modern seismic design codes, such as Standard Specifications for Concrete Structures: Seismic Performance Verification JSCE 2005 stipulate that the response analysis should be conducted by taking into consideration a whole structural system including superstructure, foundation and ground.
Effect of (Soil-structure interaction) SSI and SSI provisions of seismic design codes on structural responses
It is conventionally believed that SSI is a purely beneficial effect that can conveniently be neglected for conservative design. SSI provisions of seismic design codes are optional and allow designers to reduce the design base shear of buildings by considering soil-structure interaction (SSI) as a beneficial effect. The main idea behind the provisions is that the soil-structure system can be replaced with an equivalent fixed-base model with a longer period and usually a larger damping ratio. Most design codes use oversimplified design spectra, which attain constant acceleration up to a certain period and thereafter decrease monotonically with period. Considering soil-structure interaction makes a structure more flexible, increasing the natural period of the structure compared to the corresponding rigidly supported structure. Moreover, considering the SSI effect increases the effective damping ratio of the system. The smooth idealization of the design spectrum suggests a smaller seismic response with the increased natural period and effective damping ratio due to SSI, which is the main justification of the seismic design codes for reducing the design base shear when the SSI effect is considered. The same idea also forms the basis of current common seismic design codes such as ASCE 7-10 and ASCE 7-16. Although this idea, i.e. the reduction in base shear, works well for linear soil-structure systems, it has been shown that it cannot appropriately capture the effect of SSI on yielding systems. More recently, Khosravikia et al. evaluated the consequences of practicing the SSI provisions of ASCE 7-10 and those of the 2015 National Earthquake Hazards Reduction Program (NEHRP), which form the basis of the 2016 edition of the seismic design standard provided by the ASCE. They showed that the SSI provisions of both NEHRP and ASCE 7-10 result in unsafe designs for structures with surface foundations on moderately soft soils, but NEHRP slightly improves upon the current provisions for squat structures. For structures on very soft soils, both provisions yield conservative designs, where NEHRP is even more conservative. Finally, both provisions yield near-optimal designs for other systems.
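To make the period-lengthening idea concrete, the sketch below uses the classical single-degree-of-freedom flexible-base relation $\tilde{T} = T\sqrt{1 + \bar{k}/K_x + \bar{k}h^2/K_\theta}$; this particular expression and all numerical values are assumptions for illustration, not a statement of any specific code provision.

```python
# Illustrative estimate of SSI period lengthening for a single-degree-of-freedom
# structure on flexible foundation springs. All values are assumed.
import math

T = 0.5                  # fixed-base fundamental period [s]
m = 2.0e5                # effective modal mass [kg]
h = 10.0                 # effective height [m]
K_x = 8.0e8              # foundation sway (horizontal) stiffness [N/m]
K_theta = 5.0e10         # foundation rocking stiffness [N*m/rad]

k_fixed = 4.0 * math.pi ** 2 * m / T ** 2     # fixed-base lateral stiffness [N/m]
T_ssi = T * math.sqrt(1.0 + k_fixed / K_x + k_fixed * h ** 2 / K_theta)
print(f"fixed-base period    : {T:.3f} s")
print(f"flexible-base period : {T_ssi:.3f} s   (lengthened by SSI)")
```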
Detrimental effects
Using rigorous numerical analyses, Mylonakis and Gazetas have shown that the increase in the natural period of a structure due to SSI is not always beneficial, as the simplified design spectra suggest. Soft soil sediments can significantly elongate the period of seismic waves, and the increase in the natural period of the structure may lead to resonance with the long-period ground vibration. Additionally, the study showed that ductility demand can significantly increase with the increase in the natural period of the structure due to the SSI effect. The permanent deformation and failure of soil may further aggravate the seismic response of the structure.
When a structure is subjected to an earthquake excitation, it interacts with the foundation and the soil, and thus changes the motion of the ground. Soil-structure interaction broadly can be divided into two phenomena: a) kinematic interaction and b) inertial interaction. Earthquake ground motion causes soil displacement known as free-field motion. However, the foundation embedded into the soil will not follow the free field motion. This inability of the foundation to match the free field motion causes the kinematic interaction. On the other hand, the mass of the superstructure transmits the inertial force to the soil, causing further deformation in the soil, which is termed as inertial interaction.
At low level of ground shaking, kinematic effect is more dominant causing the lengthening of period and increase in radiation damping. However, with the onset of stronger shaking, near-field soil modulus degradation and soil-pile gapping limit radiation damping, and inertial interaction becomes predominant causing excessive displacements and bending strains concentrated near the ground surface resulting in pile damage near the ground level.
Observations from recent earthquakes have shown that the response of the foundation and soil can greatly influence the overall structural response. There are several cases of severe damage in structures due to SSI effects in past earthquakes. Yashinsky cites damage in a number of pile-supported bridge structures due to SSI effects in the Loma Prieta earthquake in San Francisco in 1989. Extensive numerical analyses carried out by Mylonakis and Gazetas have attributed SSI as one of the reasons behind the dramatic collapse of the Hanshin Expressway in the 1995 Kobe earthquake.
Design
The main types of foundations, based upon several building characteristics, are:
Isolated plinths (currently not feasible)
Plinths connected by foundations beams
Reverse beams
A plate (used for low-quality grounds)
The filing of foundations grounds takes place according to the mechanical properties of the grounds themselves: in Italy, for instance, according to the new earthquake-proof norm – Ordinanza 3274/2003 – you can identify the following categories:
Category A: homogeneous rock formations
Category B: compact granular or clayey soil
Category C: quite compact granular or clayey soil
Category D: not much compact granular or clayey soil
Category E: alluvial surface layer grounds (very low quality soil)
The type of foundations is selected according to the type of ground; for instance, in the case of homogeneous rock formations connected plinths are selected, while in the case of very low quality grounds plates are chosen.
For further information about the various ways of building foundations see foundation (architecture).
Both grounds and structures can be more or less deformable; their combination can or cannot cause the amplification of the seismic effects on the structure.
Ground, in fact, is a filter with respect to all the main seismic waves, as stiffer soil fosters high-frequency seismic waves while less compact soil accommodates lower frequency waves. Therefore, a stiff building, characterized by a high fundamental frequency, suffers amplified damage when built on stiff ground and then subjected to higher frequencies.
For instance, suppose there are two buildings that share the same high stiffness. They stand on two different soil types: the first, stiff and rocky—the second, sandy and deformable. If subjected to the same seismic event, the building on the stiff ground suffers greater damage.
The second interaction effect, tied to mechanical properties of soil, is about the lowering (sinking) of foundations, worsened by the seismic event itself, especially about less compact grounds. This phenomenon is called soil liquefaction.
Mitigation
The methods most used to mitigate the problem of the ground-structure interaction consist of the employment of the before-seen isolation systems and of some ground brace techniques, which are adopted above all on the low-quality ones (categories D and E).
The most diffused techniques are the jet grouting technique and the pile work technique.
The jet-grouting technique consists of injecting in the subsoil some liquid concrete by means of a drill. When this concrete hardens it forms a sort of column that consolidates the surrounding soil. This process is repeated on all areas of the structure.
The pile work technique consists of using piles, which, once inserted in the ground, support the foundation and the building above, by moving the loads or the weights towards soil layers that are deeper and therefore more compact and movement-resistant.
References
External links
Do you like to better understand what happens when seismic waves get through the ground-structure system?
Seismic soil-structure interaction
Structural engineering
Earthquake and seismic risk mitigation
Earthquake engineering
Foundations (buildings and structures)
Soil mechanics
Structural analysis
Geotechnical structures | Soil-structure interaction | [
"Physics",
"Engineering"
] | 1,925 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Structural analysis",
"Soil mechanics",
"Foundations (buildings and structures)",
"Construction",
"Civil engineering",
"Mechanical engineering",
"Aerospace engineering",
"Earthquake engineering",
"Earthquake and seismic risk mit... |
17,571,895 | https://en.wikipedia.org/wiki/Aerobiological%20engineering | Aerobiological engineering is the science of designing buildings and systems to control airborne pathogens and allergens in indoor environments. The most-common environments include commercial buildings, residences and hospitals. This field of study is important because controlled indoor climates generally tend to favor the survival and transmission of contagious human pathogens as well as certain kinds of fungi and bacteria.
Aerobiological engineering in healthcare facilities
Since healthcare facilities can house a number of different types of patients who potentially have weakened immune systems, aerobiological engineering is of significant importance to engineers of hospitals. The aerobiology that concerns designers of hospitals includes viruses, bacteria, fungi, and other microbiological products such as endotoxins, mycotoxins, and microbial volatile organic compounds (MVOCs). Bacteria and viruses, because of their small size, readily become airborne as bacterial aerosols. Even large droplets can remain suspended in the air for long periods if the upward velocity of air in enclosed spaces exceeds the particles' downward settling velocity, which is small because of their negligible mass. Because of this, adequate precautions and mitigation techniques need to be taken with indoor air quality in hospitals dealing with infectious diseases.
Ventilation systems
At a minimum, ventilation systems provide dilution and removal of airborne contaminants, which in general leads to improved indoor air quality and happier occupants. If filters are checked and replaced as needed, they can form an integral component of an immune building system designed to prevent the spread of diseases by airborne routes. They can also be used for pressurization of areas within buildings to provide contamination control.
Biocontamination in ventilation systems
Ventilation systems can contribute to the microbial loading of indoor environment by drawing in microbes from outdoor air and by creating conditions for growth. When microbes land on a wet filter that has been collecting dust, they have the perfect medium on which to grow, and if they grow through the filter they have the potential to be aerosolized and carried throughout the building via the HVAC control system.
Dilution rates
Bacteria in hospitals can be aerosolized when sick patients cough and sneeze, and because of the large number of germs produced it is necessary that the number of air changes per hour (ACH) remain high in treatment and operating rooms. The American Society of Heating, Refrigerating and Air-Conditioning Engineers typically recommends 12–25 ACH in treatment and operating rooms and 4–6 ACH in intensive-care rooms. For rooms containing tuberculosis patients, the Centers for Disease Control and Prevention recommends an ACH of 6 to 12, with exhaust air being sent through high-efficiency particulate air (HEPA) filters before being sent outside.
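As a rough illustration of what these ACH figures mean (assuming a perfectly well-mixed room, so that contaminant concentration decays as $C(t)=C_0 e^{-\mathrm{ACH}\cdot t}$ with $t$ in hours; this idealization and the removal target are assumptions):

```python
# Time to reach 99% removal of an airborne contaminant in a well-mixed room,
# for several air-change-per-hour (ACH) values.
import math

target_removal = 0.99                       # assumed removal target
for ach in (6, 12, 25):
    hours = -math.log(1.0 - target_removal) / ach
    print(f"{ach:>2} ACH: ~{hours * 60:.0f} minutes to reach 99% removal")
```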
Pressurized isolation rooms
In order to keep patients safe, hospitals use a range of technologies to combat airborne pathogens. Isolation rooms can be designed to feature positive or negative air-pressure flows. Positive-pressure rooms are used when there are patients who are extremely susceptible to disease, such as HIV patients. For these patients, it is paramount to prevent the ingress of any microorganisms, including common fungi and bacteria that may be harmless to healthy people. These systems filter the air before delivery with a HEPA filter and then pump it into the isolation room at high pressure, which forces air from the isolation room out into the hallway. In a negative-pressure system, the focus is on keeping infectious diseases isolated by controlling the airflow and directing harmful aerosols away from health care workers and other occupied areas. Negative pressure isolation rooms keep contaminants and pathogens from reaching external areas. The most common application of these rooms in the health industry today is for isolating tuberculosis patients. To do this, the air is exhausted from the room at a rate greater than that at which it is being delivered. This makes it difficult for airborne disease to go from a contaminated area to a hospital hallway, because air is constantly being drawn into the room rather than escaping from it.
Air sterilization processes
The normal means of filtration in healthcare facilities is low-efficiency air filters outside the air-handling unit, followed by HEPA (high-efficiency particulate air) filters placed after the air-handling unit. To be HEPA-certified, filters must remove particles of 0.3 μm diameter with at least 99.97-percent efficiency. Air burners sterilize air leaving contaminated isolation rooms by heating it to a high temperature for six seconds. Ultraviolet germicidal irradiation (UVGI) is another technique for special-purpose air sterilization; it is defined as electromagnetic radiation in the range of about 200 to 320 nm that is used to destroy microorganisms. When HEPA filters are used in conjunction with UV sterilization tools, the results can be extremely effective: the filter removes the bigger, hardier spores, and all that is left are the smaller microbes, which are killed more efficiently by the high-intensity UV treatment.
See also
Human habitat
Human outpost (artificially created controlled human habitat)
Legionnaires Disease
Aerobiology
References
C.S. Cox The Aerobiological Pathway of Microorganisms. Chichester G.B.: John Wiley & Sons 27, p. 118-119.
Godish, Thad. Indoor Environmental Quality. Boca Raton, FL, USA: Lewis Publishers, 2001. p. 190.
Kowalski, Wladyslaw Jan. Aerobiological Engineering Handbook. Blacklick, OH, USA: McGraw-Hill Professional Publishing, 2005. p. 6, 185, 231, 260, 528, 530.
Biological engineering
Ventilation
Human habitats | Aerobiological engineering | [
"Engineering",
"Biology"
] | 1,141 | [
"Biological engineering"
] |
17,571,980 | https://en.wikipedia.org/wiki/Pharmacopoeia%20of%20the%20People%27s%20Republic%20of%20China | The Pharmacopoeia of the People's Republic of China (PPRC) or the Chinese Pharmacopoeia (ChP), compiled by the Pharmacopoeia Commission of the Ministry of Health of the People's Republic of China, is an official compendium of drugs, covering Traditional Chinese and western medicines, which includes information on the standards of purity, description, test, dosage, precaution, storage, and the strength for each drug.
It is recognized by the World Health Organization as the official pharmacopoeia of China.
Content
The ChP, as of its tenth (2015) edition, comes in 4 volumes for both the Chinese and the English versions:
Traditional Chinese Medicine
Chemical Medicine
Biological Preparations
General rules and common inactive ingredients (new volume)
The English version is collectively coded as . The 2015 ChP requires Good Manufacturing Practices for all ChP-compliant medications and in general uses INN for English names. The Chinese version arranges medicines in ascending stroke order, while the English translations do so in alphabetical order.
History
The 1997 English version consists of two volumes:
Volume 1 (Herbal medicine), 1997
Volume 2 (Western medicine), 1997
The 1997 Chinese version (in simplified Chinese) also consists of two volumes, but the English and Chinese versions are not direct translations of each other, as they are sorted differently as is in the current edition.
A third volume was added in the 2005 version. The English edition () describes itself as a "compendium of almost all traditional Chinese medicines and most western medicines and preparations. Information is given for each drug on standards of purity, description, test, dosage, precaution, storage and strength. Key features: A total of 2691 monographs: 992 for traditional Chinese medicines and 1699 for modern western drugs.
"Volume I contains monographs of Chinese material medica and pared slice, vegetable oil/fat and its extract, Chinese traditional patent medicines, single ingredient of Chinese crude drug preparations etc.;
Volume II deals with monographs of chemical drugs, antibiotics, biochemical preparations, Radiopharmaceuticals and excipients for pharmaceutical use;
Volume III contains biological products."
See also
British Pharmacopoeia
Chinese herbology
Chinese Medical Herbology and Pharmacology
European Pharmacopoeia
Pharmacopoeia
The International Pharmacopoeia
United States Pharmacopeia
References
External links
The Pharmacopoeia of the People's Republic of China, 2020 Version.
Pharmacopoeias | Pharmacopoeia of the People's Republic of China | [
"Chemistry"
] | 526 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
17,573,081 | https://en.wikipedia.org/wiki/Modal%20matrix | In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors.
Specifically, the modal matrix $M$ for the matrix $A$ is the n × n matrix formed with the eigenvectors of $A$ as columns in $M$. It is utilized in the similarity transformation

$$D = M^{-1} A M,$$

where $D$ is an n × n diagonal matrix with the eigenvalues of $A$ on the main diagonal of $D$ and zeros elsewhere. The matrix $D$ is called the spectral matrix for $A$. The eigenvalues must appear left to right, top to bottom in the same order as their corresponding eigenvectors are arranged left to right in $M$.
Example
The matrix
has eigenvalues and corresponding eigenvectors
A diagonal matrix , similar to is
One possible choice for an invertible matrix such that is
Note that since eigenvectors themselves are not unique, and since the columns of both $M$ and $D$ may be interchanged, it follows that neither $M$ nor $D$ is unique.
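A small numerical illustration of the construction above (the matrix is an arbitrary assumption; NumPy is used to obtain the eigenvectors): form the modal matrix $M$ from the eigenvectors and verify that $M^{-1}AM$ is diagonal.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # assumed example matrix with eigenvalues 5 and 2

eigvals, M = np.linalg.eig(A)       # columns of M are eigenvectors of A (modal matrix)
D = np.linalg.inv(M) @ A @ M        # similarity transformation yields the spectral matrix
print("eigenvalues:", eigvals)
print("M^-1 A M =\n", np.round(D, 10))
```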
Generalized modal matrix
Let $A$ be an n × n matrix. A generalized modal matrix $M$ for $A$ is an n × n matrix whose columns, considered as vectors, form a canonical basis for $A$ and appear in $M$ according to the following rules:
All Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of $M$.
All vectors of one chain appear together in adjacent columns of $M$.
Each chain appears in in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, etc.).
One can show that

$$J = M^{-1} A M,$$

where $J$ is a matrix in Jordan normal form. By premultiplying by $M$, we obtain

$$A M = M J.$$

Note that when computing these matrices, the second equation is the easier of the two to verify, since it does not require inverting a matrix.
Example
This example illustrates a generalized modal matrix with four Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order.
The matrix
has a single eigenvalue with algebraic multiplicity 7. A canonical basis for $A$ will consist of one linearly independent generalized eigenvector of rank 3 (generalized eigenvector rank; see generalized eigenvector), two of rank 2 and four of rank 1; or equivalently, one chain of three vectors, one chain of two vectors, and two chains of one vector each.
An "almost diagonal" matrix in Jordan normal form, similar to is obtained as follows:
where is a generalized modal matrix for , the columns of are a canonical basis for , and . Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both and may be interchanged, it follows that both and are not unique.
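A small sketch of the same relations using SymPy (the matrix below is an arbitrary assumption chosen to have a repeated eigenvalue with only one eigenvector, so a nontrivial Jordan chain appears); `jordan_form()` returns a generalized modal matrix $M$ and the Jordan form $J$ with $AM = MJ$:

```python
from sympy import Matrix

A = Matrix([[5, -1],
            [4,  1]])                 # characteristic polynomial (lambda - 3)^2, defective

M, J = A.jordan_form()                # SymPy returns (M, J) with A = M * J * M**-1
print(J)                              # a single 2x2 Jordan block for eigenvalue 3
print(A * M == M * J)                 # True: verifies A M = M J without inverting M
```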
Notes
References
Matrices | Modal matrix | [
"Mathematics"
] | 576 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
3,113,292 | https://en.wikipedia.org/wiki/High-level%20waste | High-level waste (HLW) is a type of nuclear waste created by the reprocessing of spent nuclear fuel. It exists in two main forms:
First and second cycle raffinate and other waste streams created by nuclear reprocessing.
Waste formed by vitrification of liquid high-level waste.
Liquid high-level waste is typically held temporarily in underground tanks pending vitrification. Most of the high-level waste created by the Manhattan Project and the weapons programs of the Cold War exists in this form because funding for further processing was typically not part of the original weapons programs. Both spent nuclear fuel and vitrified waste are considered as suitable forms for long term disposal, after a period of temporary storage in the case of spent nuclear fuel.
HLW contains many of the fission products and transuranic elements generated in the reactor core and is the type of nuclear waste with the highest activity. HLW accounts for over 95% of the total radioactivity produced in the nuclear power process. In other words, while most nuclear waste is low-level and intermediate-level waste, such as protective clothing and equipment that have been contaminated with radiation, the majority of the radioactivity produced from the nuclear power generation process comes from high-level waste.
Some countries, particularly France, reprocess commercial spent fuel.
High-level waste is very radioactive and, therefore, requires special shielding during handling and transport. Initially it also needs cooling, because it generates a great deal of heat. Most of the heat, at least after short-lived nuclides have decayed, is from the medium-lived fission products caesium-137 and strontium-90, which have half-lives on the order of 30 years.
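Because those two fission products dominate the medium-term heat load, their roughly 30-year half-lives set the cooling timescale. The snippet below is a back-of-the-envelope illustration only; the half-life figures are standard values and the calculation is plain exponential decay, not a model of any real waste package.

```python
# Approximate half-lives in years for the dominant medium-lived heat sources.
HALF_LIFE_YEARS = {"Cs-137": 30.1, "Sr-90": 28.8}

def remaining_fraction(isotope: str, years: float) -> float:
    """Fraction of the isotope's initial activity (and decay heat) left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS[isotope])

# After a century of storage, roughly a tenth of this heat load remains.
for isotope in HALF_LIFE_YEARS:
    print(isotope, round(remaining_fraction(isotope, 100.0), 3))
```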
A typical large 1000 MWe nuclear reactor produces 25–30 tons of spent fuel per year. If the fuel were reprocessed and vitrified, the waste volume would be only about three cubic meters per year, but the decay heat would be almost the same.
It is generally accepted that the final waste will be disposed of in a deep geological repository, and many countries have developed plans for such a site, including Finland, France, Japan, United States and Sweden.
Definitions
High-level waste is the highly radioactive waste material resulting from the reprocessing of spent nuclear fuel, including liquid waste produced directly in reprocessing and any solid material derived from such liquid waste that contains fission products in sufficient concentrations; and other highly radioactive material that is determined, consistent with existing law, to require permanent isolation.
Spent (used) reactor fuel.
Spent nuclear fuel is used reactor fuel that is no longer efficient in creating electricity, because its fission process has slowed due to a build-up of reaction poisons. However, it is still thermally hot, highly radioactive, and potentially harmful.
Waste materials from reprocessing.
Materials for nuclear weapons are acquired by reprocessing spent nuclear fuel from breeder reactors. Reprocessing is a method of chemically treating spent fuel to separate out uranium and plutonium. The byproduct of reprocessing is a highly radioactive sludge residue.
Storage
High-level radioactive waste is stored for 10 or 20 years in spent fuel pools, and then can be put in dry cask storage facilities.
In 1997, in the 20 countries which account for most of the world's nuclear power generation, spent fuel storage capacity at the reactors was 148,000 tonnes, with 59% of this utilized. Away-from-reactor storage capacity was 78,000 tonnes, with 44% utilized.
See also
Radioactive waste
Low-level waste
Transuranic waste
Mixed waste
Into Eternity (film)
Notes
References
Fentiman, Audeen W. and James H. Saling. Radioactive Waste Management. New York: Taylor & Francis, 2002. Second ed.
Large, John H. Risks and Hazards arising the Transportation of Irradiated Fuel and Nuclear Materials in the United Kingdom R3144-A1, March 2006
External links
NRC Backgrounder on Radioactive Waste
Radioactive waste | High-level waste | [
"Chemistry",
"Technology"
] | 829 | [
"Environmental impact of nuclear power",
"Radioactive waste",
"Hazardous waste",
"Radioactivity"
] |
3,113,450 | https://en.wikipedia.org/wiki/Candle%20clock | A candle clock is a thin candle with consistently spaced marking that, when burned, indicates the passage of periods of time. While no longer used today, candle clocks provided an effective way to tell time indoors, at night, or on a cloudy day.
History
It is unknown where and when candle clocks were first used. The earliest reference to their use occurs in a Chinese poem by You Jiangu (AD 520). Here, the graduated candle supplied a means of determining time at night. Similar candles were used in Japan until the early 10th century.
You Jiangu's device consisted of six candles made from 72 pennyweights (24 grains each) of wax, each being 12 inches high, of uniform thickness, and divided into 12 sections of one inch each. Each candle burned away completely in four hours, making each marking 20 minutes. The candles were placed for protection inside cases made of a wooden frame with transparent horn panels in the sides.
Similar methods of measuring time were used in medieval churches. The invention of the candle clock was attributed by the Anglo-Saxons to Alfred the Great, king of Wessex. The story of how the clock was created was narrated by Asser, who lived at Alfred's court and became his close associate. Alfred used six candles, each made from 12 pennyweights of wax and of uniform height and thickness. The candles were marked at intervals of an inch. Once lit, they were protected from the wind by being placed in a lantern made of wood and transparent horn. Each candle took 20 minutes to burn down to the next mark; the candles, burning one after the other, lasted for 24 hours.
Al-Jazari
Al-Jazari described a candle clock in 1206. It included a dial to display the time and, for the first time, employed a bayonet fitting, a fastening mechanism still used in modern times. The English engineer and historian Donald Routledge Hill described one of al-Jazari's candle clocks as follows:
References
Sources
Turner, Anthony J. The Time Museum, Volume I, Time Measuring Instruments; Part 3, Water-clocks, Sand-glasses, Fire-clocks
Candles
Chinese inventions
Clocks
Japanese inventions
ru:Огненные часы#Свечные часы | Candle clock | [
"Physics",
"Technology",
"Engineering"
] | 476 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
3,113,551 | https://en.wikipedia.org/wiki/Carbon%E2%80%93hydrogen%20bond | In chemistry, the carbon–hydrogen bond ( bond) is a chemical bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent, single bond, meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells, making them stable.
Carbon–hydrogen bonds have a bond length of about 1.09 Å (1.09 × 10−10 m) and a bond energy of about 413 kJ/mol (see table below). Using Pauling's scale—C (2.55) and H (2.2)—the electronegativity difference between these two atoms is 0.35. Because of this small difference in electronegativities, the bond is generally regarded as being non-polar. In structural formulas of molecules, the hydrogen atoms are often omitted. Compound classes consisting solely of C–H and C–C bonds are alkanes, alkenes, alkynes, and aromatic hydrocarbons. Collectively they are known as hydrocarbons.
In October 2016, astronomers reported that the very basic chemical ingredients of life—the carbon–hydrogen molecule (CH, or methylidyne radical), the carbon–hydrogen positive ion (CH+) and the carbon ion (C+)—are created, in large part, using energy from the ultraviolet light of nearby stars, rather than in other ways, such as turbulent events related to supernovae and young stars, as thought earlier.
Bond length
The length of the carbon–hydrogen bond varies slightly with the hybridisation of the carbon atom. A bond between a hydrogen atom and an sp2 hybridised carbon atom is about 0.6% shorter than between hydrogen and sp3 hybridised carbon. A bond between hydrogen and sp hybridised carbon is shorter still, about 3% shorter than sp3 C–H. This trend is illustrated by the molecular geometry of ethane, ethylene and acetylene.
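Applying the percentages quoted above to the ~1.09 Å sp3 value gives the approximate trend; the snippet below is just that arithmetic, not measured data.

```python
# Apply the quoted shortenings to the ~1.09 angstrom sp3 C-H bond length.
sp3 = 1.09  # angstroms, e.g. ethane

approx_lengths = {
    "sp3 (ethane)":    sp3,
    "sp2 (ethylene)":  sp3 * (1 - 0.006),  # about 0.6% shorter
    "sp (acetylene)":  sp3 * (1 - 0.03),   # about 3% shorter
}

for hybridisation, length in approx_lengths.items():
    print(f"{hybridisation}: {length:.3f} Å")
```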
Reactions
The C−H bond in general is very strong, so it is relatively unreactive. In several compound classes, collectively called carbon acids, the C−H bond can be sufficiently acidic for proton removal. Unactivated C−H bonds are found in alkanes and are not adjacent to a heteroatom (O, N, Si, etc.). Such bonds usually only participate in radical substitution. Many enzymes are known, however, to effect these reactions.
Although the C−H bond is one of the strongest, it varies over 30% in magnitude for fairly stable organic compounds, even in the absence of heteroatoms.
See also
Carbon–carbon bond
Carbon–nitrogen bond
Carbon–oxygen bond
Carbon–fluorine bond
References
Organic chemistry
Chemical bonding | Carbon–hydrogen bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 559 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
3,113,736 | https://en.wikipedia.org/wiki/Plate%20heat%20exchanger | A plate heat exchanger is a type of heat exchanger that uses metal plates to transfer heat between two fluids. This has a major advantage over a conventional heat exchanger in that the fluids are exposed to a much larger surface area because the fluids are spread out over the plates. This facilitates the transfer of heat, and greatly increases the speed of the temperature change. Plate heat exchangers are now common and very small brazed versions are used in the hot-water sections of millions of combination boilers. The high heat transfer efficiency for such a small physical size has increased the domestic hot water (DHW) flowrate of combination boilers. The small plate heat exchanger has made a great impact in domestic heating and hot-water. Larger commercial versions use gaskets between the plates, whereas smaller versions tend to be brazed.
The concept behind a heat exchanger is the use of pipes or other containment vessels to heat or cool one fluid by transferring heat between it and another fluid. In most cases, the exchanger consists of a coiled pipe containing one fluid that passes through a chamber containing another fluid. The walls of the pipe are usually made of metal, or another substance with a high thermal conductivity, to facilitate the interchange, whereas the outer casing of the larger chamber is made of a plastic or coated with thermal insulation, to discourage heat from escaping from the exchanger.
The world's first commercially viable plate heat exchanger (PHE) was invented by Dr Richard Seligman in 1923 and revolutionized methods of indirect heating and cooling of fluids. Dr Richard Seligman founded APV in 1910 as the Aluminum Plant & Vessel Company Limited, a specialist fabricating firm supplying welded vessels to the brewery and vegetable oil trades. Also, it set the norm for today's computer-designed thin metal plate Heat Exchangers that are used all over the world.
Design of plate and frame heat exchangers
The plate heat exchanger (PHE) is a specialized design well suited to transferring heat between medium- and low-pressure fluids. Welded, semi-welded and brazed heat exchangers are used for heat exchange between high-pressure fluids or where a more compact product is required. In place of a pipe passing through a chamber, there are instead two alternating chambers, usually thin in depth, separated at their largest surface by a corrugated metal plate. The plates used in a plate and frame heat exchanger are obtained by one piece pressing of metal plates. Stainless steel is a commonly used metal for the plates because of its ability to withstand high temperatures, its strength, and its corrosion resistance.
The plates are often spaced by rubber sealing gaskets which are cemented into a section around the edge of the plates. The plates are pressed to form troughs at right angles to the direction of flow of the liquid which runs through the channels in the heat exchanger. These troughs are arranged so that they interlink with the other plates which forms the channel with gaps of 1.3–1.5 mm between the plates. The plates are compressed together in a rigid frame to form an arrangement of parallel flow channels with alternating hot and cold fluids. The plates produce an extremely large surface area, which allows for the fastest possible transfer. Making each chamber thin ensures that the majority of the volume of the liquid contacts the plate, again aiding exchange. The troughs also create and maintain a turbulent flow in the liquid to maximize heat transfer in the exchanger. A high degree of turbulence can be obtained at low flow rates and high heat transfer coefficient can then be achieved.
As compared to shell and tube heat exchangers, the temperature approach (the smallest difference between the temperatures of the cold and hot streams) in a plate heat exchangers may be as low as 1 °C whereas shell and tube heat exchangers require an approach of 5 °C or more. For the same amount of heat exchanged, the size of the plate heat exchanger is smaller, because of the large heat transfer area afforded by the plates (the large area through which heat can travel). Increase and reduction of the heat transfer area is simple in a plate heat-exchanger, through the addition or removal of plates from the stack.
Evaluating plate heat exchangers
All plate heat exchangers look similar on the outside. The difference lies on the inside, in the details of the plate design and the sealing technologies used. Hence, when evaluating a plate heat exchanger, it is very important not only to explore the details of the product being supplied but also to analyze the level of research and development carried out by the manufacturer and the post-commissioning service and spare parts availability.
An important aspect to take into account when evaluating a heat exchanger are the forms of corrugation within the heat exchanger. There are two types: intermating and chevron corrugations. In general, greater heat transfer enhancement is produced from chevrons for a given increase in pressure drop and are more commonly used than intermating corrugations.
There are many different ways of modifying heat exchangers to increase their efficiency, so many that it is unlikely all of them will be supported by a commercial simulator. In addition, some proprietary data are never released by the manufacturers of heat-transfer enhancements. This does not mean, however, that engineers cannot make preliminary assessments of an emerging technology. Background information on several forms of modification to heat exchangers is given below. Any enhancement must still meet the main objective of being more cost-effective than a conventional heat exchanger. Fouling capacity, reliability and safety are other considerations that should be addressed.
First is periodic cleaning. Periodic cleaning (on-site cleaning) is the most efficient method to flush out all the waste and dirt that over time decreases the efficiency of the heat exchanger. This approach requires both sides of the PHE (plate heat exchanger) to be drained, followed by its isolation from the fluid in the system. From both sides, water should be flushed out until it runs completely clear. The flushing should be carried out in the opposite direction to regular operation for the best results. Once this is done, a circulation pump and a solution tank are used to pass a cleaning agent through the exchanger, while ensuring that the agent is compatible with the PHE gaskets and plates. Lastly, until the discharge stream runs clear, the system should be flushed with water again.
Optimization of plate heat exchangers
To achieve improvement in PHEs, two important factors, namely the amount of heat transfer and the pressure drop, have to be considered: the amount of heat transfer needs to be increased and the pressure drop needs to be decreased. In plate heat exchangers the corrugated plates create significant resistance to flow with high friction loss, so the design must weigh both factors.
For various ranges of Reynolds numbers, many correlations and chevron angles for plate heat exchangers exist. The plate geometry is one of the most important factors in heat transfer and pressure drop in plate heat exchangers; however, such a feature is not accurately prescribed. In corrugated plate heat exchangers, because of the narrow path between the plates, there is a large pressure drop and the flow becomes turbulent along the path. Therefore, more pumping power is required than in other types of heat exchangers, and higher heat transfer together with lower pressure drop is targeted. The shape of the plate heat exchanger is very important for industrial applications that are affected by pressure drop.
Flow distribution and heat transfer equation
Design calculations of a plate heat exchanger include flow distribution, pressure drop and heat transfer. The former is an issue of flow distribution in manifolds. A layout configuration of a plate heat exchanger can usually be simplified into a manifold system with two manifold headers for dividing and combining fluids, which can be categorized into U-type and Z-type arrangements according to the flow direction in the headers. Bassiouny and Martin developed the previous theory of design. In recent years Wang unified all the main existing models and developed a more complete theory and design tool.
The total rate of heat transfer between the hot and cold fluids passing through a plate heat exchanger may be expressed as:
Q = UA∆Tm
where U is the Overall heat transfer coefficient, A is the total plate area, and ∆Tm is the Log mean temperature difference. U is dependent upon the heat transfer coefficients in the hot and cold streams.
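As a minimal sketch of how these quantities combine, the following computes ∆Tm for counter-current flow and then Q = UA∆Tm; the numerical values of U, A and the stream temperatures are placeholders chosen for illustration, not data from any particular exchanger.

```python
import math

def log_mean_temp_diff(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log mean temperature difference (K) for counter-current flow."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if math.isclose(dt1, dt2):
        return dt1  # the LMTD formula reduces to this limit when both ends match
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Placeholder values for illustration only.
U = 4000.0    # overall heat transfer coefficient, W/(m^2 K)
A = 12.0      # total plate area, m^2
dTm = log_mean_temp_diff(80.0, 45.0, 20.0, 40.0)

Q = U * A * dTm   # total rate of heat transfer, W
print(f"LMTD = {dTm:.1f} K, Q = {Q / 1000:.0f} kW")
```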
Regular cleaning helps to avoid fouling and scaling without the heat exchanger needing to be shut down or operations disrupted. To keep performance from degrading and to extend tube service life, online cleaning (OnC) can be used as a standalone approach or in conjunction with chemical treatment. The re-circulating ball system and the brush-and-basket system are examples of OnC techniques. Offline cleaning (OfC) is another effective cleaning method that restores the performance of heat exchangers and decreases operating expenses. This method, also known as pigging, uses a bullet-shaped device that is inserted into each tube and forced down it with high air pressure. Chemical washing, hydro-blasting and hydro-lancing are other widely used methods besides OfC. Both approaches, when used regularly, restore the exchanger to its optimum efficiency before fouling and scaling can significantly degrade its performance.
Operation and maintenance costs are unavoidable for a heat exchanger, but there are several ways to minimize them. First, cost can be reduced by limiting the formation of fouling on the heat exchanger, since fouling decreases the overall heat transfer coefficient. Analyses have estimated that fouling generates operational losses of more than 4 billion dollars, the total fouling cost including capital cost, energy cost, maintenance cost and the cost of lost production. Chemical fouling inhibitors are one fouling-control method; for example, acrylic acid/hydroxypropyl acrylate (AA/HPA) and acrylic acid/sulfonic acid (AA/SA) copolymers can be used to inhibit fouling by the deposition of calcium phosphate. Deposition of fouling can also be reduced by installing the heat exchanger vertically, as gravity pulls particles away from the heat transfer surface.
Second, operation cost can be minimized by using saturated steam rather than superheated steam as the heating fluid. Superheated steam acts as an insulator and a poor heat conductor, so it is not suitable for heating applications such as a heat exchanger.
See also
Plate fin heat exchanger
Heat transfer
LMTD
References
Bibliography
External links
A list of published articles pertaining to plate heat exchangers
A screening method for the optimal selection of plate heat exchanger configurations by J.M.Pinto and J.A.W.Gut, University of São Paulo, Brazil.
Seeking the optimal design of a typical plate heat exchanger (PHE) by Athanasios G. Kanaris, Aikaterini A. Mouza and Spiros V. Paras, Aristotle University of Thessaloniki.
Heat exchangers | Plate heat exchanger | [
"Chemistry",
"Engineering"
] | 2,280 | [
"Chemical equipment",
"Heat exchangers"
] |
3,113,840 | https://en.wikipedia.org/wiki/Traction%20%28mechanics%29 | Traction, traction force or tractive force is a force used to generate motion between a body and a tangential surface, through the use of either dry friction or shear force.
It has important applications in vehicles, as in tractive effort.
Traction can also refer to the maximum tractive force between a body and a surface, as limited by available friction; when this is the case, traction is often expressed as the ratio of the maximum tractive force to the normal force and is termed the coefficient of traction (similar to coefficient of friction). It is the force which makes an object move over the surface by overcoming all the resisting forces like friction, normal loads (load acting on the tires along the negative Z axis), air resistance, rolling resistance, etc.
Definitions
Traction can be defined as:
In vehicle dynamics, tractive force is closely related to the terms tractive effort and drawbar pull, though all three terms have different definitions.
Coefficient of traction
The coefficient of traction is defined as the usable force for traction divided by the weight on the running gear (wheels, tracks etc.) i.e.:
usable traction = coefficient of traction × normal force.
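A minimal sketch of this relation follows; the coefficient values are typical order-of-magnitude figures for a tire on dry asphalt and on ice, used purely for illustration.

```python
def usable_traction(coefficient_of_traction: float, normal_force: float) -> float:
    """Usable tractive force (N) = coefficient of traction x normal force (N)."""
    return coefficient_of_traction * normal_force

# Illustrative numbers: weight on the running gear of a ~1500 kg vehicle.
weight = 1500 * 9.81                      # newtons
print(usable_traction(0.9, weight))       # dry asphalt, roughly 13 kN available
print(usable_traction(0.1, weight))       # ice, roughly 1.5 kN available
```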
Factors affecting coefficient of traction
Traction between two surfaces depends on several factors:
Material composition of each surface.
Macroscopic and microscopic shape (texture; macrotexture and microtexture)
Normal force pressing contact surfaces together.
Contaminants at the material boundary including lubricants and adhesives.
Relative motion of tractive surfaces - a sliding object (one in kinetic friction) has less traction than a non-sliding object (one in static friction).
Direction of traction relative to some coordinate system - e.g., the available traction of a tire often differs between cornering, accelerating, and braking.
For low-friction surfaces, such as off-road or ice, traction can be increased by using traction devices that partially penetrate the surface; these devices use the shear strength of the underlying surface rather than relying solely on dry friction (e.g., aggressive off-road tread or snow chains).
Traction coefficient in engineering design
In the design of wheeled or tracked vehicles, high traction between wheel and ground is more desirable than low traction, as it allows for higher acceleration (including cornering and braking) without wheel slippage. One notable exception is in the motorsport technique of drifting, in which rear-wheel traction is purposely lost during high speed cornering.
Other designs dramatically increase surface area to provide more traction than wheels can, for example in continuous track and half-track vehicles. A tank or similar tracked vehicle uses tracks to reduce the pressure on the areas of contact. A 70-ton M1A2 would sink to the point of high centering if it used round tires. The tracks spread the 70 tons over a much larger area of contact than tires would and allow the tank to travel over much softer land.
In some applications, there is a complicated set of trade-offs in choosing materials. For example, soft rubbers often provide better traction but also wear faster and have higher losses when flexed—thus reducing efficiency. Choices in material selection may have a dramatic effect. For example: tires used for track racing cars may have a life of 200 km, while those used on heavy trucks may have a life approaching 100,000 km. The truck tires have less traction and also thicker rubber.
Traction also varies with contaminants. A layer of water in the contact patch can cause a substantial loss of traction. This is one reason for grooves and siping of automotive tires.
The traction of trucks, agricultural tractors, wheeled military vehicles, etc. when driving on soft and/or slippery ground has been found to improve significantly by use of Tire Pressure Control Systems (TPCS). A TPCS makes it possible to reduce and later restore the tire pressure during continuous vehicle operation. Increasing traction by use of a TPCS also reduces tire wear and ride vibration.
See also
Anti-lock braking system
Equilibrium tide
Friction
Force (physics)
Karl A. Grosch
Rail adhesion
Road slipperiness
Sandbox (locomotive)
Tribology
Weight transfer
References
Force
Vehicle technology
Mechanics | Traction (mechanics) | [
"Physics",
"Mathematics",
"Engineering"
] | 842 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Vehicle technology",
"Mechanics",
"Mechanical engineering by discipline",
"Mechanical engineering",
"Wikipedia categories named after physical quantities",
"Matter"
] |
3,114,625 | https://en.wikipedia.org/wiki/1%2C4-Benzoquinone | 1,4-Benzoquinone, commonly known as para-quinone, is a chemical compound with the formula C6H4O2. In a pure state, it forms bright-yellow crystals with a characteristic irritating odor, resembling that of chlorine, bleach, and hot plastic or formaldehyde. This six-membered ring compound is the oxidized derivative of 1,4-hydroquinone. The molecule is multifunctional: it exhibits properties of a ketone, being able to form oximes; an oxidant, forming the dihydroxy derivative; and an alkene, undergoing addition reactions, especially those typical for α,β-unsaturated ketones. 1,4-Benzoquinone is sensitive toward both strong mineral acids and alkali, which cause condensation and decomposition of the compound.
Preparation
1,4-Benzoquinone is prepared industrially by oxidation of hydroquinone, which can be obtained by several routes. One route involves oxidation of diisopropylbenzene and the Hock rearrangement. The net reaction can be represented as follows:
C6H4(CHMe2)2 + 3 O2 → C6H4O2 + 2 OCMe2 + H2O
The reaction proceeds via the bis(hydroperoxide) and the hydroquinone. Acetone is a coproduct.
Another major process involves the direct hydroxylation of phenol by acidic hydrogen peroxide:
C6H5OH + H2O2 → C6H4(OH)2 + H2O
Both hydroquinone and catechol are produced. Subsequent oxidation of the hydroquinone gives the quinone.
Quinone was originally prepared industrially by oxidation of aniline, for example by manganese dioxide. This method is mainly practiced in PRC where environmental regulations are more relaxed.
Oxidation of hydroquinone is facile. One such method makes use of hydrogen peroxide as the oxidizer and iodine or an iodine salt as a catalyst for the oxidation occurring in a polar solvent; e.g. isopropyl alcohol.
When heated to near its melting point, 1,4-benzoquinone sublimes, even at atmospheric pressure, allowing for an effective purification. Impure samples are often dark-colored due to the presence of quinhydrone, a dark green 1:1 charge-transfer complex of quinone with hydroquinone.
Structure and redox
Benzoquinone is a planar molecule with localized, alternating C=C, C=O, and C–C bonds. Reduction gives the semiquinone anion C6H4O2−, which adopts a more delocalized structure. Further reduction coupled to protonation gives the hydroquinone, wherein the C6 ring is fully delocalized.
Reactions and applications
Quinone is mainly used as a precursor to hydroquinone, which is used in photography and rubber manufacture as a reducing agent and antioxidant. Benzoquinonium is a skeletal muscle relaxant, ganglion blocking agent that is made from benzoquinone.
Organic synthesis
It is used as a hydrogen acceptor and oxidant in organic synthesis. 1,4-Benzoquinone serves as a dehydrogenation reagent. It is also used as a dienophile in Diels–Alder reactions.
Benzoquinone reacts with acetic anhydride and sulfuric acid to give the triacetate of hydroxyquinol. This reaction is called the Thiele reaction or Thiele–Winter reaction after Johannes Thiele, who first described it in 1898, and after Ernst Winter, who further described its reaction mechanism in 1900. An application is found in a step of the total synthesis of Metachromin A.
Benzoquinone is also used to suppress double-bond migration during olefin metathesis reactions.
An acidic potassium iodide solution reduces a solution of benzoquinone to hydroquinone, which can be reoxidized back to the quinone with a solution of silver nitrate.
Due to its ability to function as an oxidizer, 1,4-benzoquinone can be found in methods using the Wacker-Tsuji oxidation, wherein a palladium salt catalyzes the conversion of an alkene to a ketone. This reaction is typically carried out using pressurized oxygen as the oxidizer, but benzoquinone can sometimes be preferred. It is also used as a reagent in some variants on Wacker oxidations.
1,4-Benzoquinone is used in the synthesis of Bromadol and related analogs.
Related 1,4-benzoquinones
2,3-Dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) is a stronger oxidant and dehydrogenation agent than 1,4-benzoquinone. Chloranil 1,4-C6Cl4O2 is another potent oxidant and dehydrogenation agent. Monochloro-p-benzoquinone is yet another but milder oxidant.
Metabolism
1,4-Benzoquinone is a toxic metabolite found in human blood and can be used to track exposure to benzene or mixtures containing benzene and benzene compounds, such as petrol. The compound can interfere with cellular respiration, and kidney damage has been found in animals receiving severe exposure. It is excreted in its original form and also as variations of its own metabolite, hydroquinone.
Safety
1,4-Benzoquinone is able to stain skin dark brown, cause erythema (redness, rashes on skin) and lead on to localized tissue necrosis. It is particularly irritating to the eyes and respiratory system. Its ability to sublime at commonly encountered temperatures allows for a greater airborne exposure risk than might be expected for a room-temperature solid. IARC has found insufficient evidence to comment on the compound's carcinogenicity, but has noted that it can easily pass into the bloodstream and that it showed activity in depressing bone marrow production in mice and can inhibit protease enzymes involved in cellular apoptosis.
See also
Tetrahydroxybenzoquinone
Benzoquinonetetracarboxylic acid
1,2-Benzoquinone
Quinones
Duroquinone
Ardisiaquinone
References
Oxidizing agents
Enones
ja:ベンゾキノン#1,4-ベンゾキノン | 1,4-Benzoquinone | [
"Chemistry"
] | 1,377 | [
"Redox",
"Oxidizing agents"
] |
3,115,142 | https://en.wikipedia.org/wiki/Council%20for%20Responsible%20Genetics | The Council for Responsible Genetics (CRG) was a nonprofit NGO with a focus on biotechnology.
History
The Council for Responsible Genetics was founded in 1983 in Cambridge, Massachusetts.
An early voice concerned about the social and ethical implications of modern genetic technologies, CRG organized a 1985 Congressional Briefing and a 1986 panel of the American Association for the Advancement of Science, both focusing on the potential dangers of genetically engineered biological weapons. Francis Boyle was asked to draft legislation setting limits on the use of genetic engineering, leading to the Biological Weapons Anti-Terrorism Act of 1989.
CRG was the first organization to advance a comprehensive, scientifically based position against human germline engineering. It was also the first to compile documented cases of genetic discrimination, laying the intellectual groundwork for the Genetic Information Nondiscrimination Act of 2008 (GINA).
The organization created both a Genetic Bill of Rights and a Citizen's Guide to Genetically Modified Food. Also notable are CRG's support for the "Safe Seeds Campaign" (for avoiding gene flow from genetically engineered to non-GE seed) and the organization of a US conference on Forensic DNA Databanks and Racial Disparities in the Criminal Justice System. In 2010 CRG led a successful campaign to roll back a controversial student genetic testing program at the University of California, Berkeley. In 2011, CRG led a campaign to successfully enact CalGINA in California, which extended genetic privacy and nondiscrimination protections to life, disability and long term care insurance, mortgages, lending and other areas.
CRG issued five anthologies of commentaries:
Rights and Liberties in the Biotech Age edited by Sheldon Krimsky and Peter Shorett
Race and the Genetic Revolution: Science, Myth and Culture
Genetic Explanations: Sense and Nonsense edited by Krimsky and Jeremy Gruber
Biotechnology in our Lives edited by Krimsky and Gruber
The GMO Deception edited by Krimsky and Gruber
Principles and projects
CRG "fosters public debate about the social, ethical and environmental implications of genetic technologies." They list three central principles:
The public must have access to clear and understandable information on technological innovations.
The public must be able to participate in public and private decision making concerning technological developments and their implementation.
New technologies must meet social needs. Problems rooted in poverty, racism, and other forms of inequality, according to CRG, cannot be remedied by technology alone.
In 2007, CRG hosted a retreat to refresh the mission statement and determine goals for the future of the organization. The outcome was that CRG should:
Explore and document developments in biotechnology through a holistic approach that considers science within a social, cultural, ethical, and environmental context.
Serve as a global knowledge resource, providing information and education about the potential impact of new and emerging biotechnologies.
Develop concrete policy solutions to address what CRG feels are emerging issues in biotechnology.
Mobilize and collaborate with scientists and other organizations to inform the public and promote democratic control of science.
Expose what CRG views as over-simplified and distorted claims regarding the role of genetics in human disease, development and behavior.
The pioneering contributions of CRG to public interest initiatives concerned with appropriate use of biotechnologies are recounted in the book Biotech Juggernaut: Hope, Hype, and Hidden Agendas of Entrepreneurial Bioscience (Routledge, 2019).
GeneWatch
The CRG publishes Genewatch, America's first and (according to CRG in 2009) only magazine dedicated to monitoring biotechnology's social, ethical and environmental consequences. The publication covers a broad spectrum of issues, from genetically modified food to biological weapons, genetic privacy and discrimination, reproductive technology, and human cloning. Established in 1983, the publication won the Utne Independent Press Award for General Excellence in the category of newsletters in 2006.
Funding
A major source of CRG's funding is the Ford Foundation, which provided $420,000 in grants during 2005-2007.
See also
Bioethics
Genomics
References
External links
Appropriate technology organizations
Medical and health organizations based in Massachusetts
Biotechnology organizations
Genetics organizations
1983 establishments in Massachusetts
1983 establishments in the United States
Organizations established in 1983 | Council for Responsible Genetics | [
"Engineering",
"Biology"
] | 842 | [
"Biotechnology organizations"
] |
3,115,869 | https://en.wikipedia.org/wiki/Tverberg%27s%20theorem | In discrete geometry, Tverberg's theorem, first stated by Helge Tverberg in 1966, is the result that sufficiently many points in Euclidean space can be partitioned into subsets with intersecting convex hulls. Specifically, for any positive integers d, r and any set of
points in d-dimensional Euclidean space there exists a partition of the given points into r subsets whose convex hulls all have a common point; in other words, there exists a point x (not necessarily one of the given points) such that x belongs to the convex hull of all of the subsets.
The partition resulting from this theorem is known as a Tverberg partition.
The special case r = 2 was proved earlier by Radon, and it is known as Radon's theorem.
Examples
The case d = 1 states that any 2r − 1 points on the real line can be partitioned into r subsets with intersecting convex hulls. Indeed, if the points are x1 < x2 < ... < x2r−1, then the partition into Ai = {xi, x2r−i} for i in 1, ..., r satisfies this condition (and it is unique): the convex hull of every Ai contains the median point xr.
For r = 2, Tverberg's theorem states that any d + 2 points may be partitioned into two subsets with intersecting convex hulls. This is known as Radon's theorem. In this case, for points in general position, the partition is unique.
The case r = 3 and d = 2 states that any seven points in the plane may be partitioned into three subsets with intersecting convex hulls. The illustration shows an example in which the seven points are the vertices of a regular heptagon. As the example shows, there may be many different Tverberg partitions of the same set of points; these seven points may be partitioned in seven different ways that differ by rotations of each other.
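The one-dimensional case above is explicit enough to state as a short routine: sort the 2r − 1 points, pair x_i with x_{2r−i}, and every resulting interval contains the median x_r. The sketch below simply implements that pairing; it illustrates the d = 1 construction only, not the general theorem.

```python
def tverberg_partition_1d(points, r):
    """Partition 2r-1 real numbers into r subsets whose convex hulls (intervals)
    all contain the median, following the d = 1 construction described above."""
    assert len(points) == 2 * r - 1
    xs = sorted(points)
    # A_i = {x_i, x_{2r-i}} for i = 1..r (1-based); A_r degenerates to {x_r}.
    parts = [{xs[i], xs[2 * r - 2 - i]} for i in range(r)]
    return parts, xs[r - 1]  # xs[r-1] is the median, common to every interval

parts, common_point = tverberg_partition_1d([5, -2, 0, 7, 3], r=3)
print(parts, common_point)   # each interval [min(A_i), max(A_i)] contains 3
```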
Topological Tverberg Theorem
An equivalent formulation of Tverberg's theorem is: Let d, r be positive integers, and let N := (d+1)(r-1). If ƒ is any affine function from an N-dimensional simplex ΔN to Rd, then there are r pairwise-disjoint faces of ΔN whose images under ƒ intersect. That is: there exist faces F1,...,Fr of ΔN such that Fi ∩ Fj = ∅ whenever i ≠ j and ƒ(F1) ∩ ⋯ ∩ ƒ(Fr) ≠ ∅. The two formulations are equivalent because any affine function on a simplex is uniquely determined by the images of its vertices. Formally, let ƒ be an affine function from ΔN to Rd. Let v1, ..., vN+1 be the vertices of ΔN, and let x1, ..., xN+1 be their images under ƒ. By the original formulation, the xi can be partitioned into r disjoint subsets, e.g. ((xi)i in Aj)j in [r], with overlapping convex hulls. Because ƒ is affine, the convex hull of (xi)i in Aj is the image of the face spanned by the vertices (vi)i in Aj for all j in [r]. These faces are pairwise-disjoint, and their images under ƒ intersect, as claimed by the new formulation.
The topological Tverberg theorem generalizes this formulation. It allows f to be any continuous function - not necessarily affine. But, currently it is proved only for the case where r is a prime power: Let d be a positive integer, and let r be a power of a prime number. Let N := (d+1)(r-1). If ƒ is any continuous function from an N-dimensional simplex ΔN to Rd, then there are r pairwise-disjoint faces of ΔN whose images under ƒ intersect. That is: there exist faces F1,...,Fr of ΔN such that Fi ∩ Fj = ∅ whenever i ≠ j and ƒ(F1) ∩ ⋯ ∩ ƒ(Fr) ≠ ∅.
Proofs
The topological Tverberg theorem was proved for prime r by Barany, Shlosman and Szucs. Matousek presents a proof using deleted joins.
The theorem was proved for prime-power r by Ozaydin, and later by Volovikov and Sarkaria.
See also
Rota's basis conjecture
References
Further reading
Theorems in convex geometry
Theorems in discrete geometry
Geometric transversal theory
Convex hulls | Tverberg's theorem | [
"Mathematics"
] | 890 | [
"Geometric transversal theory",
"Theorems in convex geometry",
"Theorems in discrete mathematics",
"Basic concepts in set theory",
"Families of sets",
"Theorems in geometry",
"Theorems in discrete geometry"
] |
3,115,920 | https://en.wikipedia.org/wiki/Urca%20process | In astroparticle physics, an Urca process is a reaction which emits a neutrino and which is assumed to take part in cooling processes in neutron stars and white dwarfs. The process was first discussed by George Gamow and Mário Schenberg while they were visiting a casino named Cassino da Urca in Urca, Rio de Janeiro. As Gamow recounts in his autobiography, the name was chosen in part to commemorate the gambling establishment where the two physicists had first met, and "partially because the Urca Process results in a rapid disappearance of thermal energy from the interior of a star, similar to the rapid disappearance of money from the pockets of the gamblers on the Casino de Urca." In Gamow's South Russian dialect, urca () can also mean a robber or gangster.
The direct Urca processes are the simplest neutrino-emitting processes and are thought to be central in the cooling of neutron stars. They have the general form
B1 → B2 + ℓ + ν̄,
B2 + ℓ → B1 + ν,
where B1 and B2 are baryons, ℓ is a lepton, and ν and ν̄ are a neutrino and an antineutrino, respectively. The baryons can be nucleons (free or bound), hyperons such as the Λ or Σ, or members of the Δ isobar. The lepton is either an electron or a muon.
The Urca process is especially important in the cooling of white dwarfs, where a lepton (usually an electron) is absorbed by the nucleus of an ion and then convectively carried away from the core of a star. Then, a beta decay occurs. Convection then carries the element back into the interior of the star, and the cycle repeats many times. Because the neutrinos emitted during this process are unlikely to be reabsorbed, this is effectively a cooling mechanism for white dwarfs.
The process can also be essential in the cooling of neutron stars. If a neutron star contains a central core in which the direct Urca-process is operative, the cooling timescale shortens by many orders of magnitude.
References
Concepts in astrophysics | Urca process | [
"Physics"
] | 513 | [
"Concepts in astrophysics",
"Astrophysics"
] |
3,116,066 | https://en.wikipedia.org/wiki/Colure | Colure, in astronomy, is either of the two principal meridians of the celestial sphere. The term is now rarely used and may be considered obsolete.
Equinoctial colure
The equinoctial colure is the meridian or great circle of the celestial sphere which passes through the celestial poles and the two equinoxes: the first point of Aries and the first point of Libra. It is the great circle consisting of all points on the celestial sphere with Right Ascension equal to 0 hours or 12 hours (equivalent to RA 0° / 180°).
The equinoctial colure passes through the following constellations:
Solstitial colure
The solstitial colure is the meridian or great circle of the celestial sphere which passes through the poles and the two solstices: the first point of Cancer and the first point of Capricorn. It is the great circle consisting of all points on the celestial sphere with Right Ascension equal to 6 hours or 18 hours (equivalent to RA 90° / 270°).
The solstitial colure passes through the following constellations:
See also
Celestial coordinate system
Ecliptic
Celestial sphere
Right ascension
Equinox
Solstice
References
Kaler, Jim. "Pi Aurigae." Pi Aurigae. N.p. 22 Feb. 2008. Web.
Astronomical coordinate systems
Solstices | Colure | [
"Astronomy",
"Mathematics"
] | 281 | [
"Time in astronomy",
"Astronomy stubs",
"Astronomical coordinate systems",
"Solstices",
"Coordinate systems"
] |
3,116,318 | https://en.wikipedia.org/wiki/Cartan%20formula | In mathematics, Cartan formula can mean:
one in differential geometry: L_X = d ι_X + ι_X d, where L_X, d, and ι_X are the Lie derivative, exterior derivative, and interior product, respectively, acting on differential forms. See interior product for the detail. It is also called the Cartan homotopy formula or Cartan magic formula. This formula is named after Élie Cartan. (A worked instance for 0-forms is sketched below, after the second meaning.)
one in algebraic topology, which is one of the five axioms of Steenrod algebra. It reads:
Sq^n(x ⌣ y) = Σ_{i+j=n} Sq^i(x) ⌣ Sq^j(y).
See Steenrod algebra for the detail. The name derives from Henri Cartan, son of Élie.
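As a minimal worked instance of the first (differential-geometric) formula, applying it to a 0-form, i.e. a smooth function f, recovers the ordinary directional derivative; this is only an illustration of the identity stated above.

```latex
% For a 0-form f, the interior product \iota_X f vanishes by definition, so
\mathcal{L}_X f
  = d(\iota_X f) + \iota_X(df)
  = \iota_X(df)
  = df(X)
  = X(f),
% the usual derivative of f along the vector field X.
```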
Footnotes
See also
List of things named after Élie Cartan | Cartan formula | [
"Mathematics",
"Engineering"
] | 130 | [
"Theorems in differential geometry",
"Mathematical theorems",
"Mathematical structures",
"Tensors",
"Algebraic topology",
"Theorems in topology",
"Differential forms",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Theorems in geometry",
"Mathematical identities",
"Mathematica... |
3,117,016 | https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20thermodynamics%20and%20statistical%20mechanics | A list of notable textbooks in thermodynamics and statistical mechanics, arranged by category and date.
Only or mainly thermodynamics
Both thermodynamics and statistical mechanics
2e Kittel, Charles; and Kroemer, Herbert (1980) New York: W.H. Freeman
2e (1988) Chichester: Wiley , .
(1990) New York: Dover
Stephen G. Brush (1976) The Kind of Motion We Call Heat I-II North-Holland ISBN 0-444-87008-3
Statistical mechanics
. 2e (1936) Cambridge: University Press; (1980) Cambridge University Press.
; (1979) New York: Dover
Vol. 5 of the Course of Theoretical Physics. 3e (1976) Translated by J.B. Sykes and M.J. Kearsley (1980) Oxford : Pergamon Press.
. 3e (1995) Oxford: Butterworth-Heinemann
. 2e (1987) New York: Wiley
. 2e (1988) Amsterdam: North-Holland . 2e (1991) Berlin: Springer Verlag ,
; (2005) New York: Dover
2e (2000) Sausalito, Calif.: University Science
2e (1998) Chichester: Wiley
S. R. De Groot, P. Mazur (2011) Non-Equilibrium Thermodynamics, Dover Books on Physics, ISBN 978-0486647418.
Specialized topics
Kinetic theory
Vol. 10 of the Course of Theoretical Physics (3rd Ed). Translated by J.B. Sykes and R.N. Franklin (1981) London: Pergamon ,
Quantum statistical mechanics
Mathematics of statistical mechanics
Translated by G. Gamow (1949) New York: Dover
. Reissued (1974), (1989); (1999) Singapore: World Scientific
; (1984) Cambridge: University Press . 2e (2004) Cambridge: University Press
Miscellaneous
(available online here)
Historical
(1896, 1898) Translated by Stephen G. Brush (1964) Berkeley: University of California Press; (1995) New York: Dover
Translated by J. Kestin (1956) New York: Academic Press.
German Encyclopedia of Mathematical Sciences. Translated by Michael J. Moravcsik (1959) Ithaca: Cornell University Press; (1990) New York: Dover
See also
List of textbooks on classical mechanics and quantum mechanics
List of textbooks in electromagnetism
List of books on general relativity
Further reading
References
External links
Statistical Mechanics and Thermodynamics Texts Clark University curriculum development project
Lists of science textbooks
Mathematics-related lists
Physics-related lists
Textbooks
Textbooks | List of textbooks in thermodynamics and statistical mechanics | [
"Physics"
] | 535 | [
"Statistical mechanics"
] |
3,117,887 | https://en.wikipedia.org/wiki/Grassmann%20number | In mathematical physics, a Grassmann number, named after Hermann Grassmann (also called an anticommuting number or supernumber), is an element of the exterior algebra of a complex vector space. The special case of a 1-dimensional algebra is known as a dual number. Grassmann numbers saw an early use in physics to express a path integral representation for fermionic fields, although they are now widely used as a foundation for superspace, on which supersymmetry is constructed.
Informal discussion
Grassmann numbers are generated by anti-commuting elements or objects. The idea of anti-commuting objects arises in multiple areas of mathematics: they are typically seen in differential geometry, where the differential forms are anti-commuting. Differential forms are normally defined in terms of derivatives on a manifold; however, one can contemplate the situation where one "forgets" or "ignores" the existence of any underlying manifold, and "forgets" or "ignores" that the forms were defined as derivatives, and instead, simply contemplate a situation where one has objects that anti-commute, and have no other pre-defined or presupposed properties. Such objects form an algebra, and specifically the Grassmann algebra or exterior algebra.
The Grassmann numbers are elements of that algebra. The appellation of "number" is justified by the fact that they behave not unlike "ordinary" numbers: they can be added, multiplied and divided: they behave almost like a field. More can be done: one can consider polynomials of Grassmann numbers, leading to the idea of holomorphic functions. One can take derivatives of such functions, and then consider the anti-derivatives as well. Each of these ideas can be carefully defined, and correspond reasonably well to the equivalent concepts from ordinary mathematics. The analogy does not stop there: one has an entire branch of supermathematics, where the analog of Euclidean space is superspace, the analog of a manifold is a supermanifold, the analog of a Lie algebra is a Lie superalgebra and so on. The Grassmann numbers are the underlying construct that make this all possible.
Of course, one could pursue a similar program for any other field, or even ring, and this is indeed widely and commonly done in mathematics. However, supermathematics takes on a special significance in physics, because the anti-commuting behavior can be strongly identified with the quantum-mechanical behavior of fermions: the anti-commutation is that of the Pauli exclusion principle. Thus, the study of Grassmann numbers, and of supermathematics, in general, is strongly driven by their utility in physics.
Specifically, in quantum field theory, or more narrowly, second quantization, one works with ladder operators that create multi-particle quantum states. The ladder operators for fermions create field quanta that must necessarily have anti-symmetric wave functions, as this is forced by the Pauli exclusion principle. In this situation, a Grassmann number corresponds immediately and directly to a wave function that contains some (typically indeterminate) number of fermions.
When the number of fermions is fixed and finite, an explicit relationship between anticommutation relations and spinors is given by means of the spin group. This group can be defined as the subset of unit-length vectors in the Clifford algebra, and naturally factorizes into anti-commuting Weyl spinors. Both the anti-commutation and the expression as spinors arises in a natural fashion for the spin group. In essence, the Grassmann numbers can be thought of as discarding the relationships arising from spin, and keeping only the relationships due to anti-commutation.
General description and properties
Grassmann numbers are individual elements or points of the exterior algebra generated by a set of Grassmann variables or Grassmann directions or supercharges θ1, θ2, ..., θn, with n possibly being infinite. The usage of the term "Grassmann variables" is historic; they are not variables, per se; they are better understood as the basis elements of a unital algebra. The terminology comes from the fact that a primary use is to define integrals, and that the variable of integration is Grassmann-valued, and thus, by abuse of language, is called a Grassmann variable. Similarly, the notion of direction comes from the notion of superspace, where ordinary Euclidean space is extended with additional Grassmann-valued "directions". The appellation of charge comes from the notion of charges in physics, which correspond to the generators of physical symmetries (via Noether's theorem). The perceived symmetry is that multiplication by a single Grassmann variable swaps the grading between fermions and bosons; this is discussed in greater detail below.
The Grassmann variables are the basis vectors of a vector space (of dimension n). They form an algebra over a field, with the field usually being taken to be the complex numbers, although one could contemplate other fields, such as the reals. The algebra is a unital algebra, and the generators are anti-commuting:
θiθj = −θjθi.
Since the θi are elements of a vector space over the complex numbers, they, by definition, commute with complex numbers. That is, for complex x, one has
xθi = θix.
The squares of the generators vanish:
θi² = 0,
since
θiθi = −θiθi.
In other words, a Grassmann variable is a non-zero square-root of zero.
Formal definition
Formally, let V be an n-dimensional complex vector space with basis θ1, ..., θn. The Grassmann algebra whose Grassmann variables are θ1, ..., θn is defined to be the exterior algebra of V, namely
Λ(V) = C ⊕ V ⊕ (V ∧ V) ⊕ (V ∧ V ∧ V) ⊕ ⋯ ⊕ (V ∧ ⋯ ∧ V),
where ∧ is the exterior product and ⊕ is the direct sum. The individual elements of this algebra are then called Grassmann numbers. It is standard to omit the wedge symbol when writing a Grassmann number once the definition is established. A general Grassmann number can be written as
z = c0 + Σ_{k=1}^{n} Σ_{i1 < i2 < ⋯ < ik} c_{i1 i2 ⋯ ik} θ_{i1} θ_{i2} ⋯ θ_{ik},
where the (i1, i2, ..., ik) are strictly increasing k-tuples with 1 ≤ i1 < i2 < ⋯ < ik ≤ n, and the c_{i1 i2 ⋯ ik} are complex, completely antisymmetric tensors of rank k. Again, the θi, and the θiθj (subject to i < j), and larger finite products, can be seen here to be playing the role of basis vectors of subspaces of Λ(V).
The Grassmann algebra generated by n linearly independent Grassmann variables has dimension 2^n; this follows from the binomial theorem applied to the above sum, and the fact that the (n + 1)-fold product of variables must vanish, by the anti-commutation relations above. The dimension of the subspace of grade-k elements is given by n choose k, the binomial coefficient. The special case of n = 1 is called a dual number, and was introduced by William Clifford in 1873.
In case is infinite-dimensional, the above series does not terminate and one defines
The general element is now
where is sometimes referred to as the body and as the soul of the supernumber .
Properties
In the finite-dimensional case (using the same terminology) the soul is nilpotent, i.e.
but this is not necessarily so in the infinite-dimensional case.
If is finite-dimensional, then
and if is infinite-dimensional
Finite vs. countable sets of generators
Two distinct kinds of supernumbers commonly appear in the literature: those with a finite number of generators, typically n = 1, 2, 3 or 4, and those with a countably-infinite number of generators. These two situations are not as unrelated as they may seem at first. First, in the definition of a supermanifold, one variant uses a countably-infinite number of generators, but then employs a topology that effectively reduces the dimension to a small finite number.
In the other case, one may start with a finite number of generators, but in the course of second quantization, a need for an infinite number of generators arises: one each for every possible momentum that a fermion might carry.
Involution, choice of field
The complex numbers are usually chosen as the field for the definition of the Grassmann numbers, as opposed to the real numbers, as this avoids some strange behaviors when a conjugation or involution is introduced. It is common to introduce an operator * on the Grassmann numbers such that:
θi* = θi
when θi is a generator, and such that
(zw)* = w*z*
for arbitrary Grassmann numbers z and w. One may then consider Grassmann numbers z for which z* = z, and term these (super) real, while those that obey z* = −z are termed (super) imaginary. These definitions carry through just fine, even if the Grassmann numbers use the real numbers as the base field; however, in such a case, many coefficients are forced to vanish if the number of generators is less than 4. Thus, by convention, the Grassmann numbers are usually defined over the complex numbers.
Other conventions are possible; the above is sometimes referred to as the DeWitt convention; Rogers employs for the involution. In this convention, the real supernumbers always have real coefficients; whereas in the DeWitt convention, the real supernumbers may have both real and imaginary coefficients. Despite this, it is usually easiest to work with the DeWitt convention.
Analysis
Products of an odd number of Grassmann variables anti-commute with each other; such a product is often called an a-number. Products of an even number of Grassmann variables commute (with all Grassmann numbers); they are often called c-numbers. By abuse of terminology, an a-number is sometimes called an anticommuting c-number. This decomposition into even and odd subspaces provides a grading on the algebra; thus Grassmann algebras are the prototypical examples of supercommutative algebras. Note that the c-numbers form a subalgebra of the Grassmann algebra, but the a-numbers do not (they are a subspace, not a subalgebra).
The definition of Grassmann numbers allows mathematical analysis to be performed, in analogy to analysis on complex numbers. That is, one may define superholomorphic functions, define derivatives, as well as defining integrals. Some of the basic concepts are developed in greater detail in the article on dual numbers.
As a general rule, it is usually easier to define the super-symmetric analogs of ordinary mathematical entities by working with Grassmann numbers with an infinite number of generators: most definitions become straightforward, and can be taken over from the corresponding bosonic definitions. For example, a single Grassmann number can be thought of as generating a one-dimensional space. A vector space, the n-dimensional superspace, then appears as the n-fold Cartesian product of these one-dimensional spaces. It can be shown that this is essentially equivalent to an algebra with n generators, but this requires work.
Spinor space
The spinor space is defined as the Grassmann or exterior algebra of the space of Weyl spinors (and anti-spinors ), such that the wave functions of n fermions belong in .
Integration
Integrals over Grassmann numbers are known as Berezin integrals (sometimes called Grassmann integrals). In order to reproduce the path integral for a Fermi field, the definition of Grassmann integration needs to have the following properties:
linearity
partial integration formula
Moreover, the Taylor expansion of any function terminates after two terms because , and quantum field theory additionally requires invariance under the shift of integration variables such that
The only linear function satisfying this condition is a constant (conventionally 1) times , so Berezin defined
This results in the following rules for the integration of a Grassmann quantity:
Thus we conclude that the operations of integration and differentiation of a Grassmann number are identical.
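As a minimal numerical check of this statement, the sketch below represents a function of a single Grassmann variable by its two coefficients (a, b) in a + bθ (the expansion terminates after two terms, as noted above); the representation and function names are ad hoc choices made for this illustration, not part of any standard library.

```python
# A function of one Grassmann variable theta is fully described by two
# coefficients, f(theta) = a + b*theta, because theta**2 = 0.
def berezin_integral(f):
    """Berezin integral: int dtheta (a + b*theta) = b."""
    a, b = f
    return b

def grassmann_derivative(f):
    """Left derivative: d/dtheta (a + b*theta) = b."""
    a, b = f
    return b

f = (2.5, -1.0)   # represents 2.5 - 1.0*theta
print(berezin_integral(f), grassmann_derivative(f))   # both print -1.0, as claimed
```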
In the path integral formulation of quantum field theory the following Gaussian integral of Grassmann quantities is needed for fermionic anticommuting fields, with A being an N × N matrix:
.
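The standard result here, with a pair of independent Grassmann variables $\theta_i$ and $\bar\theta_i$ for each index and the usual ordering of the integration measure, is

$$\int \left(\prod_{i=1}^{N} d\bar\theta_i\, d\theta_i\right) e^{-\bar\theta^{T} A\, \theta} \;=\; \det A ,$$

in contrast with the corresponding bosonic Gaussian integral, which yields an inverse power of the determinant; the precise sign and ordering conventions vary between texts.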
Conventions and complex integration
An ambiguity arises when integrating over multiple Grassmann numbers. The convention that performs the innermost integral first yields
Some authors also define complex conjugation similar to Hermitian conjugation of operators,
With the additional convention
we can treat and as independent Grassmann numbers, and adopt
Thus a Gaussian integral evaluates to
and an extra factor of effectively introduces a factor of , just like an ordinary Gaussian,
After proving unitarity, we can evaluate a general Gaussian integral involving a Hermitian matrix with eigenvalues ,
Matrix representations
Grassmann numbers can be represented by matrices. Consider, for example, the Grassmann algebra generated by two Grassmann numbers and . These Grassmann numbers can be represented by 4×4 matrices:
In general, a Grassmann algebra on n generators can be represented by 2^n × 2^n square matrices. Physically, these matrices can be thought of as raising operators acting on a Hilbert space of n identical fermions in the occupation number basis. Since the occupation number for each fermion is 0 or 1, there are 2^n possible basis states. Mathematically, these matrices can be interpreted as the linear operators corresponding to left exterior multiplication on the Grassmann algebra itself.
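To make the two-generator example concrete, the small sketch below builds the left-multiplication matrices on the basis {1, θ1, θ2, θ1θ2} and checks the defining relations numerically; the basis ordering, and hence the particular matrices, are one illustrative choice (any matrix representation is only fixed up to a change of basis).

```python
import numpy as np

# Basis of the Grassmann algebra on two generators, in the order
#   e0 = 1,  e1 = theta1,  e2 = theta2,  e3 = theta1*theta2.
# theta1 and theta2 act by left exterior multiplication on this basis:
#   theta1: 1 -> theta1,            theta2 -> theta1*theta2,  rest -> 0
#   theta2: 1 -> theta2,  theta1 -> -theta1*theta2,           rest -> 0
T1 = np.zeros((4, 4))
T1[1, 0] = 1.0    # theta1 * 1       = theta1
T1[3, 2] = 1.0    # theta1 * theta2  = theta1*theta2

T2 = np.zeros((4, 4))
T2[2, 0] = 1.0    # theta2 * 1       = theta2
T2[3, 1] = -1.0   # theta2 * theta1  = -theta1*theta2

# Defining relations of the Grassmann algebra:
assert np.allclose(T1 @ T1, 0)            # theta1^2 = 0
assert np.allclose(T2 @ T2, 0)            # theta2^2 = 0
assert np.allclose(T1 @ T2 + T2 @ T1, 0)  # theta1*theta2 + theta2*theta1 = 0
print("relations verified")
```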
Generalisations
There are some generalisations to Grassmann numbers. These require rules in terms of N variables such that:
where the indices are summed over all permutations so that as a consequence:
for some N > 2. These are useful for calculating hyperdeterminants of N-tensors where N > 2 and also for calculating discriminants of polynomials for powers larger than 2. There is also the limiting case as N tends to infinity in which case one can define analytic functions on the numbers. For example, in the case with N = 3 a single Grassmann number can be represented by the matrix:
so that . For two Grassmann numbers the matrix would be of size 10×10.
For example, the rules for N = 3 with two Grassmann variables imply:
so that it can be shown that
and so
which gives a definition for the hyperdeterminant of a 2×2×2 tensor as
See also
Grassmannian
Hermann Grassmann (linguist and mathematician)
Superspace
Exterior algebra
Notes
References
Hypercomplex numbers
Supersymmetry
Quantum field theory | Grassmann number | [
"Physics",
"Mathematics"
] | 2,846 | [
"Quantum field theory",
"Mathematical structures",
"Unsolved problems in physics",
"Quantum mechanics",
"Mathematical objects",
"Numbers",
"Algebraic structures",
"Physics beyond the Standard Model",
"Hypercomplex numbers",
"Supersymmetry",
"Symmetry"
] |
3,118,707 | https://en.wikipedia.org/wiki/Wetted%20area | In fluid dynamics, the wetted area is the surface area that interacts with the working fluid or gas.
In maritime use, the wetted area is the area of the watercraft's hull which is immersed in water. This has a direct relationship on the overall hydrodynamic drag of the ship or submarine.
In aeronautics, the wetted area is the area which is in contact with the external airflow. This has a direct relationship on the overall aerodynamic drag of the aircraft. See also: Wetted aspect ratio.
In motorsport, such as Formula One, the term wetted surfaces is used to refer to the bodywork, wings and the radiator, which are in direct contact with the airflow, similarly to the term's use in aeronautics.
References
Intake Aerodynamics (October 1999) by Seddon and Goldsmith, Blackwell Science and the AIAA Educational Series; 2nd edition
Naval architecture
Aerodynamics | Wetted area | [
"Chemistry",
"Engineering"
] | 188 | [
"Naval architecture",
"Aerodynamics",
"Aerospace engineering",
"Marine engineering",
"Fluid dynamics"
] |
12,044,399 | https://en.wikipedia.org/wiki/List%20decoding | In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that the decoding algorithm instead of outputting a single possible message outputs a list of possibilities one of which is correct. This allows for handling a greater number of errors than that allowed by unique decoding.
The unique decoding model in coding theory, which is constrained to output a single valid codeword from the received word, cannot tolerate a greater fraction of errors. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming). Since the mid-1990s, significant algorithmic progress by the coding theory community has bridged this gap. Much of this progress is based on a relaxed error-correction model called list decoding, wherein the decoder outputs a list of codewords for worst-case pathological error patterns such that the actual transmitted codeword is included in the output list. For typical error patterns, though, the decoder almost always outputs a unique single codeword given the received word (however, this is not known to be true for all codes). The improvement here is significant in that the error-correction performance doubles, because the decoder is no longer confined by the half-the-minimum-distance barrier. This model is very appealing because having a list of codewords is certainly better than just giving up. The notion of list-decoding has many interesting applications in complexity theory.
The way the channel noise is modeled plays a crucial role in that it governs the rate at which reliable communication is possible. There are two main schools of thought in modeling the channel behavior:
Probabilistic noise model studied by Shannon in which the channel noise is modeled precisely in the sense that the probabilistic behavior of the channel is well known and the probability of occurrence of too many or too few errors is low
Worst-case or adversarial noise model considered by Hamming in which the channel acts as an adversary that arbitrarily corrupts the codeword subject to a bound on the total number of errors.
The highlight of list-decoding is that even under adversarial noise conditions, it is possible to achieve the information-theoretic optimal trade-off between rate and fraction of errors that can be corrected. Hence, in a sense this is like improving the error-correction performance to that possible in case of a weaker, stochastic noise model.
Mathematical formulation
Let be a error-correcting code; in other words, is a code of length , dimension and minimum distance over an alphabet of size . The list-decoding problem can now be formulated as follows:
Input: Received word , error bound
Output: A list of all codewords whose Hamming distance from is at most .
Motivation for list decoding
Given a received word , which is a noisy version of some transmitted codeword , the decoder tries to output the transmitted codeword by placing its bet on a codeword that is “nearest” to the received word. The Hamming distance between two codewords is used as a metric in finding the nearest codeword, given the received word by the decoder. If is the minimum Hamming distance of a code , then there exist two codewords and that differ in exactly positions. Now, in the case where the received word is equidistant from the codewords and , unambiguous decoding becomes impossible as the decoder cannot decide which one of and to output as the original transmitted codeword. As a result, half the minimum distance acts as a combinatorial barrier beyond which unambiguous error-correction is impossible, if we only insist on unique decoding. However, received words such as the one considered above occur only in the worst case, and if one looks at the way Hamming balls are packed in high-dimensional space, even for error patterns beyond half the minimum distance, there is only a single codeword within Hamming distance from the received word. This claim has been shown to hold with high probability for a random code picked from a natural ensemble, and more so for the case of Reed–Solomon codes, which are well studied and quite ubiquitous in real-world applications. In fact, Shannon's proof of the capacity theorem for q-ary symmetric channels can be viewed in light of the above claim for random codes.
Under the mandate of list-decoding, for worst-case errors, the decoder is allowed to output a small list of codewords. With some context specific or side information, it may be possible to prune the list and recover the original transmitted codeword. Hence, in general, this seems to be a stronger error-recovery model than unique decoding.
List-decoding potential
For a polynomial-time list-decoding algorithm to exist, we need the combinatorial guarantee that any Hamming ball of radius around a received word (where is the fraction of errors in terms of the block length ) has a small number of codewords. This is because the list size itself is clearly a lower bound on the running time of the algorithm. Hence, we require the list size to be a polynomial in the block length of the code. A combinatorial consequence of this requirement is that it imposes an upper bound on the rate of a code. List decoding promises to meet this upper bound. It has been shown non-constructively that codes of rate exist that can be list decoded up to a fraction of errors approaching . The quantity is referred to in the literature as the list-decoding capacity. This is a substantial gain compared to the unique decoding model as we now have the potential to correct twice as many errors. Naturally, we need to have at least a fraction of the transmitted symbols to be correct in order to recover the message. This is an information-theoretic lower bound on the number of correct symbols required to perform decoding and with list decoding, we can potentially achieve this information-theoretic limit. However, to realize this potential, we need explicit codes (codes that can be constructed in polynomial time) and efficient algorithms to perform encoding and decoding.
(p, L)-list-decodability
For any error fraction and an integer , a code is said to be list decodable up to a fraction of errors with list size at most or -list-decodable if for every , the number of codewords within Hamming distance from is at most
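The definition can be checked directly for toy codes by brute force: enumerate every possible received word and count the codewords within the allowed Hamming radius. The sketch below does this for a code given as an explicit list of codewords over a small alphabet; the function names and the example code are illustrative choices.

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def is_list_decodable(code, q, p, L):
    """Brute-force check that `code` (a list of length-n codewords over the
    alphabet {0, ..., q-1}) is (p, L)-list-decodable: every received word has
    at most L codewords within Hamming distance p*n of it."""
    n = len(code[0])
    radius = int(p * n)          # number of errors corresponding to fraction p
    for received in product(range(q), repeat=n):
        if sum(hamming(received, c) <= radius for c in code) > L:
            return False
    return True

# Toy example: the binary repetition code of length 5 (minimum distance 5).
code = [(0,) * 5, (1,) * 5]
print(is_list_decodable(code, q=2, p=0.4, L=1))   # radius 2, below half the distance: True
print(is_list_decodable(code, q=2, p=0.6, L=1))   # radius 3: some words see both codewords, False
```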
Combinatorics of list decoding
The relation between the list decodability of a code and other fundamental parameters such as minimum distance and rate has been fairly well studied. It has been shown that every code can be list decoded using small lists beyond half the minimum distance up to a bound called the Johnson radius. This is quite significant because it proves the existence of -list-decodable codes of good rate with a list-decoding radius much larger than half the minimum distance. In other words, the Johnson bound rules out the possibility of having a large number of codewords in a Hamming ball of radius slightly greater than half the minimum distance, which means that it is possible to correct far more errors with list decoding.
List-decoding capacity
Theorem (List-Decoding Capacity). Let and The following two statements hold for large enough block length .
i) If , then there exists a -list decodable code.
ii) If , then every -list-decodable code has .
Where
is the -ary entropy function defined for and extended by continuity to
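In its standard form, the q-ary entropy function is defined for $0 < x < 1$ (and extended by continuity to the endpoints) by

$$ H_q(x) \;=\; x \log_q (q-1) \;-\; x \log_q x \;-\; (1-x)\log_q (1-x), $$

so that the rate threshold appearing in the theorem is the familiar list-decoding capacity $1 - H_q(p)$.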
What this means is that for rates approaching the channel capacity, there exist list-decodable codes with polynomial-sized lists enabling efficient decoding algorithms, whereas for rates exceeding the channel capacity, the list size becomes exponential, which rules out the existence of efficient decoding algorithms.
The proof for list-decoding capacity is a significant one in that it exactly matches the capacity of a -ary symmetric channel . In fact, the term "list-decoding capacity" should actually be read as the capacity of an adversarial channel under list decoding. Also, the proof for list-decoding capacity is an important result that pinpoints the optimal trade-off between the rate of a code and the fraction of errors that can be corrected under list decoding.
Sketch of proof
The idea behind the proof is similar to that of Shannon's proof for the capacity of the binary symmetric channel: a random code is picked and shown to be -list-decodable with high probability as long as the rate is below the list-decoding capacity. For rates exceeding that quantity, it can be shown that the list size becomes super-polynomially large.
A "bad" event is defined as one in which, given a received word and messages it so happens that , for every where is the fraction of errors that we wish to correct and is the Hamming ball of radius with the received word as the center.
Now, the probability that a codeword associated with a fixed message lies in a Hamming ball is given by
where the quantity is the volume of a Hamming ball of radius with the received word as the center. The inequality in the above relation follows from the upper bound on the volume of a Hamming ball. The quantity gives a very good estimate on the volume of a Hamming ball of radius centered on any word; put another way, the volume of a Hamming ball is translation invariant. To continue with the proof sketch, we invoke the union bound from probability theory, which tells us that the probability of a bad event happening for a given is upper bounded by the quantity .
With the above in mind, the probability of "any" bad event happening can be shown to be less than . To show this, we work our way over all possible received words and every possible subset of messages in
Now turning to the proof of part (ii), we need to show that there are super-polynomially many codewords around every when the rate exceeds the list-decoding capacity. We need to show that is super-polynomially large if the rate . Fix a codeword . Now, for every picked at random, we have
since Hamming balls are translation invariant. From the definition of the volume of a Hamming ball and the fact that is chosen uniformly at random from we also have
Let us now define an indicator variable such that
Taking the expectation of the volume of a Hamming ball we have
Therefore, by the probabilistic method, we have shown that if the rate exceeds the list-decoding capacity, then the list size becomes super-polynomially large. This completes the proof sketch for the list-decoding capacity.
List decodability of Reed-Solomon Codes
In 2023, building upon three seminal works, coding theorists showed that, with high probability, Reed-Solomon codes defined over random evaluation points are list decodable up to the list-decoding capacity over linear size alphabets.
List-decoding algorithms
In the period from 1995 to 2007, the coding theory community developed progressively more efficient list-decoding algorithms. Algorithms for Reed–Solomon codes exist that can decode up to the Johnson radius, which is , where is the normalised distance or relative distance. However, for Reed–Solomon codes, which means a fraction of errors can be corrected. Some of the most prominent list-decoding algorithms are the following:
Sudan '95 – The first known non-trivial list-decoding algorithm for Reed–Solomon codes that achieved efficient list decoding up to errors developed by Madhu Sudan.
Guruswami–Sudan '98 – An improvement on the above algorithm for list decoding Reed–Solomon codes up to errors by Madhu Sudan and his then doctoral student Venkatesan Guruswami.
Parvaresh–Vardy '05 – In a breakthrough paper, Farzad Parvaresh and Alexander Vardy presented codes that can be list decoded beyond the radius for low rates . Their codes are variants of Reed-Solomon codes which are obtained by evaluating correlated polynomials instead of just as in the case of usual Reed-Solomon codes.
Guruswami–Rudra '06 – In yet another breakthrough, Venkatesan Guruswami and Atri Rudra gave explicit codes that achieve list-decoding capacity, that is, they can be list decoded up to the radius for any . In other words, this is error-correction with optimal redundancy. This answered a question that had been open for about 50 years. This work was invited to the Research Highlights section of the Communications of the ACM (which is “devoted to the most important research results published in Computer Science in recent years”) and was mentioned in an article titled “Coding and Computing Join Forces” in the Sep 21, 2007 issue of Science magazine. The codes they give are called folded Reed–Solomon codes, which are nothing but plain Reed–Solomon codes viewed as a code over a larger alphabet by careful bundling of codeword symbols.
Because of their ubiquity and the nice algebraic properties they possess, list-decoding algorithms for Reed–Solomon codes were a main focus of researchers. The list-decoding problem for Reed–Solomon codes can be formulated as follows:
Input: For an Reed-Solomon code, we are given the pair for , where is the th bit of the received word and the 's are distinct points in the finite field and an error parameter .
Output: The goal is to find all the polynomials of degree at most which is the message length such that for at least values of . Here, we would like to have as small as possible so that a greater number of errors can be tolerated.
With the above formulation, the general structure of list-decoding algorithms for Reed-Solomon codes is as follows:
Step 1: (Interpolation) Find a non-zero bivariate polynomial such that for .
Step 2: (Root finding/Factorization) Output all degree polynomials such that is a factor of i.e. . For each of these polynomials, check if for at least values of . If so, include such a polynomial in the output list.
Given the fact that bivariate polynomials can be factored efficiently, the above algorithm runs in polynomial time.
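To make the two-step structure concrete, here is a small self-contained sketch over GF(13) with message length k = 2. The field size, evaluation points, degree bounds for Q, corrupted positions and agreement threshold are all illustrative choices rather than anything prescribed above, and the factorization in Step 2 is replaced by an exhaustive search over the 169 candidate messages, which is feasible only because the toy parameters are tiny; a real decoder factors Q instead.

```python
p, k = 13, 2                          # prime field GF(13); messages f have degree < k = 2
xs = list(range(1, 11))               # n = 10 distinct evaluation points in GF(13)

def ev(coeffs, x):                    # evaluate a polynomial (coefficient list) mod p
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

# Received word: encode the message f(x) = 3 + 5x, then corrupt four positions.
received = [ev([3, 5], x) for x in xs]
for j in (0, 2, 5, 7):
    received[j] = (received[j] + 1) % p

# Step 1 (interpolation): find a nonzero Q(x, y), with x-degree <= 3 and
# y-degree <= 2, vanishing on all 10 points.  There are 12 unknown coefficients
# and only 10 homogeneous linear constraints, so a nonzero Q must exist.
monomials = [(i, j) for j in range(3) for i in range(4)]
A = [[pow(x, i, p) * pow(y, j, p) % p for i, j in monomials]
     for x, y in zip(xs, received)]

def nullspace_vector(A):              # one nonzero solution of A v = 0 over GF(p)
    A = [row[:] for row in A]
    cols = len(A[0])
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, len(A)) if A[i][c]), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        inv = pow(A[r][c], p - 2, p)              # modular inverse of the pivot
        A[r] = [v * inv % p for v in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(cols)]
        pivots.append(c)
        r += 1
    free = next(c for c in range(cols) if c not in pivots)   # a free column exists
    v = [0] * cols
    v[free] = 1
    for row, c in enumerate(pivots):
        v[c] = -A[row][free] % p
    return v

Q = dict(zip(monomials, nullspace_vector(A)))     # Q as {(i, j): coefficient of x^i y^j}

# Step 2 (root finding, done naively here): keep every message f = a + b*x for
# which Q(x, f(x)) is identically zero -- since deg_x Q(x, f(x)) <= 3 + 2*(k-1) = 5 < 13,
# vanishing at all 13 field elements forces the zero polynomial -- and which
# agrees with the received word in at least 6 positions.
decoded = []
for a in range(p):
    for b in range(p):
        if any(sum(c * pow(x, i, p) * pow(ev([a, b], x), j, p)
                   for (i, j), c in Q.items()) % p
               for x in range(p)):
            continue
        if sum(ev([a, b], x) == y for x, y in zip(xs, received)) >= 6:
            decoded.append((a, b))
print(decoded)                        # the transmitted message (3, 5) appears in the list
```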
Applications in complexity theory and cryptography
Algorithms developed for list decoding of several interesting code families have found interesting applications in computational complexity and the field of cryptography. Following is a sample list of applications outside of coding theory:
Construction of hard-core predicates from one-way permutations.
Predicting witnesses for NP-search problems.
Amplifying hardness of Boolean functions.
Average case hardness of permanent of random matrices.
Extractors and Pseudorandom generators.
Efficient traitor tracing.
References
External links
A Survey on list decoding by Madhu Sudan
Notes from a course taught by Madhu Sudan
Notes from a course taught by Luca Trevisan
Notes from a course taught by Venkatesan Guruswami
Notes from a course taught by Atri Rudra
P. Elias, "List decoding for noisy channels," Technical Report 335, Research Laboratory of Electronics, MIT, 1957.
P. Elias, "Error-correcting codes for list decoding," IEEE Transactions on Information Theory, vol. 37, pp. 5–12, 1991.
J. M. Wozencraft, "List decoding," Quarterly Progress Report, Research Laboratory of Electronics, MIT, vol. 48, pp. 90–95, 1958.
Venkatesan Guruswami's PhD thesis
Algorithmic Results in List Decoding
Folded Reed–Solomon code
Coding theory
Error detection and correction
Computational complexity theory | List decoding | [
"Mathematics",
"Engineering"
] | 3,196 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction"
] |
12,045,078 | https://en.wikipedia.org/wiki/Excision%20repair%20cross-complementing | Excision repair cross-complementing (ERCC) is a set of proteins which are involved in DNA repair.
In humans, ERCC proteins are transcribed from the following genes:
ERCC1, ERCC2, ERCC3, ERCC4, ERCC5, ERCC6, and ERCC8.
Members 1 through 5 are associated with xeroderma pigmentosum.
Members 6 and 8 are associated with Cockayne syndrome.
References
DNA repair | Excision repair cross-complementing | [
"Chemistry",
"Biology"
] | 97 | [
"DNA repair",
"Protein stubs",
"Biochemistry stubs",
"Molecular genetics",
"Cellular processes"
] |
12,045,114 | https://en.wikipedia.org/wiki/Flory%20convention | In polymer science, the Flory convention is a convention for labelling rotational isomers of polymers. It is named after Nobel Prize-winning Paul Flory.
The convention states that for a given bond, when the dihedral angle formed between the previous and subsequent bonds projected on the plane normal to the bond is 0 degrees, the state is labelled as "trans", and when the angle is 180 degrees, the state is labelled as "cis".
References
Biophysics | Flory convention | [
"Physics",
"Biology"
] | 96 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
12,045,475 | https://en.wikipedia.org/wiki/T-box | T-box refers to a group of transcription factors involved in embryonic limb and heart development. Every T-box protein has a relatively large DNA-binding domain, generally comprising about a third of the entire protein that is both necessary and sufficient for sequence-specific DNA binding. All members of the T-box gene family bind to the "T-box", a DNA consensus sequence of TCACACCT.
Members
T-boxes are especially important to the development of embryos; they were found in zebrafish oocytes by Bruce et al. (2003) and in Xenopus laevis oocytes by Xanthos et al. (2001). They are also expressed at later stages, including in adult mouse and rabbit, as studied by Szabo et al. (2000).
Mutations in the first one found caused short tails in mice, and thus the protein encoded was named brachyury, Greek for "short-tail". In mice this gene is named Tbxt, and in humans it is named TBXT. Brachyury has been found in all bilaterian animals that have been screened, and is also present in the cnidaria.
The mouse Tbxt gene was cloned and found to be a 436 amino acid embryonic nuclear transcription factor. The protein brachyury binds to the T-box through a region at its N-terminus.
Protein activity
The proteins encoded by TBX5 and TBX4 play a role in limb development, and play a major role in limb bud initiation specifically. For instance, in chickens TBX4 specifies hindlimb status while Tbx5 specifies forelimb status. The activation of these proteins by Hox genes initiates signaling cascades that involve the Wnt signaling pathway and FGF signals in limb buds. Ultimately, TBX4 and TBX5 lead to the development of apical ectodermal ridge (AER) and zone of polarizing activity (ZPA) signaling centers in the developing limb bud, which specify the orientation and growth of the developing limb. Together, TBX5 and TBX4 play a role in patterning the soft tissues (muscles and tendons) of the musculoskeletal system.
Defects
In humans, and some other animals, defects in the TBX5 gene expression are responsible for Holt–Oram syndrome, which is characterized by at least one abnormal wrist bone. Other arm bones are almost always affected, though the severity can vary widely, from complete absence of a bone, to only a reduction in bone length. Seventy-five percent of affected individuals also have heart defects, most often there is no separation between the left and right ventricle of the heart.
TBX3 is associated with ulnar–mammary syndrome in humans, but is also responsible for the presence or absence of dun color in horses, and has no deleterious effects whether expressed or not.
T-box genes
Genes encoding T-box proteins include:
TBXT () the first found (in mice)
TBR1 ()
TBX1 ()
TBX2 ()
TBX3 ()
TBX4 ()
TBX5 ()
TBX6 ()
TBX10 ()
TBX15 ()
TBX18 ()
TBX19 ()
TBX20 ()
TBX21 ()
TBX22 ()
See also
Brachyury
Homeobox
Development genes in plants: G-box, I-box, W-box, Z-box
References
Further reading
External links
Transcription factors | T-box | [
"Chemistry",
"Biology"
] | 707 | [
"Protein stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Transcription factors"
] |
12,046,704 | https://en.wikipedia.org/wiki/Phosphatidylglycerol | Phosphatidylglycerol is a glycerophospholipid found in pulmonary surfactant and in the plasma membrane where it directly activates lipid-gated ion channels.
The general structure of phosphatidylglycerol consists of a L-glycerol 3-phosphate backbone ester-bonded to either saturated or unsaturated fatty acids on carbons 1 and 2. The head group substituent glycerol is bonded through a phosphomonoester. It is the precursor of surfactant and its presence (>0.3) in the amniotic fluid of the newborn indicates fetal lung maturity.
Approximately 98% of alveolar wall surface area is due to the presence of type I cells, with type II cells producing pulmonary surfactant covering around 2% of the alveolar walls. Once surfactant is secreted by the type II cells, it must be spread over the remaining type I cellular surface area. Phosphatidylglycerol is thought to be important in spreading of surfactant over the Type I cellular surface area. The major surfactant deficiency in premature infants relates to the lack of phosphatidylglycerol, even though it comprises less than 5% of pulmonary surfactant phospholipids. It is synthesized by head group exchange of a phosphatidylcholine enriched phospholipid using the enzyme phospholipase D.
Biosynthesis
Phosphatidylglycerol (PG) is formed via a complex sequential pathway whereby phosphatidic acid (PA) is first converted to CDP-diacylglyceride by the enzyme CDP-diacylglyceride synthase. Then a PGP synthase enzyme exchanges glycerol-3-phosphate (G3P) for cytidine monophosphate (CMP), forming the temporary intermediate phosphatidylglycerolphosphate (PGP). PG is finally synthesized when a PGP phosphatase enzyme catalyzes the immediate dephosphorylation of the PGP intermediate to form PG. In bacteria, another membrane phospholipid known as cardiolipin can be synthesized by condensing two molecules of phosphatidylglycerol, a reaction catalyzed by the enzyme cardiolipin synthase. In eukaryotic mitochondria phosphatidylglycerol is converted to cardiolipin by reacting with a molecule of cytidine diphosphate diglyceride in a reaction catalyzed by cardiolipin synthase.
See also
Glycerol
Cardiolipin
Lipid-gated ion channels
References
Hostetler KY, van den Bosch H, van Deenen LL. The mechanism of cardiolipin biosynthesis in liver mitochondria. Biochim Biophys Acta. 1972 Mar 23;260(3):507-13. . PMID 4556770.
External links
Phospholipids
Membrane biology | Phosphatidylglycerol | [
"Chemistry"
] | 663 | [
"Phospholipids",
"Molecular biology",
"Membrane biology",
"Signal transduction"
] |
12,052,294 | https://en.wikipedia.org/wiki/Electronic%20differential | In automotive engineering the electronic differential is a form of differential, which provides the required torque for each driving wheel and allows different wheel speeds. It is used in place of the mechanical differential in multi-drive systems. When cornering, the inner and outer wheels rotate at different speeds, because the inner wheels describe a smaller turning radius. The electronic differential uses the steering wheel command signal and the motor speed signals to control the power to each wheel so that all wheels are supplied with the torque they need.
Functional description
The classical automobile drivetrain is composed of a single internal combustion engine providing torque to one or more driving wheels. The most common solution is to use a mechanical device to distribute torque to the wheels. This mechanical differential allows different wheel speeds when cornering. With the emergence of electric vehicles new drive train configurations are possible. Multi-drive systems become easy to implement due to the large power density of electric motors. These systems, usually with one motor per driving wheel, need an additional top level controller which performs the same task as a mechanical differential.
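As a rough illustration of what such a top-level controller has to compute, the sketch below derives left and right wheel speed references from a single vehicle speed and steering angle using a simple Ackermann-style geometry; the sign convention, parameter values and function name are illustrative assumptions, and a real controller would additionally handle torque distribution, wheel slip and safety limits. Dividing the results by the wheel radius gives the angular speed references that would be sent to each motor controller.

```python
import math

def wheel_speed_references(v, delta, wheelbase, track):
    """Left/right wheel speed references [m/s] for vehicle speed v [m/s] and
    steering angle delta [rad]; positive delta is taken to mean a left turn."""
    if abs(delta) < 1e-9:
        return v, v                             # driving straight: equal speeds
    R = wheelbase / math.tan(abs(delta))        # turning radius of the vehicle centreline
    v_inner = v * (R - track / 2.0) / R         # inner wheel follows a tighter arc
    v_outer = v * (R + track / 2.0) / R         # outer wheel follows a wider arc
    return (v_inner, v_outer) if delta > 0 else (v_outer, v_inner)

# Example: 15 m/s, 10 degrees of left steering, 2.5 m wheelbase, 1.5 m track width.
left, right = wheel_speed_references(15.0, math.radians(10), 2.5, 1.5)
print(f"left wheel {left:.2f} m/s, right wheel {right:.2f} m/s")
```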
The ED scheme has several advantages over a mechanical differential:
simplicity - it avoids additional mechanical parts such as a gearbox or clutch;
independent torque for each wheel allows additional capabilities (e.g., traction control, stability control);
reconfigurable - it is reprogrammable in order to include new features or tuned according to the driver’s preferences;
allows distributed regenerative braking;
the torque is not limited by the wheel with least traction, as it is with a mechanical differential.
faster response times;
accurate knowledge of traction torque per wheel.
However, the ED scheme also comes with a number of disadvantages and drawbacks:
errors and glitches are prone to happen, giving inaccurate readings and outputs compared to a conventional differential; these can result in the wheels receiving either too much or too little power and torque.
increased premature tyre wear due to such inaccurate readings and outputs.
higher cost to manufacture and maintain the electronic systems.
Applications
Several applications of this technology have proven successful and have increased vehicle performance. The application range is wide and includes the huge T 282B from Liebherr, which is considered the world's largest truck. This earth-hauling truck is driven by an electric propulsion system composed of two independent electric motors. These motors, providing a maximum power of 2700 kW, are controlled so as to adjust their speeds when cornering, thus increasing traction and reducing tire wear.
The Eliica is also equipped with electronic differential; this eight-wheeled electric vehicle is capable of driving up to 370 km/h whilst maintaining perfect torque control on each wheel. Smaller vehicles for traction purposes and System on Chip controllers for generic vehicular applications are also available.
References
Automotive transmission technologies
Vehicle technology
Mechanical power control | Electronic differential | [
"Physics",
"Engineering"
] | 556 | [
"Vehicle technology",
"Mechanics",
"Mechanical power control",
"Mechanical engineering by discipline"
] |
12,054,908 | https://en.wikipedia.org/wiki/Layered%20double%20hydroxides | Layered double hydroxides (LDH) are a class of ionic solids characterized by a layered structure with the generic layer sequence [AcB Z AcB]n, where c represents layers of metal cations, A and B are layers of hydroxide () anions, and Z are layers of other anions and neutral molecules (such as water). Lateral offsets between the layers may result in longer repeating periods.
The intercalated anions (Z) are weakly bound, often exchangeable; their intercalation properties have scientific interest and industrial applications.
LDHs occur in nature as minerals, as byproducts of metabolism of certain bacteria, and also unintentionally in man-made contexts, such as the products of corrosion of metal objects.
Structure and formulas
LDHs can be seen as derived from hydroxides of divalent cations (d) with the brucite (Mg(OH)2) layer structure [AdB AdB]n, by cation (c) replacement (Mg2+ → Al3+), or by cation oxidation (Fe2+ → Fe3+ in the case of green rust, Fe(OH)2), in the metallic divalent (d) cation layers, so as to give them an excess positive electric charge; and intercalation of extra anion layers (Z) between the hydroxide layers (A,B) to neutralize that charge, resulting in the structure [AcB Z AcB]n. LDHs can be formed with a wide variety of anions in the intercalated layers (Z), such as Cl−, Br−, NO, CO, SO and SeO.
This structure is unusual in solid-state chemistry, since many materials with similar structure (such as montmorillonite and other clay minerals) have negatively charged main metal layers (c) and positive ions in the intercalated layers (Z).
In the most studied class of LDHs, the positive layer (c) consists of divalent and trivalent cations, and can be represented by the generic formula:
[M2+1−x M3+x (OH)2]x+ [(Xn−)x/n · y H2O]x−,
where Xn− is the intercalating anion compensating the excess of positive charge (x+) present in the metal hydroxide layer.
Most commonly, M2+ = Ca2+, Mg2+, Mn2+, Fe2+, Co2+, Ni2+, Cu2+ or Zn2+, and M3+ is another trivalent cation (Al3+, Cr3+), or possibly of the same element as in the case of green rust with Fe3+. Fixed-composition phases have been shown to exist over the range 0.2 ≤ x ≤ 0.33. However, phases with variable x are also known, and in some cases, x > 0.5.
Another class of Li/Al LDH is known where the main metal layer (c) consists of Li+ and Al3+ cations in a molar ratio Li:Al = 1:2, so that the metal hydroxide layer only bears one unit of positive charge in excess, with the generic formula:
.
In some cases, the pH value of the solution used during the synthesis and the high drying temperature of the LDH can eliminate the presence of the OH− groups in the LDH. For example, in the synthesis of the (BiO)4(OH)2CO3 compound, a low pH value of the aqueous solution or higher annealing temperature of the solid can induce the formation of (BiO)2CO3, which is thermodynamically more stable than the LDH compound, by exchanging OH− groups by CO32– groups.
Applications
The anions located in the interlayer regions can be replaced easily, in general. A wide variety of anions may be incorporated, ranging from simple inorganic anions (e.g. CO) through organic anions (e.g. benzoate, succinate) to complex biomolecules, including DNA. This has led to an intense interest in the use of LDH intercalates for advanced applications. Drug molecules such as ibuprofen may be intercalated; the resulting nanocomposites have potential for use in controlled release systems, which could reduce the frequency of doses of medication needed to treat a disorder. Further effort has been expended on the intercalation of agrochemicals, such as the chlorophenoxyacetates, and important organic synthons, such as terephthalate and nitrophenols. Agrochemical intercalates are of interest because of the potential to use LDHs to remove agrochemicals from polluted water, reducing the likelihood of eutrophication.
LDHs exhibit shape-selective intercalation properties. For instance, treating LiAl2-Cl with a 50:50 mixture of terephthalate (1,4-benzenedicarboxylate) and phthalate (1,2-benzenedicarboxylate) results in intercalation of the 1,4-isomer with almost 100% preference. The selective intercalation of ions such as benzenedicarboxylates and nitrophenols has importance because these are produced in isomeric mixtures from crude oil residues, and it is often desirable to isolate a single form, for instance in the production of polymers.
LDH-TiO2 intercalates are used in suspensions for self-cleaning of surfaces (especially for materials in cultural heritage), because of photo-catalytic properties of TiO2 and good compatibility of LDHs with inorganic materials.
Minerals
Naturally occurring (i.e., mineralogical) examples of LDH are classified as members of the hydrotalcite supergroup, named after the Mg-Al carbonate hydrotalcite, which is the longest-known example of a natural LDH phase. More than 40 mineral species are known to fall within this supergroup. The dominant divalent cations, M2+, that have been reported in hydrotalcite supergroup minerals are: Mg, Ca, Mn, Fe, Ni, Cu and Zn; the dominant trivalent cations, M3+, are: Al, Mn, Fe, Co and Ni. The most common intercalated anions are [CO3]2−, [SO4]2− and Cl−; OH−, S2− and [Sb(OH)6]− have also been reported. Some species contain intercalated cationic or neutral complexes such as [Na(H2O)6]+ or [MgSO4]0. The International Mineralogical Association's 2012 report on hydrotalcite supergroup nomenclature defines eight groups within the supergroup on the basis of a combination of criteria. These groups are:
the hydrotalcite group, with M2+:M3+ = 3:1 (layer spacing ~7.8 Å);
the quintinite group, with M2+:M3+ = 2:1 (layer spacing ~7.8 Å);
the fougèrite group of natural 'green rust' phases, with M2+ = Fe2+, M3+ = Fe3+ in a range of ratios, and with O2− replacing OH− in the brucite module to maintain charge balance (layer spacing ~7.8 Å);
the woodwardite group, with variable M2+:M3+ and interlayer [SO4]2−, leading to an expanded layer spacing of ~8.9 Å;
the cualstibite group, with interlayer [Sb(OH)6]− and a layer spacing of ~9.7 Å;
the glaucocerinite group, with interlayer [SO4]2− as in the woodwardite group, and with additional interlayer H2O molecules that further expand the layer spacing to ~11 Å;
the wermlandite group, with a layer spacing of ~11 Å, in which cationic complexes occur with anions between the brucite-like layers; and
the hydrocalumite group, with M2+ = Ca2+ and M3+ = Al3+, which contains brucite-like layers in which the Ca:Al ratio is 2:1 and the large cation, Ca2+, is coordinated to a seventh ligand of 'interlayer' water.
The IMA Report also presents a concise systematic nomenclature for synthetic LDH phases that are not eligible for a mineral name. This uses the prefix LDH, and characterises components by the numbers of the octahedral cation species in the chemical formula, the interlayer anion, and the Ramsdell polytype symbol (number of layers in the repeat of the structure, and crystal system). For example, the 3R polytype of Mg6Al2(OH)16(CO3)·4H2O (hydrotalcite sensu stricto) is described by "LDH 6Mg2Al·CO3-3R". This simplified nomenclature does not capture all the possible types of structural complexity in LDH materials. Elsewhere, the Report discusses examples of:
long-range order of different cations within a brucite-like layer, which may produce sharp superstructure peaks in diffraction patterns and a and b periodicities that are multiples of the basic 3 Å repeat, or short-range order producing diffuse scattering;
the wide variety of c periodicities that can occur due to relative displacements or rotations of the brucite-like layers, producing multiple polytypes with the same compositions, intergrowths of polytypes and variable degrees of stacking disorder;
different periodicities arising from order of different interlayer species, either within an interlayer or by alternation of different anion types from interlayer to interlayer.
See also
Maalox, magnesium-aluminium oxide used as antacid
References
External links
LDH, DNA and Hydrothermal Vents – Science Daily
Mindat entry for Hydrotalcite Supergroup
Materials
Minerals
Hydroxides
Antacids | Layered double hydroxides | [
"Physics"
] | 2,108 | [
"Materials",
"Bases (chemistry)",
"Hydroxides",
"Matter"
] |
12,055,125 | https://en.wikipedia.org/wiki/Bombieri%20norm | In mathematics, the Bombieri norm, named after Enrico Bombieri, is a norm on homogeneous polynomials with coefficient in or (there is also a version for non homogeneous univariate polynomials). This norm has many remarkable properties, the most important being listed in this article.
Bombieri scalar product for homogeneous polynomials
To start with the geometry, the Bombieri scalar product for homogeneous polynomials with N variables can be defined as follows using multi-index notation:
by definition different monomials are orthogonal, so that
if
while by definition
In the above definition and in the rest of this article the following notation applies:
if write and and
Bombieri inequality
The fundamental property of this norm is the Bombieri inequality:
let be two homogeneous polynomials respectively of degree and with variables, then, the following inequality holds:
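In its commonly stated form, with deg P and deg Q the two degrees and ‖·‖ the Bombieri norm, the inequality reads

$$ \sqrt{\frac{(\deg P)!\,(\deg Q)!}{(\deg P+\deg Q)!}}\;\|P\|\,\|Q\| \;\le\; \|P\cdot Q\| \;\le\; \|P\|\,\|Q\|, $$

where the precise normalisation of the norm varies slightly between authors.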
Here the Bombieri inequality is the left hand side of the above statement, while the right side means that the Bombieri norm is an algebra norm. Giving the left hand side is meaningless without that constraint, because in this case, we can achieve the same result with any norm by multiplying the norm by a well chosen factor.
This multiplicative inequality implies that the product of two polynomials is bounded from below by a quantity that depends on the multiplicand polynomials. Thus, this product can not be arbitrarily small. This multiplicative inequality is useful in metric algebraic geometry and number theory.
Invariance by isometry
Another important property is that the Bombieri norm is invariant by composition with an
isometry:
let be two homogeneous polynomials of degree with variables and let be an isometry
of (or ). Then we have . When this implies .
This result follows from a nice integral formulation of the scalar product:
where is the unit sphere of with its canonical measure .
Other inequalities
Let be a homogeneous polynomial of degree with variables and let . We have:
where denotes the Euclidean norm.
The Bombieri norm is useful in polynomial factorization, where it has some advantages over the Mahler measure, according to Knuth (Exercises 20-21, pages 457-458 and 682-684).
See also
Grassmann manifold
Hardy space
Homogeneous polynomial
Plücker embedding
References
Norms (mathematics)
Analytic number theory
Polynomials
Homogeneous polynomials
Complex analysis
Several complex variables | Bombieri norm | [
"Mathematics"
] | 467 | [
"Functions and mappings",
"Mathematical analysis",
"Analytic number theory",
"Several complex variables",
"Polynomials",
"Mathematical objects",
"Number theory",
"Mathematical relations",
"Norms (mathematics)",
"Algebra"
] |
176,622 | https://en.wikipedia.org/wiki/Degrees%20of%20freedom | In many scientific fields, the degrees of freedom of a system is the number of parameters of the system that may vary independently. For example, a point in the plane has two degrees of freedom for translation: its two coordinates; a non-infinitesimal object on the plane might have additional degrees of freedoms related to its orientation.
In mathematics, this notion is formalized as the dimension of a manifold or an algebraic variety. When degrees of freedom is used instead of dimension, this usually means that the manifold or variety that models the system is only implicitly defined.
See:
Degrees of freedom (mechanics), number of independent motions that are allowed to the body or, in case of a mechanism made of several bodies, number of possible independent relative motions between the pieces of the mechanism
Degrees of freedom (physics and chemistry), a term used in explaining dependence on parameters, or the dimensions of a phase space
Degrees of freedom (statistics), the number of values in the final calculation of a statistic that are free to vary
Degrees of freedom problem, the problem of controlling motor movement given abundant degrees of freedom
See also
Six degrees of freedom
Dimension
Broad-concept articles | Degrees of freedom | [
"Physics"
] | 234 | [
"Geometric measurement",
"Dimension",
"Physical quantities",
"Theory of relativity"
] |
176,695 | https://en.wikipedia.org/wiki/Audio%20feedback | Audio feedback (also known as acoustic feedback, simply as feedback) is a positive feedback situation that may occur when an acoustic path exists between an audio output (for example, a loudspeaker) and its audio input (for example, a microphone or guitar pickup). In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting howl is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. The principles of audio feedback were first discovered by Danish scientist Søren Absalon Larsen, hence it is also known as the Larsen effect.
Feedback is almost always considered undesirable when it occurs with a singer's or public speaker's microphone at an event using a sound reinforcement system or PA system. Audio engineers typically use directional microphones with cardioid pickup patterns and various electronic devices, such as equalizers and, since the 1990s, automatic feedback suppressors, to prevent feedback, which detracts from the audience's enjoyment of the event and may damage equipment or hearing.
Since the 1960s, electric guitar players in rock music bands using loud guitar amplifiers, speaker cabinets and distortion effects have intentionally created guitar feedback to create different sounds including long sustained tones that cannot be produced using standard playing techniques. The sound of guitar feedback is considered to be a desirable musical effect in heavy metal music, hardcore punk and grunge. Jimi Hendrix was an innovator in the intentional use of guitar feedback in his guitar solos to create unique musical sounds.
History and theory
The conditions for feedback follow the Barkhausen stability criterion, namely that, with sufficiently high gain, a stable oscillation can (and usually will) occur in a feedback loop whose frequency is such that the phase delay is an integer multiple of 360 degrees and the gain at that frequency is equal to 1. If the small-signal gain is greater than 1 for some frequency then the system will start to oscillate at that frequency because noise at that frequency will be amplified. Sound will be produced without anyone actually playing. The sound level will increase until the output starts clipping, reducing the loop gain to exactly unity. This is the principle upon which electronic oscillators are based; in that case, although the feedback loop is purely electronic, the principle is the same. If the gain is large but slightly less than 1, then ringing will be introduced, but only when at least some input sound is already being sent through the system.
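The growth-until-clipping behaviour described here is easy to reproduce numerically. The sketch below models the acoustic path as a pure delay line and the amplifier as a gain followed by a hard clipper; all numbers are illustrative, and the model captures only the growth of the loop amplitude, not the pitch of the resulting howl.

```python
def feedback_loop(gain, delay=100, round_trips=100, clip=1.0, seed=1e-3):
    """Amplitude left circulating in the loop after `round_trips` passes."""
    buf = [0.0] * delay                      # the acoustic path: a pure delay line
    buf[0] = seed                            # a tiny burst of noise picked up by the mic
    for n in range(delay * round_trips):
        out = gain * buf[n % delay]          # amplifier applies the loop gain
        out = max(-clip, min(clip, out))     # output stage clips at +/- clip
        buf[n % delay] = out                 # speaker output travels back to the mic
    return max(abs(v) for v in buf)

print("loop gain 0.8:", feedback_loop(0.8))  # noise dies away (result ~0)
print("loop gain 1.2:", feedback_loop(1.2))  # grows until clipping pins it at 1.0
```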
Early academic work on acoustical feedback was done by Dr. C. Paul Boner. Boner was responsible for establishing basic theories of acoustic feedback, room-ring modes, and room-sound system equalizing techniques. Boner reasoned that when feedback happened, it did so at one precise frequency. He also reasoned that it could be stopped by inserting a very narrow notch filter at that frequency in the loudspeaker's signal chain. He worked with Gifford White, founder of White Instruments to hand craft notch filters for specific feedback frequencies in specific rooms.
Distance
To maximize gain before feedback, the amount of sound energy that is fed back to the microphones must be reduced as much as is practical. As sound pressure falls off with 1/r with respect to the distance r in free space, or up to a distance known as reverberation distance in closed spaces (and the energy density with 1/r²), it is important to keep the microphones at a large enough distance from the speaker systems. As well, microphones should not be positioned in front of speakers and individuals using mics should be asked to avoid pointing the microphone at speaker enclosures.
Directivity
Additionally, the loudspeakers and microphones should have non-uniform directivity and should stay out of the maximum sensitivity of each other, ideally in a direction of cancellation. Public address speakers often achieve directivity in the mid and treble region (and good efficiency) via horn systems. Sometimes the woofers have a cardioid characteristic.
Professional setups circumvent feedback by placing the main speakers away from the band or artist, and then having several smaller speakers known as monitors pointing back at each band member, but in the opposite direction to that in which the microphones are pointing taking advantage of microphones with a cardioid pickup pattern which are common in sound reinforcement applications. This configuration reduces the opportunities for feedback and allows independent control of the sound pressure levels for the audience and the performers.
Frequency response
Almost always, the natural frequency response of a sound reinforcement system is not ideally flat, and this leads to acoustical feedback at the frequency with the highest loop gain, which may be a resonance with a gain much higher than the average over all frequencies. It is therefore helpful to apply some form of equalization to reduce the gain at this frequency.
Feedback can be reduced manually by ringing out a sound system prior to a performance. The sound engineer can increase the level of a microphone until feedback occurs. The engineer can then attenuate the relevant frequency on an equalizer preventing feedback at that frequency but allowing sufficient volume at other frequencies. Many professional sound engineers can identify feedback frequencies by ear but others use a real-time analyzer to identify the ringing frequency.
To avoid feedback, an automatic feedback suppressor can be used. Some of these work by shifting the frequency slightly, with this upshift resulting in a chirp sound instead of the howling sound of unaddressed feedback. Other devices use sharp notch filters to filter out offending frequencies. Adaptive algorithms are often used to automatically tune these notch filters.
Deliberate uses
To intentionally create feedback, an electric guitar player needs a guitar amplifier with very high gain (amplification) or the guitar brought near the speaker. The guitarist then allows the strings to vibrate freely and brings the guitar close to the loudspeaker of the guitar amp. The use of distortion effects units adds additional gain and facilitates the creation of intentional feedback.
Early examples in popular music
A deliberate use of acoustic feedback was pioneered by blues and rock and roll guitarists such as Willie Johnson, Johnny Watson and Link Wray. According to AllMusic's Richie Unterberger, the very first use of feedback on a commercial rock record is the introduction of the song "I Feel Fine" by the Beatles, recorded in 1964. Jay Hodgson agrees that this feedback created by John Lennon leaning a semi-acoustic guitar against an amplifier was the first chart-topper to showcase feedback distortion. The Who's 1965 hits "Anyway, Anyhow, Anywhere" and "My Generation" featured feedback manipulation by Pete Townshend, with an extended solo in the former and the shaking of his guitar in front of the amplifier to create a throbbing noise in the latter. Canned Heat's "Fried Hockey Boogie" also featured guitar feedback produced by Henry Vestine during his solo to create a highly amplified distorted boogie style of feedback. In 1963, the teenage Brian May and his father custom-built his signature guitar Red Special, which was purposely designed to feed back.
Feedback was used extensively after 1965 by the Monks, Jefferson Airplane, the Velvet Underground and the Grateful Dead, who included in many of their live shows a segment named Feedback, a several-minute long feedback-driven improvisation. Feedback has since become a striking characteristic of rock music, as electric guitar players such as Jeff Beck, Pete Townshend, Dave Davies, Steve Marriott and Jimi Hendrix deliberately induced feedback by holding their guitars close to the amplifier's speaker. An example of feedback can be heard on Hendrix's performance of "Can You See Me?" at the Monterey Pop Festival. The entire guitar solo was created using amplifier feedback. Jazz guitarist Gábor Szabó was one of the earliest jazz musicians to use controlled feedback in his music, which is prominent on his live album The Sorcerer (1967). Szabó's method included the use of a flat-top acoustic guitar with a magnetic pickup. Lou Reed created his album Metal Machine Music (1975) entirely from loops of feedback played at various speeds.
Introductions, transitions, and fade-outs
In addition to "I Feel Fine", feedback was used on the introduction to songs including Jimi Hendrix's "Foxy Lady", the Beatles' "It's All Too Much", Hendrix's "Crosstown Traffic", David Bowie's "Little Wonder", the Strokes's "New York City Cops", Ben Folds Five's "Fair", Midnight Juggernauts's "Road to Recovery", Nirvana's "Radio Friendly Unit Shifter", the Jesus and Mary Chain's "Tumbledown" and "Catchfire", the Stone Roses's "Waterfall", Porno for Pyros's "Tahitian Moon", Tool's "Stinkfist", and the Cure's "Prayer For Rain". Examples of feedback combined with a quick volume swell used as a transition include Weezer's "My Name Is Jonas" and "Say It Ain't So"; The Strokes' "Reptilia", "New York City Cops", and "Juicebox"; Dream Theater's "As I Am"; as well as numerous tracks by Meshuggah and Tool.
Cacophonous feedback fade-outs ending a song are most often used to generate rather than relieve tension, often cross-faded too after a thematic and musical release. Examples include Modwheelmood's remix of Nine Inch Nail's "The Great Destroyer"; and the Jesus and Mary Chain's "Teenage Lust", "Tumbledown", "Catchfire", "Sundown", and "Frequency".
Examples in modern classical music
Though closed circuit feedback was a prominent feature in many early experimental electronic music compositions, intentional acoustic feedback as sound material gained more prominence with compositions such as John Cage's Variations II (1961) performed by David Tudor and Robert Ashley's The Wolfman (1964). Steve Reich makes extensive use of audio feedback in his work Pendulum Music (1968) by swinging a series of microphones back and forth in front of their corresponding amplifiers. Hugh Davies and Alvin Lucier both use feedback in their works. Roland Kayn based much of his compositional oeuvre, which he termed "cybernetic music," on audio systems incorporating feedback. More recent examples can be found in the work of, for example, Lara Stanic, Paul Craenen, Anne Wellmer, Adam Basanta, Lesley Flanigan, Ronald Boersen and Erfan Abdi.
Pitched feedback
Pitched melodies may be created entirely from feedback by changing the angle between a guitar and amplifier after establishing a feedback loop. Examples include Tool's "Jambi", Robert Fripp's guitar on David Bowie's "Heroes" (album version), and Jimi Hendrix's "Third Stone from the Sun" and his live performance of "Wild Thing" at the Monterey Pop Festival.
Regarding Fripp's work on "Heroes":
Contemporary uses
Audio feedback became a signature feature of many underground rock bands during the 1980s. American noise-rockers Sonic Youth melded the rock-feedback tradition with a compositional and classical approach (notably covering Reich's "Pendulum Music"), and guitarist/producer Steve Albini's group Big Black also worked controlled feedback into the makeup of their songs. With the alternative rock movement of the 1990s, feedback again saw a surge in popular usage by suddenly mainstream acts like Nirvana, the Red Hot Chili Peppers, Rage Against the Machine and the Smashing Pumpkins. The use of the "no-input-mixer" method for sound generation by feeding a mixing console back into itself has been adopted in experimental electronic and noise music by practitioners such as Toshimaru Nakamura.
Devices
The principle of feedback is used in many guitar sustain devices. Examples include handheld devices like the EBow, built-in guitar pickups that increase the instrument's sonic sustain, and sonic transducers mounted on the head of a guitar. Intended closed-circuit feedback can also be created by an effects unit, such as a delay pedal or effect fed back into a mixing console. The feedback can be controlled by using the fader to determine a volume level. The Boss DF-2 Super Feedbacker and Distortion pedal is an electronic effect unit that helps electric guitarists create feedback effects. The halldorophone is an electro-acoustic string instrument specifically made to work with string based feedback.
See also
Circuit bending
Comb filter
Echo suppression and cancellation
Video feedback
References
External links
Audio effects
Audio electronics
Rock music
Guitar performance techniques
Feedback
| Audio feedback | [
"Engineering"
] | 2,638 | [
"Audio electronics",
"Audio engineering"
] |
177,089 | https://en.wikipedia.org/wiki/Atomic%20energy | Atomic energy or energy of atoms is energy carried by atoms. The term originated in 1903 when Ernest Rutherford began to speak of the possibility of atomic energy. H. G. Wells popularized the phrase "splitting the atom", before discovery of the atomic nucleus.
Atomic energy includes:
Nuclear binding energy, the energy required to split a nucleus of an atom.
Nuclear potential energy, the potential energy of the particles inside an atomic nucleus.
Nuclear reaction, a process in which nuclei or nuclear particles interact, resulting in products different from the initial ones; see also nuclear fission and nuclear fusion.
Radioactive decay, the set of various processes by which unstable atomic nuclei (nuclides) emit subatomic particles.
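As a worked numerical example of nuclear binding energy, the following sketch estimates the binding energy of helium-4 from its mass defect; the masses are standard tabulated values quoted approximately here and should be treated as illustrative inputs.

    # Binding energy of helium-4 from its mass defect (illustrative, approximate values).
    U_TO_MEV = 931.494        # energy equivalent of one atomic mass unit (u), in MeV
    m_H1  = 1.007825          # mass of a hydrogen-1 atom, u
    m_n   = 1.008665          # mass of a free neutron, u
    m_He4 = 4.002602          # mass of a helium-4 atom, u

    mass_defect = 2 * m_H1 + 2 * m_n - m_He4     # about 0.0304 u
    binding_energy = mass_defect * U_TO_MEV      # about 28.3 MeV, i.e. roughly 7.1 MeV per nucleon

    print(f"mass defect   : {mass_defect:.6f} u")
    print(f"binding energy: {binding_energy:.1f} MeV")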
Atomic energy is the source of nuclear power, which uses sustained nuclear fission to generate heat and electricity. It is also the source of the explosive force of an atomic bomb.
See also
Atomic Age
Index of environmental articles
References
Forms of energy
Nuclear energy | Atomic energy | [
"Physics",
"Chemistry"
] | 190 | [
"Physical quantities",
"Forms of energy",
"Energy (physics)",
"Nuclear energy",
"Nuclear physics",
"Radioactivity"
] |
177,320 | https://en.wikipedia.org/wiki/Spectral%20line | A spectral line is a weaker or stronger region in an otherwise uniform and continuous spectrum. It may result from emission or absorption of light in a narrow frequency range, compared with the nearby frequencies. Spectral lines are often used to identify atoms and molecules. These "fingerprints" can be compared to the previously collected ones of atoms and molecules, and are thus used to identify the atomic and molecular components of stars and planets, which would otherwise be impossible.
Types of line spectra
Spectral lines are the result of interaction between a quantum system (usually atoms, but sometimes molecules or atomic nuclei) and a single photon. When a photon has about the right amount of energy (which is connected to its frequency) to allow a change in the energy state of the system (in the case of an atom this is usually an electron changing orbitals), the photon is absorbed. Then the energy will be spontaneously re-emitted, either as one photon at the same frequency as the original one or in a cascade, where the sum of the energies of the photons emitted will be equal to the energy of the one absorbed (assuming the system returns to its original state).
A spectral line may be observed either as an emission line or an absorption line. Which type of line is observed depends on the type of material and its temperature relative to another emission source. An absorption line is produced when photons from a hot, broad spectrum source pass through a cooler material. The intensity of light, over a narrow frequency range, is reduced due to absorption by the material and re-emission in random directions. By contrast, a bright emission line is produced when photons from a hot material are detected, perhaps in the presence of a broad spectrum from a cooler source. The intensity of light, over a narrow frequency range, is increased due to emission by the hot material.
Spectral lines are highly atom-specific, and can be used to identify the chemical composition of any medium. Several elements, including helium, thallium, and caesium, were discovered by spectroscopic means. Spectral lines also depend on the temperature and density of the material, so they are widely used to determine the physical conditions of stars and other celestial bodies that cannot be analyzed by other means.
Depending on the material and its physical conditions, the energy of the involved photons can vary widely, with the spectral lines observed across the electromagnetic spectrum, from radio waves to gamma rays.
Nomenclature
Strong spectral lines in the visible part of the electromagnetic spectrum often have a unique Fraunhofer line designation, such as K for a line at 393.366 nm emerging from singly-ionized calcium atom, Ca+, though some of the Fraunhofer "lines" are blends of multiple lines from several different species.
In other cases, the lines are designated according to the level of ionization by adding a Roman numeral to the designation of the chemical element. Neutral atoms are denoted with the Roman numeral I, singly ionized atoms with II, and so on, so that, for example:
Cu II copper ion with +1 charge, Cu1+
Fe III iron ion with +2 charge, Fe2+
More detailed designations usually include the line wavelength and may include a multiplet number (for atomic lines) or band designation (for molecular lines). Many spectral lines of atomic hydrogen also have designations within their respective series, such as the Lyman series or Balmer series. Originally all spectral lines were classified into series: the principal series, sharp series, and diffuse series. These series exist across atoms of all elements, and the patterns for all atoms are well-predicted by the Rydberg-Ritz formula. These series were later associated with suborbitals.
Line broadening and shift
There are a number of effects which control spectral line shape. A spectral line extends over a tiny spectral band with a nonzero range of frequencies, not a single frequency (i.e., a nonzero spectral width). In addition, its center may be shifted from its nominal central wavelength. There are several reasons for this broadening and shift. These reasons may be divided into two general categories – broadening due to local conditions and broadening due to extended conditions. Broadening due to local conditions is due to effects which hold in a small region around the emitting element, usually small enough to assure local thermodynamic equilibrium. Broadening due to extended conditions may result from changes to the spectral distribution of the radiation as it traverses its path to the observer. It also may result from the combining of radiation from a number of regions which are far from each other.
Broadening due to local effects
Natural broadening
The lifetime of excited states results in natural broadening, also known as lifetime broadening. The uncertainty principle relates the lifetime of an excited state (due to spontaneous radiative decay or the Auger process) with the uncertainty of its energy.
Some authors use the term "radiative broadening" to refer specifically to the part of natural broadening caused by the spontaneous radiative decay.
A short lifetime will have a large energy uncertainty and a broad emission. This broadening effect results in an unshifted Lorentzian profile. The natural broadening can be experimentally altered only to the extent that decay rates can be artificially suppressed or enhanced.
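As a rough quantitative sketch (a standard textbook relation, with an illustrative lifetime chosen here rather than taken from the text): a state of lifetime τ has an energy uncertainty ΔE ≈ ħ/τ, giving a Lorentzian full width at half maximum

    \Delta\nu_{\mathrm{nat}} \simeq \frac{1}{2\pi\tau}

so a lifetime of order 16 ns (comparable to the sodium D-line excited state) corresponds to a natural width of only about 10 MHz, tiny compared with the optical frequency of roughly 5 × 10^14 Hz.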
Thermal Doppler broadening
The atoms in a gas which are emitting radiation will have a distribution of velocities. Each photon emitted will be "red"- or "blue"-shifted by the Doppler effect depending on the velocity of the atom relative to the observer. The higher the temperature of the gas, the wider the distribution of velocities in the gas. Since the spectral line is a combination of all of the emitted radiation, the higher the temperature of the gas, the broader the spectral line emitted from that gas. This broadening effect is described by a Gaussian profile and there is no associated shift.
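A minimal numerical sketch of the resulting Gaussian width, using the standard FWHM formula; the chosen line and temperature (hydrogen H-alpha at 6,000 K) are illustrative assumptions, not values from this article.

    import math

    k_B = 1.380649e-23     # Boltzmann constant, J/K
    c   = 2.99792458e8     # speed of light, m/s
    m_H = 1.6735e-27       # mass of a hydrogen atom, kg

    T        = 6000.0      # gas temperature, K (illustrative choice)
    lambda_0 = 656.28e-9   # H-alpha rest wavelength, m (illustrative choice)

    # FWHM of thermal Doppler broadening: dlambda / lambda_0 = sqrt(8 ln2 * kT / (m c^2))
    fwhm = lambda_0 * math.sqrt(8.0 * math.log(2) * k_B * T / (m_H * c ** 2))
    print(f"Doppler FWHM ~ {fwhm * 1e9:.3f} nm")   # roughly 0.04 nm at these conditions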
Pressure broadening
The presence of nearby particles will affect the radiation emitted by an individual particle. There are two limiting cases by which this occurs:
Impact pressure broadening or collisional broadening: The collision of other particles with the light emitting particle interrupts the emission process, and by shortening the characteristic time for the process, increases the uncertainty in the energy emitted (as occurs in natural broadening). The duration of the collision is much shorter than the lifetime of the emission process. This effect depends on both the density and the temperature of the gas. The broadening effect is described by a Lorentzian profile and there may be an associated shift.
Quasistatic pressure broadening: The presence of other particles shifts the energy levels in the emitting particle (see spectral band), thereby altering the frequency of the emitted radiation. The duration of the influence is much longer than the lifetime of the emission process. This effect depends on the density of the gas, but is rather insensitive to temperature. The form of the line profile is determined by the functional form of the perturbing force with respect to distance from the perturbing particle. There may also be a shift in the line center. The general expression for the lineshape resulting from quasistatic pressure broadening is a 4-parameter generalization of the Gaussian distribution known as a stable distribution.
Pressure broadening may also be classified by the nature of the perturbing force as follows:
Linear Stark broadening occurs via the linear Stark effect, which results from the interaction of an emitter with an electric field of a charged particle at a distance r, causing a shift in energy that is linear in the field strength.
Resonance broadening occurs when the perturbing particle is of the same type as the emitting particle, which introduces the possibility of an energy exchange process.
Quadratic Stark broadening occurs via the quadratic Stark effect, which results from the interaction of an emitter with an electric field, causing a shift in energy that is quadratic in the field strength.
Van der Waals broadening occurs when the emitting particle is being perturbed by Van der Waals forces. For the quasistatic case, a Van der Waals profile is often useful in describing the profile. The energy shift as a function of distance between the interacting particles is given in the wings by e.g. the Lennard-Jones potential.
Inhomogeneous broadening
Inhomogeneous broadening is a general term for broadening because some emitting particles are in a different local environment from others, and therefore emit at a different frequency. This term is used especially for solids, where surfaces, grain boundaries, and stoichiometry variations can create a variety of local environments for a given atom to occupy. In liquids, the effects of inhomogeneous broadening are sometimes reduced by a process called motional narrowing.
Broadening due to non-local effects
Certain types of broadening are the result of conditions over a large region of space rather than simply upon conditions that are local to the emitting particle.
Opacity broadening
Opacity broadening is an example of a non-local broadening mechanism. Electromagnetic radiation emitted at a particular point in space can be reabsorbed as it travels through space. This absorption depends on wavelength. The line is broadened because the photons at the line center have a greater reabsorption probability than the photons at the line wings. Indeed, the reabsorption near the line center may be so great as to cause a self reversal in which the intensity at the center of the line is less than in the wings. This process is also sometimes called self-absorption.
Macroscopic Doppler broadening
Radiation emitted by a moving source is subject to Doppler shift due to a finite line-of-sight velocity projection. If different parts of the emitting body have different velocities (along the line of sight), the resulting line will be broadened, with the line width proportional to the width of the velocity distribution. For example, radiation emitted from a distant rotating body, such as a star, will be broadened due to the line-of-sight variations in velocity on opposite sides of the star (this effect usually referred to as rotational broadening). The greater the rate of rotation, the broader the line. Another example is an imploding plasma shell in a Z-pinch.
Combined effects
Each of these mechanisms can act in isolation or in combination with others. Assuming each effect is independent, the observed line profile is a convolution of the line profiles of each mechanism. For example, a combination of the thermal Doppler broadening and the impact pressure broadening yields a Voigt profile.
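Because the combined profile is a convolution of a Gaussian and a Lorentzian, it is usually evaluated numerically; one common route is the Faddeeva function, as in this sketch (the width parameters are arbitrary illustrations).

    import numpy as np
    from scipy.special import wofz

    def voigt(x, sigma, gamma):
        """Voigt profile: convolution of a Gaussian (standard deviation sigma)
        and a Lorentzian (half-width gamma), evaluated via the Faddeeva function."""
        z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
        return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

    x = np.linspace(-5.0, 5.0, 1001)              # offset from line centre (arbitrary units)
    profile = voigt(x, sigma=0.8, gamma=0.4)      # Doppler-like core, pressure-broadened wings
    area = profile.sum() * (x[1] - x[0])          # close to 1; the Lorentzian wings extend past the window
    print(f"peak = {profile.max():.3f}, area in window = {area:.3f}")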
However, the different line broadening mechanisms are not always independent. For example, the collisional effects and the motional Doppler shifts can act in a coherent manner, resulting under some conditions even in a collisional narrowing, known as the Dicke effect.
Spectral lines of chemical elements
Bands
The phrase "spectral lines", when not qualified, usually refers to lines having wavelengths in the visible band of the full electromagnetic spectrum. Many spectral lines occur at wavelengths outside this range. At shorter wavelengths, which correspond to higher energies, ultraviolet spectral lines include the Lyman series of hydrogen. At the much shorter wavelengths of X-rays, the lines are known as characteristic X-rays because they remain largely unchanged for a given chemical element, independent of their chemical environment. Longer wavelengths correspond to lower energies, where the infrared spectral lines include the Paschen series of hydrogen. At even longer wavelengths, the radio spectrum includes the 21-cm line used to detect neutral hydrogen throughout the cosmos.
See also
Absorption spectrum
Atomic spectral line
Bohr model
Electron configuration
Emission spectrum
Fourier transform
Fraunhofer line
Table of emission spectra of gas discharge lamps
Hydrogen line (21-cm line)
Hydrogen spectral series
Spectral band
Spectroscopy
Splatalogue
Notes
References
Further reading
Spectroscopy
Spectrum (physical sciences) | Spectral line | [
"Physics",
"Chemistry"
] | 2,429 | [
"Physical phenomena",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Waves",
"Spectroscopy"
] |
178,220 | https://en.wikipedia.org/wiki/ATP%20synthase | ATP synthase is an enzyme that catalyzes the formation of the energy storage molecule adenosine triphosphate (ATP) using adenosine diphosphate (ADP) and inorganic phosphate (Pi). ATP synthase is a molecular machine. The overall reaction catalyzed by ATP synthase is:
ADP + Pi + 2H+out ⇌ ATP + H2O + 2H+in
ATP synthase lies across a cellular membrane and forms an aperture that protons can cross from areas of high concentration to areas of low concentration, imparting energy for the synthesis of ATP. This electrochemical gradient is generated by the electron transport chain and allows cells to store energy in ATP for later use. In prokaryotic cells ATP synthase lies across the plasma membrane, while in eukaryotic cells it lies across the inner mitochondrial membrane. Organisms capable of photosynthesis also have ATP synthase across the thylakoid membrane, which in plants is located in the chloroplast and in cyanobacteria is located in the cytoplasm.
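The electrochemical gradient referred to here is commonly quantified as the proton-motive force; a standard textbook expression (the typical magnitudes quoted are illustrative, not taken from this article) is

    \Delta p = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \approx \Delta\psi - (59\ \mathrm{mV})\,\Delta\mathrm{pH} \quad (25\ ^{\circ}\mathrm{C})

and for a respiring mitochondrion, where the electrical and pH terms both favor proton entry into the matrix, the two contributions (roughly 150 to 180 mV electrical, plus about 30 to 60 mV from 0.5 to 1 pH unit) give a total driving force on the order of 200 mV.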
Eukaryotic ATP synthases are F-ATPases, which in cellular environments usually operate as ATP synthases rather than running "in reverse" as ATPases (an ATPase catalyzes the decomposition of ATP into ADP and a free phosphate ion). This article deals mainly with this type. An F-ATPase consists of two main subunits, FO and F1, which together form a rotational motor mechanism allowing for ATP production.
Nomenclature
The F1 fraction derives its name from the term "Fraction 1" and FO (written as a subscript letter "o", not "zero") derives its name from being the binding fraction for oligomycin, a type of naturally derived antibiotic that is able to inhibit the FO unit of ATP synthase. These functional regions consist of different protein subunits — refer to tables. This enzyme is used in synthesis of ATP through aerobic respiration.
Structure and function
Located within the thylakoid membrane and the inner mitochondrial membrane, ATP synthase consists of two regions, FO and F1. FO causes rotation of F1 and is made of the c-ring and subunits a, two b, and F6. F1 is made of α, β, γ, and δ subunits. F1 has a water-soluble part that can hydrolyze ATP. FO, on the other hand, has mainly hydrophobic regions. Together, FO and F1 create a pathway for proton movement across the membrane.
F1 region
The F1 portion of ATP synthase is hydrophilic and responsible for hydrolyzing ATP. The F1 unit protrudes into the mitochondrial matrix space. Subunits α and β make a hexamer with 6 binding sites. Three of them are catalytically inactive and they bind ADP.
Three other subunits catalyze the ATP synthesis. The other F1 subunits γ, δ, and ε are a part of a rotational motor mechanism (rotor/axle). The γ subunit allows β to go through conformational changes (i.e., closed, half open, and open states) that allow for ATP to be bound and released once synthesized. The F1 particle is large and can be seen in the transmission electron microscope by negative staining. These are particles of 9 nm diameter that pepper the inner mitochondrial membrane.
FO region
FO is a water insoluble protein with eight subunits and a transmembrane ring. The ring has a tetrameric shape with a helix-loop-helix protein that goes through conformational changes when protonated and deprotonated, pushing neighboring subunits to rotate, causing the spinning of FO which then also affects conformation of F1, resulting in switching of states of alpha and beta subunits. The FO region of ATP synthase is a proton pore that is embedded in the mitochondrial membrane. It consists of three main subunits, a, b, and c. Six c subunits make up the rotor ring, and subunit b makes up a stalk connecting to F1 OSCP that prevents the αβ hexamer from rotating. Subunit a connects b to the c ring. Humans have six additional subunits, d, e, f, g, F6, and 8 (or A6L). This part of the enzyme is located in the mitochondrial inner membrane and couples proton translocation to the rotation that causes ATP synthesis in the F1 region.
In eukaryotes, mitochondrial FO forms membrane-bending dimers. These dimers self-arrange into long rows at the end of the cristae, possibly the first step of cristae formation. An atomic model for the dimeric yeast FO region was determined by cryo-EM at an overall resolution of 3.6 Å.
Binding model
In the 1960s through the 1970s, Paul Boyer, a UCLA Professor, developed the binding change, or flip-flop, mechanism theory, which postulated that ATP synthesis is dependent on a conformational change in ATP synthase generated by rotation of the gamma subunit. The research group of John E. Walker, then at the MRC Laboratory of Molecular Biology in Cambridge, crystallized the F1 catalytic-domain of ATP synthase. The structure, at the time the largest asymmetric protein structure known, indicated that Boyer's rotary-catalysis model was, in essence, correct. For elucidating this, Boyer and Walker shared half of the 1997 Nobel Prize in Chemistry.
The crystal structure of the F1 showed alternating alpha and beta subunits (3 of each), arranged like segments of an orange around a rotating asymmetrical gamma subunit. According to the current model of ATP synthesis (known as the alternating catalytic model), the transmembrane potential created by (H+) proton cations supplied by the electron transport chain, drives the (H+) proton cations from the intermembrane space through the membrane via the FO region of ATP synthase. A portion of the FO (the ring of c-subunits) rotates as the protons pass through the membrane. The c-ring is tightly attached to the asymmetric central stalk (consisting primarily of the gamma subunit), causing it to rotate within the alpha3beta3 of F1 causing the 3 catalytic nucleotide binding sites to go through a series of conformational changes that lead to ATP synthesis. The major F1 subunits are prevented from rotating in sympathy with the central stalk rotor by a peripheral stalk that joins the alpha3beta3 to the non-rotating portion of FO. The structure of the intact ATP synthase is currently known at low-resolution from electron cryo-microscopy (cryo-EM) studies of the complex. The cryo-EM model of ATP synthase suggests that the peripheral stalk is a flexible structure that wraps around the complex as it joins F1 to FO. Under the right conditions, the enzyme reaction can also be carried out in reverse, with ATP hydrolysis driving proton pumping across the membrane.
The binding change mechanism involves the active site of a β subunit's cycling between three states. In the "loose" state, ADP and phosphate enter the active site; in the adjacent diagram, this is shown in pink. The enzyme then undergoes a change in shape and forces these molecules together, with the active site in the resulting "tight" state (shown in red) binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state (orange), releasing ATP and binding more ADP and phosphate, ready for the next cycle of ATP production.
Physiological role
Like other enzymes, the activity of F1FO ATP synthase is reversible. Large enough quantities of ATP cause it to create a transmembrane proton gradient; this is exploited by fermenting bacteria that do not have an electron transport chain but rather hydrolyze ATP to make a proton gradient, which they use to drive flagella and the transport of nutrients into the cell.
In respiring bacteria under physiological conditions, ATP synthase, in general, runs in the opposite direction, creating ATP while using the proton motive force created by the electron transport chain as a source of energy. The overall process of creating energy in this fashion is termed oxidative phosphorylation.
The same process takes place in the mitochondria, where ATP synthase is located in the inner mitochondrial membrane and the F1-part projects into the mitochondrial matrix. By pumping proton cations into the matrix, the ATP-synthase converts ADP into ATP.
Evolution
The evolution of ATP synthase is thought to have been modular whereby two functionally independent subunits became associated and gained new functionality. This association appears to have occurred early in evolutionary history, because essentially the same structure and activity of ATP synthase enzymes are present in all kingdoms of life. The F-ATP synthase displays high functional and mechanistic similarity to the V-ATPase. However, whereas the F-ATP synthase generates ATP by utilising a proton gradient, the V-ATPase generates a proton gradient at the expense of ATP, generating pH values of as low as 1.
The F1 region also shows significant similarity to hexameric DNA helicases (especially the Rho factor), and the entire enzyme region shows some similarity to H+-powered T3SS or flagellar motor complexes. The α3β3 hexamer of the F1 region shows significant structural similarity to hexameric DNA helicases; both form a ring with 3-fold rotational symmetry with a central pore. Both have roles dependent on the relative rotation of a macromolecule within the pore; the DNA helicases use the helical shape of DNA to drive their motion along the DNA molecule and to detect supercoiling, whereas the α3β3 hexamer uses the conformational changes through the rotation of the γ subunit to drive an enzymatic reaction.
The motor of the FO particle shows great functional similarity to the motors that drive flagella. Both feature a ring of many small alpha-helical proteins that rotate relative to nearby stationary proteins, using a potential gradient as an energy source. This link is tenuous, however, as the overall structure of flagellar motors is far more complex than that of the FO particle and the ring with about 30 rotating proteins is far larger than the 10, 11, or 14 helical proteins in the FO complex. More recent structural data do however show that the ring and the stalk are structurally similar to the F1 particle.
The modular evolution theory for the origin of ATP synthase suggests that two subunits with independent function, a DNA helicase with ATPase activity and a motor, were able to bind, and the rotation of the motor drove the ATPase activity of the helicase in reverse. This complex then evolved greater efficiency and eventually developed into today's intricate ATP synthases. Alternatively, the DNA helicase/ motor complex may have had pump activity with the ATPase activity of the helicase driving the motor in reverse. This may have evolved to carry out the reverse reaction and act as an ATP synthase.
Inhibitors
A variety of natural and synthetic inhibitors of ATP synthase have been discovered. These have been used to probe the structure and mechanism of ATP synthase. Some may be of therapeutic use. There are several classes of ATP synthase inhibitors, including peptide inhibitors, polyphenolic phytochemicals, polyketides, organotin compounds, polyenic α-pyrone derivatives, cationic inhibitors, substrate analogs, amino acid modifiers, and other miscellaneous chemicals. Some of the most commonly used ATP synthase inhibitors are oligomycin and DCCD.
In different organisms
Bacteria
The ATP synthase of Escherichia coli is the simplest known form of the enzyme, with 8 different subunit types.
Bacterial F-ATPases can occasionally operate in reverse, turning them into an ATPase. Some bacteria have no F-ATPase, using an A/V-type ATPase bidirectionally.
Yeast
Yeast ATP synthase is one of the best-studied eukaryotic ATP synthases; five F1 subunits, eight FO subunits, and seven associated proteins have been identified. Most of these proteins have homologues in other eukaryotes.
Plant
In plants, ATP synthase is also present in chloroplasts (CF1FO-ATP synthase). The enzyme is integrated into the thylakoid membrane; the CF1-part sticks into the stroma, where dark reactions of photosynthesis (also called the light-independent reactions or the Calvin cycle) and ATP synthesis take place. The overall structure and the catalytic mechanism of the chloroplast ATP synthase are almost the same as those of the bacterial enzyme. However, in chloroplasts, the proton motive force is generated not by the respiratory electron transport chain but by primary photosynthetic proteins. The synthase has a 40-aa insert in the gamma-subunit to inhibit wasteful activity in the dark.
Mammal
The ATP synthase isolated from bovine (Bos taurus) heart mitochondria is, in terms of biochemistry and structure, the best-characterized ATP synthase. Beef heart is used as a source for the enzyme because of the high concentration of mitochondria in cardiac muscle. Their genes have close homology to human ATP synthases.
Human genes that encode components of ATP synthases:
ATP5A1
ATP5B
ATP5C1, ATP5D, ATP5E, ATP5F1, ATP5MC1, ATP5G2, ATP5G3, ATP5H, ATP5I, ATP5J, ATP5J2, ATP5L, ATP5O
MT-ATP6, MT-ATP8
Other eukaryotes
Eukaryotes belonging to some divergent lineages have very special organizations of the ATP synthase. A euglenozoa ATP synthase forms a dimer with a boomerang-shaped F1 head like other mitochondrial ATP synthases, but the FO subcomplex has many unique subunits. It uses cardiolipin. The inhibitory IF1 also binds differently, in a way shared with trypanosomatida.
Archaea
Archaea do not generally have an F-ATPase. Instead, they synthesize ATP using the A-ATPase/synthase, a rotary machine structurally similar to the V-ATPase but mainly functioning as an ATP synthase. Like the bacterial F-ATPase, it is believed to also function as an ATPase.
LUCA and earlier
F-ATPase gene linkage and gene order are widely conserved across ancient prokaryote lineages, implying that this system already existed at a date before the last universal common ancestor, the LUCA.
See also
ATP10 protein required for the assembly of the FO sector of the mitochondrial ATPase complex.
Chloroplast
Electron transfer chain
Flavoprotein
Mitochondrion
Oxidative phosphorylation
P-ATPase
Proton pump
Rotating locomotion in living systems
Transmembrane ATPase
V-ATPase
References
Further reading
Nick Lane: The Vital Question: Energy, Evolution, and the Origins of Complex Life, W. W. Norton, 2015-07-20, (Link points to Figure 10 showing model of ATP synthase)
External links
Boris A. Feniouk: "ATP synthase — a splendid molecular machine"
Well illustrated ATP synthase lecture by Antony Crofts of the University of Illinois at Urbana–Champaign.
Proton and Sodium translocating F-type, V-type and A-type ATPases in OPM database
The Nobel Prize in Chemistry 1997 to Paul D. Boyer and John E. Walker for the enzymatic mechanism of synthesis of ATP; and to Jens C. Skou, for discovery of an ion-transporting enzyme, the Na+,K+-ATPase.
Harvard Multimedia Production Site — Videos – ATP synthesis animation
David Goodsell: "ATP Synthase- Molecule of the Month"
Enzymes
Cellular respiration
Photosynthesis
EC 3.6.3
Integral membrane proteins
Protein complexes | ATP synthase | [
"Chemistry",
"Biology"
] | 3,332 | [
"Biochemistry",
"Cellular respiration",
"Metabolism",
"Photosynthesis"
] |
178,282 | https://en.wikipedia.org/wiki/Geostationary%20transfer%20orbit | In space mission design, a geostationary transfer orbit (GTO) or geosynchronous transfer orbit is a highly elliptical type of geocentric orbit, usually with a perigee as low as low Earth orbit (LEO) and an apogee as high as geostationary orbit (GEO). Satellites that are destined for geosynchronous orbit (GSO) or GEO are often put into a GTO as an intermediate step for reaching their final orbit. Manufacturers of launch vehicles often advertise the amount of payload the vehicle can put into GTO.
Background
Geostationary and geosynchronous orbits are very desirable for many communication and Earth observation satellites. However, the delta-v, and therefore financial, cost to send a spacecraft to such orbits is very high due to their high orbital radius. A GTO is an intermediary orbit used to make this process more efficient. Satellite operators often use a high-thrust, low-efficiency launch vehicle to put their satellite into GTO, and then, after detaching the launch vehicle, use low-thrust, high-efficiency thrusters onboard the satellite itself to circularize its orbit (to GEO) over a longer period of time. This process is called spiral-out. This mission architecture is useful because it minimizes the mass that the spacecraft must push to GEO, allows for maximally efficient circularization burns taking advantage of the Oberth effect, and allows the spent launch vehicle to deorbit primarily through aerobraking due to its low perigee, minimizing its orbital lifetime.
Technical description
GTO is a highly elliptical Earth orbit with an apogee (the point in the orbit of the moon or a satellite at which it is furthest from the earth) of 42,164 km measured from the Earth's center, or a height of 35,786 km above sea level, which corresponds to the geostationary altitude. The period of a standard geosynchronous transfer orbit is about 10.5 hours. The argument of perigee is such that apogee occurs on or near the equator. Perigee can be anywhere above the atmosphere, but is usually restricted to a few hundred kilometers above the Earth's surface to reduce launcher delta-v (Δv) requirements and to limit the orbital lifetime of the spent booster so as to curtail space junk.
If using low-thrust engines such as electrical propulsion to get from the transfer orbit to geostationary orbit, the transfer orbit can be supersynchronous (having an apogee above the final geosynchronous orbit). However, this method takes much longer to achieve due to the low thrust injected into the orbit.
The typical launch vehicle injects the satellite into a supersynchronous orbit having an apogee above 42,164 km. The satellite's low-thrust engines are then fired continuously around the geostationary transfer orbits. The thrust direction and magnitude are usually determined to optimize the transfer time and/or duration while satisfying the mission constraints. The out-of-plane component of thrust is used to reduce the initial inclination set by the initial transfer orbit, while the in-plane component simultaneously raises the perigee and lowers the apogee of the intermediate geostationary transfer orbit. If a Hohmann transfer orbit is used, only a few days are required to reach geosynchronous orbit. By using low-thrust engines or electrical propulsion, months are required until the satellite reaches its final orbit.
The orbital inclination of a GTO is the angle between the orbit plane and the Earth's equatorial plane. It is determined by the latitude of the launch site and the launch azimuth (direction). The inclination and eccentricity must both be reduced to zero to obtain a geostationary orbit. If only the eccentricity of the orbit is reduced to zero, the result may be a geosynchronous orbit but will not be geostationary. Because the Δv required for a plane change is proportional to the instantaneous velocity, the inclination and eccentricity are usually changed together in a single maneuver at apogee, where velocity is lowest.
The Δv required for an inclination change at either the ascending or descending node of the orbit is calculated as follows:
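A standard astrodynamics expression for this, with v the orbital speed at the node and Δi the required change of inclination (notation introduced here for concreteness), is

    \Delta v_{\mathrm{incl}} = 2\, v \, \sin\!\left(\frac{\Delta i}{2}\right)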
For a typical GTO with a semi-major axis of 24,582 km, perigee velocity is 9.88 km/s and apogee velocity is 1.64 km/s, clearly making the inclination change far less costly at apogee. In practice, the inclination change is combined with the orbital circularization (or "apogee kick") burn to reduce the total Δv for the two maneuvers. The combined Δv is the vector sum of the inclination-change Δv and the circularization Δv, and as the sum of the lengths of two sides of a triangle will always exceed the remaining side's length, the total Δv in a combined maneuver will always be less than in two separate maneuvers. The combined Δv can be calculated as follows:
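By the law of cosines applied to the velocity triangle (one standard way to write it, using the notation defined in the following sentence):

    \Delta v = \sqrt{\, v_{t,a}^{2} + v_{\mathrm{GEO}}^{2} - 2\, v_{t,a}\, v_{\mathrm{GEO}} \cos \Delta i \,}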
where v_t,a is the velocity magnitude at the apogee of the transfer orbit and v_GEO is the velocity in GEO.
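A short numerical sketch (Python; the gravitational parameter and the 28.5° example inclination are assumptions made here, the latter matching a Cape Canaveral launch) reproduces the speeds quoted above and the familiar ≈1.8 km/s apogee-burn budget.

    import math

    mu  = 398600.4418        # Earth's gravitational parameter, km^3/s^2
    a   = 24582.0            # GTO semi-major axis, km (value from the text)
    r_a = 42164.0            # apogee radius = geostationary radius, km
    r_p = 2.0 * a - r_a      # perigee radius, km (7,000 km here)

    def vis_viva(r, a):
        """Orbital speed at radius r on an orbit of semi-major axis a."""
        return math.sqrt(mu * (2.0 / r - 1.0 / a))

    v_p   = vis_viva(r_p, a)        # ~9.88 km/s at perigee
    v_a   = vis_viva(r_a, a)        # ~1.64 km/s at apogee
    v_geo = math.sqrt(mu / r_a)     # ~3.07 km/s, circular GEO speed

    di = math.radians(28.5)         # example inclination change (Cape Canaveral launch)
    dv = math.sqrt(v_a**2 + v_geo**2 - 2.0 * v_a * v_geo * math.cos(di))
    print(f"v_p={v_p:.2f} km/s  v_a={v_a:.2f} km/s  v_GEO={v_geo:.2f} km/s  combined dv={dv:.2f} km/s")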
Other considerations
Even at apogee, the fuel needed to reduce inclination to zero can be significant, giving equatorial launch sites a substantial advantage over those at higher latitudes. Russia's Baikonur Cosmodrome in Kazakhstan is at 46° north latitude. Kennedy Space Center in the United States is at 28.5° north. China's Wenchang is at 19.5° north. India's SDSC is at 13.7° north. Guiana Space Centre, the European Ariane and European-operated Russian Soyuz launch facility, is at 5° north. The "indefinitely suspended" Sea Launch launched from a floating platform directly on the equator in the Pacific Ocean.
Expendable launchers generally reach GTO directly, but a spacecraft already in a low Earth orbit (LEO) can enter GTO by firing a rocket along its orbital direction to increase its velocity. This was done when geostationary spacecraft were launched from the Space Shuttle; a "perigee kick motor" attached to the spacecraft ignited after the shuttle had released it and withdrawn to a safe distance.
Although some launchers can take their payloads all the way to geostationary orbit, most end their missions by releasing their payloads into GTO. The spacecraft and its operator are then responsible for the maneuver into the final geostationary orbit. The 5-hour coast to first apogee can be longer than the battery lifetime of the launcher or spacecraft, and the maneuver is sometimes performed at a later apogee or split among multiple apogees. The solar power available on the spacecraft supports the mission after launcher separation. Also, many launchers now carry several satellites in each launch to reduce overall costs, and this practice simplifies the mission when the payloads may be destined for different orbital positions.
Because of this practice, launcher capacity is usually quoted as spacecraft mass to GTO, and this number will be higher than the payload that could be delivered directly into GEO.
For example, the capacity (adapter and spacecraft mass) of the Delta IV Heavy is 14,200 kg to GTO, or 6,750 kg directly to geostationary orbit.
If the maneuver from GTO to GEO is to be performed with a single impulse, as with a single solid-rocket motor, apogee must occur at an equatorial crossing and at synchronous orbit altitude. This implies an argument of perigee of either 0° or 180°. Because the argument of perigee is slowly perturbed by the oblateness of the Earth, it is usually biased at launch so that it reaches the desired value at the appropriate time (for example, this is usually the sixth apogee on Ariane 5 launches). If the GTO inclination is zero, as with Sea Launch, then this does not apply. (It also would not apply to an impractical GTO inclined at 63.4°; see Molniya orbit.)
The preceding discussion has primarily focused on the case where the transfer between LEO and GEO is done with a single intermediate transfer orbit. More complicated trajectories are sometimes used. For example, the Proton-M uses a set of three intermediate orbits, requiring five upper-stage rocket firings, to place a satellite into GEO from the high-inclination site of Baikonur Cosmodrome, in Kazakhstan. Because of Baikonur's high latitude and range safety considerations that block launches directly east, it requires less delta-v to transfer satellites to GEO by using a supersynchronous transfer orbit where the apogee (and the maneuver to reduce the transfer orbit inclination) are at a higher altitude than 35,786 km, the geosynchronous altitude. Proton even offers to perform a supersynchronous apogee maneuver up to 15 hours after launch.
The geostationary orbit is a special type of orbit around the Earth in which a satellite orbits the planet at the same rate as the Earth's rotation. This means that the satellite appears to remain stationary relative to a fixed point on the Earth's surface. The geostationary orbit is located at an altitude of approximately 35,786 kilometers (22,236 miles) above the Earth's equator.
See also
Astrodynamics
Low Earth orbit
List of orbits
Aeronautics
References
Astrodynamics
Earth orbits | Geostationary transfer orbit | [
"Engineering"
] | 1,909 | [
"Astrodynamics",
"Aerospace engineering"
] |
9,575,341 | https://en.wikipedia.org/wiki/The%20Art%20of%20the%20Metaobject%20Protocol | The Art of the Metaobject Protocol (AMOP) is a 1991 book by Gregor Kiczales, Jim des Rivieres, and Daniel G. Bobrow (all three working for Xerox PARC) on the subject of metaobject protocol.
Overview
The book contains an explanation of what a metaobject protocol is, why it is desirable, and the de facto standard for the metaobject protocol supported by many Common Lisp implementations as an extension of the Common Lisp Object System, or CLOS. A more complete and portable implementation of CLOS and the metaobject protocol, as defined in this book, was provided by Xerox PARC as Portable Common Loops.
The book presents a simplified CLOS implementation for Common Lisp called "Closette", which for the sake of pedagogical brevity does not include some of the more complex or exotic CLOS features such as forward-referencing of superclasses, full class and method redefinitions, advanced user-defined method combinations, and complete integration of CLOS classes with Common Lisp's type system. It also lacks support for compilation and most error checking, since the purpose of Closette is not actual use, but simply to demonstrate the fundamental power and expressive flexibility of metaobject protocols as an application of the principles of the meta-circular evaluator.
In his 1997 talk at OOPSLA, Alan Kay called it "the best book anybody's written in ten years", and contended that it contained "some of the most profound insights, and the most practical insights about OOP", but was dismayed that it was written in a highly Lisp-centric and CLOS-specific fashion, calling it "a hard book for most people to read; if you don't know the Lisp culture, it's very hard to read".
References
Computer books
Lisp (programming language)
Object (computer science) | The Art of the Metaobject Protocol | [
"Technology"
] | 398 | [
"Works about computing",
"Computer books"
] |
9,578,494 | https://en.wikipedia.org/wiki/High-speed%20flight | In high-speed flight, the assumptions of incompressibility of the air used in low-speed aerodynamics no longer apply. In subsonic aerodynamics, the theory of lift is based upon the forces generated on a body and a moving gas (air) in which it is immersed. At low airspeeds (where compressibility effects are conventionally neglected, roughly below Mach 0.3), air can be considered incompressible with regard to an aircraft, in that, at a fixed altitude, its density remains nearly constant while its pressure varies. Under this assumption, air acts the same as water and is classified as a fluid.
Subsonic aerodynamic theory also assumes the effects of viscosity (the property of a fluid that tends to prevent motion of one part of the fluid with respect to another) are negligible, and classifies air as an ideal fluid, conforming to the principles of ideal-fluid aerodynamics such as continuity, Bernoulli's principle, and circulation. In reality, air is compressible and viscous. While the effects of these properties are negligible at low speeds, compressibility effects in particular become increasingly important as airspeed increases. Compressibility (and to a lesser extent viscosity) is of paramount importance at speeds approaching the speed of sound. In these transonic speed ranges, compressibility causes a change in the density of the air around an airplane.
During flight, a wing produces lift by accelerating the airflow over the upper surface. This accelerated air can, and does, reach supersonic speeds, even though the airplane itself may be flying at a subsonic airspeed (Mach number < 1.0). At some extreme angles of attack, in some airplanes, the speed of the air over the top surface of the wing may be double the airplane's airspeed. It is, therefore, entirely possible to have both supersonic and subsonic airflows on an airplane at the same time. When flow velocities reach sonic speeds at some locations on an airplane (such as the area of maximum camber on the wing), further acceleration will result in the onset of compressibility effects such as shock wave formation, drag increase, buffeting, stability, and control difficulties. Subsonic flow principles are invalid at all speeds above this point.
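As a back-of-the-envelope illustration (ideal-gas relation, standard-day conditions assumed; the airspeed chosen is arbitrary), the speed of sound and a flight Mach number can be estimated as follows.

    import math

    GAMMA = 1.4       # ratio of specific heats for air
    R_AIR = 287.05    # specific gas constant for air, J/(kg K)

    def speed_of_sound(T_kelvin):
        """Speed of sound in air modelled as an ideal gas."""
        return math.sqrt(GAMMA * R_AIR * T_kelvin)

    a_sea_level = speed_of_sound(288.15)   # ~340 m/s on a standard day at sea level
    a_altitude  = speed_of_sound(216.65)   # ~295 m/s in the cold air near 11 km

    tas = 250.0                            # true airspeed in m/s (illustrative)
    print(f"Mach at sea level: {tas / a_sea_level:.2f}")   # ~0.73
    print(f"Mach at altitude : {tas / a_altitude:.2f}")    # ~0.85, so compressibility matters sooner up high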
See also
Coffin corner (aerodynamics)
Critical Mach number
Drag divergence Mach number
References
Sources
Airspeed | High-speed flight | [
"Physics"
] | 482 | [
"Wikipedia categories named after physical quantities",
"Airspeed",
"Physical quantities"
] |
9,579,143 | https://en.wikipedia.org/wiki/Aromatic%20amino%20acid | An aromatic amino acid is an amino acid that includes an aromatic ring.
Among the 20 standard amino acids, histidine, phenylalanine, tryptophan, and tyrosine are classified as aromatic.
Properties and function
Optical properties
Aromatic amino acids, excepting histidine, absorb ultraviolet light at wavelengths above 250 nm and will fluoresce under these conditions. This characteristic is used in quantitative analysis, notably in determining the concentrations of these amino acids in solution. Most proteins absorb at 280 nm due to the presence of tyrosine and tryptophan. Of the aromatic amino acids, tryptophan has the highest extinction coefficient; its absorption maximum occurs at 280 nm. The absorption maximum of tyrosine occurs at 274 nm.
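This property underlies the common A280 assay. A minimal sketch follows, using widely quoted approximate molar extinction coefficients at 280 nm (about 5,500 M^-1 cm^-1 per tryptophan and 1,490 M^-1 cm^-1 per tyrosine); these coefficients and the protein composition are assumptions for illustration, not values from this article.

    # Estimate protein concentration from absorbance at 280 nm (Beer-Lambert law).
    EPS_TRP = 5500.0   # approximate molar extinction coefficient per tryptophan, M^-1 cm^-1
    EPS_TYR = 1490.0   # approximate molar extinction coefficient per tyrosine, M^-1 cm^-1

    def molar_extinction_280(n_trp, n_tyr):
        """Rough molar extinction coefficient of a protein at 280 nm from its composition."""
        return n_trp * EPS_TRP + n_tyr * EPS_TYR

    eps = molar_extinction_280(n_trp=2, n_tyr=6)   # hypothetical composition
    absorbance, path_cm = 0.45, 1.0                # measured A280 and cuvette path length (illustrative)
    conc = absorbance / (eps * path_cm)            # Beer-Lambert: A = eps * c * l
    print(f"eps(280) ~ {eps:.0f} M^-1 cm^-1, concentration ~ {conc * 1e6:.1f} uM")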
Role in protein structure and function
Aromatic amino acids stabilize folded structures of many proteins. Aromatic residues are found predominantly sequestered within the cores of globular proteins, although they often comprise key portions of protein-protein or protein-ligand interaction interfaces on the protein surface.
Aromatic amino acids as precursors
Aromatic amino acids often serve as the precursors to important biochemicals.
Histidine is the precursor to histamine.
Tryptophan is the precursor to 5-hydroxytryptophan and then serotonin, tryptamine, auxin, kynurenines, and melatonin.
Tyrosine is the precursor to L-DOPA, dopamine, norepinephrine (noradrenaline), epinephrine (adrenaline), and the thyroid hormone thyroxine. It is also precursor to octopamine and melanin in numerous organisms.
Phenylalanine is the precursor to tyrosine.
Biosynthesis
Shikimate pathway
In plants, the shikimate pathway first leads to the formation of chorismate, which is the precursor of phenylalanine, tyrosine, and tryptophan. These aromatic amino acids are the precursors of many secondary metabolites, all essential to a plant's biological functions, such as the hormones salicylate and auxin. This pathway contains enzymes that can be regulated by inhibitors, which can halt the production of chorismate and ultimately disrupt the organism's biological functions. Herbicides and antibiotics work by inhibiting these enzymes involved in the biosynthesis of aromatic amino acids, thereby rendering them toxic to plants. Glyphosate, a widely used herbicide, controls unwanted plant growth in this way. In addition to killing plants, glyphosate can also affect the maintenance of the gut microbiota in host organisms by specifically inhibiting 5-enolpyruvylshikimate-3-phosphate synthase, which prevents the biosynthesis of essential aromatic amino acids. Inhibition of this enzyme results in disorders such as gastrointestinal diseases and metabolic diseases.
Nutritional requirements
Animals obtain aromatic amino acids from their diet, but nearly all plants and some micro-organisms must synthesize their aromatic amino acids through the metabolically costly shikimate pathway in order to make them. Histidine, phenylalanine, and tryptophan are essential amino acids for animals. Since they are not synthesized in the human body, they must be derived from the diet. Tyrosine is semi-essential; therefore, it can be synthesized by the animal, but only from phenylalanine. Phenylketonuria, a genetic disorder that occurs as a result of the inability to break down phenylalanine, is due to a lack of the enzyme phenylalanine hydroxylase. A dietary lack of tryptophan can cause stunted skeletal development. Excessive intake of aromatic amino acids far beyond levels obtained through normal protein consumption might lead to hypertension, something which could go unnoticed for a long time in healthy individuals. It could be caused by other factors as well, such as the use of various herbs and foods like chocolate which inhibit monoamine oxidase enzymes to varying degrees, and also some medications. Aromatic trace amines like tyramine can displace norepinephrine from peripheral monoamine vesicles, and in people taking monoamine oxidase inhibitors (MAOIs) this occurs to the extent of being life-threatening. Blue diaper syndrome is an autosomal recessive disease that is caused by poor tryptophan absorption in the body.
See also
Aromatic L-amino acid decarboxylase
Expanded genetic code
Phenylketonuria
Tyrosine hydroxylase
Neurotransmitter
Notes
References
Further reading
External links
Amino acids
| Aromatic amino acid | [
"Chemistry"
] | 951 | [
"Amino acids",
"Biomolecules by chemical classification"
] |
9,579,270 | https://en.wikipedia.org/wiki/Biological%20systems%20engineering | Biological systems engineering or biosystems engineering is a broad-based engineering discipline with particular emphasis on non-medical biology. It can be thought of as a subset of the broader notion of biological engineering or bio-technology though not in the respects that pertain to biomedical engineering as biosystems engineering tends to focus less on medical applications than on agriculture, ecosystems, and food science. The discipline focuses broadly on environmentally sound and sustainable engineering solutions to meet societies' ecologically related needs. Biosystems engineering integrates the expertise of fundamental engineering fields with expertise from non-engineering disciplines.
Background and organization
Many college and university biological engineering departments have a history of being grounded in agricultural engineering and have only in the past two decades or so changed their names to reflect the movement towards more diverse biological based engineering programs. This major is sometimes called agricultural and biological engineering, biological and environmental engineering, etc., in different universities, generally reflecting interests of local employment opportunities.
Since biological engineering covers a wide spectrum, many departments now offer specialization options. Depending on the department and the specialization options offered within each program, curricula may overlap with other related fields. There are a number of different titles for BSE-related departments at various universities. The professional societies commonly associated with many Biological Engineering programs include the American Society of Agricultural and Biological Engineers (ASABE) and the Institute of Biological Engineering (IBE), which generally encompasses BSE. Some program also participate in the Biomedical Engineering Society (BMES) and the American Institute of Chemical Engineers (AIChE).
A biological systems engineer has a background in what both environmental engineers and biologists do, thus bridging the gap between engineering and the (non-medical) biological sciences – although this is variable across academic institutions. For this reason, biological systems engineers are becoming integral parts of many environmental engineering firms, federal agencies, and biotechnology industries. A biological systems engineer will often address the solution to a problem from the perspective of employing living systems to enact change. For example, biological treatment methodologies can be applied to provide access to clean drinking water or for sequestration of carbon dioxide.
Specializations
Land and water resources engineering
Food engineering and bioprocess engineering
Machinery systems engineering
Natural resources and environmental engineering
Biomedical engineering
Academic programs in agricultural and biological systems engineering
Below is a listing of known academic programs that offer bachelor's degrees (B.S. or B.S.E.) in what ABET and/or ASABE terms "agricultural engineering", "biological systems engineering", "biological engineering", or similarly named programs. ABET accredits college and university programs in the disciplines of applied science, computing, engineering, and engineering technology. ASABE defines accredited programs within the scope of Ag/Bio Engineering.
North America
Central and South America
Europe
Asia
Africa
See also
Related engineering fields
Agricultural engineering
Aquaculture engineering
Biological engineering
Biomedical engineering
Civil engineering
Chemical engineering
Ecological engineering
Environmental engineering
Food engineering
Hydraulic engineering
Mechanical engineering
Sanitary engineering
Closely related sciences
Agriculture
Animal Science
Biology, Biochemistry, Microbiology
Chemistry
Ecology
Environmental science
Forestry
Horticulture
Hydrology
Plant Science
Soil science
References
Further reading
2003, Dennis R. Heldman (ed), Encyclopedia of agricultural, food, and biological engineering.
2002, Teruyuki Nagamune, Tai Hyun Park & Mark R. Marten (ed), Biological Systems Engineering, Washington, D.C. : American Chemical Society, 320 pages.
2012, Paige Brown Jarreau, What is Biological Engineering, http://www.scilogs.com/from_the_lab_bench/what-is-biological-engineering-ibe-2012/
External links
UC San Diego, Department of Bioengineering, UCSD BE part of University of California, San Diego
Biological engineering
Biological systems
Systems biology
Systems engineering | Biological systems engineering | [
"Engineering",
"Biology"
] | 762 | [
"Systems engineering",
"Biological engineering",
"nan",
"Systems biology"
] |
9,579,379 | https://en.wikipedia.org/wiki/Base%20%28geometry%29 | In geometry, a base is a side of a polygon or a face of a polyhedron, particularly one oriented perpendicular to the direction in which height is measured, or on what is considered to be the "bottom" of the figure. This term is commonly applied in plane geometry to triangles, parallelograms, trapezoids, and in solid geometry to cylinders, cones, pyramids, parallelepipeds, prisms, and frustums.
The side or point opposite the base is often called the apex or summit of the shape.
Of a triangle
In a triangle, any arbitrary side can be considered the base. The two endpoints of the base are called base vertices and the corresponding angles are called base angles. The third vertex opposite the base is called the apex.
The extended base of a triangle (a particular case of an extended side) is the line that contains the base. When the triangle is obtuse and the base is chosen to be one of the sides adjacent to the obtuse angle, then the altitude dropped perpendicularly from the apex to the base intersects the extended base outside of the triangle.
The area of a triangle is half of the product of a base and the corresponding height (the length of the altitude drawn to that base). For a triangle with opposite sides a, b, and c, if the three altitudes of the triangle are called h_a, h_b, and h_c, the area is:
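Written out explicitly (with T denoting the area; the symbols follow the convention named in the preceding sentence):

    T = \tfrac{1}{2}\, a\, h_a = \tfrac{1}{2}\, b\, h_b = \tfrac{1}{2}\, c\, h_c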
Given a fixed base side and a fixed area for a triangle, the locus of apex points is a straight line parallel to the base.
Of a trapezoid or parallelogram
Any of the sides of a parallelogram, or either (but typically the longer) of the parallel sides of a trapezoid can be considered its base. Sometimes the parallel opposite side is also called a base, or sometimes it is called a top, apex, or summit. The other two edges can be called the sides.
Role in area and volume calculation
Bases are commonly used (together with heights) to calculate the areas and volumes of figures. In speaking about these processes, the measure (length or area) of a figure's base is often referred to as its "base."
By this usage, the area of a parallelogram or the volume of a prism or cylinder can be calculated by multiplying its "base" by its height; likewise, the areas of triangles and the volumes of cones and pyramids are fractions of the products of their bases and heights. Some figures have two parallel bases (such as trapezoids and frustums), both of which are used to calculate the extent of the figures.
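For reference, the standard base-height formulas alluded to here are collected below (with b a base length, h the corresponding height, and B the base area of a solid; the symbols are introduced for this summary):

    A_{\text{parallelogram}} = b\,h, \qquad A_{\text{triangle}} = \tfrac{1}{2}\,b\,h, \qquad A_{\text{trapezoid}} = \tfrac{1}{2}\,(b_1 + b_2)\,h
    V_{\text{prism or cylinder}} = B\,h, \qquad V_{\text{pyramid or cone}} = \tfrac{1}{3}\,B\,h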
References
Parts of a triangle
Area
Volume | Base (geometry) | [
"Physics",
"Mathematics"
] | 527 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Extensive quantities",
"Volume",
"Wikipedia categories named after physical quantities",
"Area"
] |
9,581,197 | https://en.wikipedia.org/wiki/Quark%E2%80%93lepton%20complementarity | The quark–lepton complementarity (QLC) is a possible fundamental symmetry between quarks and leptons. First proposed in 1990 by Foot and Lew, it assumes that leptons as well as quarks come in three "colors". Such theory may reproduce the Standard Model at low energies, and hence quark–lepton symmetry may be realized in nature.
Possible evidence for QLC
Recent neutrino experiments confirm that the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix contains large mixing angles. For example, atmospheric measurements of particle decay yield a mixing angle θ23 ≈ 45°, while solar experiments yield θ12 ≈ 34°. Compare these results with θ13 ≈ 9°, which is clearly smaller, roughly a quarter of the solar angle, and with the much smaller quark mixing angles of the Cabibbo–Kobayashi–Maskawa (CKM) matrix. The disparity that nature indicates between quark and lepton mixing angles has been viewed in terms of a "quark–lepton complementarity", which can be expressed in the relations given below.
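The relations usually quoted in the quark–lepton complementarity literature take the following form (stated here as the standard version, with θC the Cabibbo angle; the superscripts distinguishing lepton (PMNS) and quark (CKM) angles are introduced for clarity):

    \theta_{12}^{\mathrm{PMNS}} + \theta_{12}^{\mathrm{CKM}} \simeq 45^{\circ}, \qquad \theta_{23}^{\mathrm{PMNS}} + \theta_{23}^{\mathrm{CKM}} \simeq 45^{\circ}

where θ12^CKM is the Cabibbo angle θC ≈ 13°.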
Possible consequences of QLC have been investigated in the literature and in particular a simple correspondence between the PMNS and CKM matrices has been proposed and analyzed in terms of a correlation matrix. The correlation matrix is roughly defined as the product of the CKM and PMNS matrices:
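One common convention (the ordering of the factors varies between papers, so this should be read as an assumed form) is

    V_{M} = U_{\mathrm{CKM}}\, U_{\mathrm{PMNS}}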
Unitarity implies:
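Since U_CKM and U_PMNS are unitary, their product is unitary as well; written in index notation (one standard way to state the resulting conditions), this gives

    \sum_{k} V_{ik}\, V_{jk}^{*} = \delta_{ij}, \qquad \sum_{k} V_{ki}\, V_{kj}^{*} = \delta_{ij}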
Open questions
One may ask where the large lepton mixings come from, and whether this information is implicit in the form of the correlation matrix. This question has been widely investigated in the literature, but its answer is still open. Furthermore, in some Grand Unification Theories (GUTs) the direct QLC correlation between the CKM and the PMNS mixing matrix can be obtained. In this class of models, the correlation matrix is determined by the heavy Majorana neutrino mass matrix.
Despite the naïve relations between the PMNS and CKM angles, a detailed analysis shows that the correlation matrix is phenomenologically compatible with a tribimaximal pattern, and only marginally with a bimaximal pattern. It is possible to include bimaximal forms of the correlation matrix in models with renormalization effects that are relevant, however, only in particular cases with quasi-degenerate neutrino masses.
See also
Leptoquark
Footnotes
References
Leptons
Quarks
Standard Model | Quark–lepton complementarity | [
"Physics"
] | 478 | [
"Standard Model",
"Particle physics"
] |
9,585,894 | https://en.wikipedia.org/wiki/Kohn%20anomaly | A Kohn anomaly or the Kohn effect is an anomaly in the dispersion relation of a phonon branch in a metal. The anomaly is named for Walter Kohn, who first proposed it in 1959.
Description
In condensed matter physics, a Kohn anomaly (also called the Kohn effect) is an anomaly in the dispersion relation of a phonon branch in a metal.
For a specific wavevector, the frequency (and thus the energy) of the associated phonon is considerably lowered, and there is a discontinuity in its derivative. In extreme cases (which can happen in low-dimensional materials), the energy of this phonon is zero, meaning that a static distortion of the lattice appears. This is one explanation for charge density waves in solids. The wavevectors at which a Kohn anomaly is possible are the nesting vectors of the Fermi surface, that is, vectors that connect many points of the Fermi surface (for a one-dimensional chain of atoms or a spherical Fermi surface this vector would be q = 2k_F). The electron-phonon interaction causes a rigid shift of the Fermi sphere and a failure of the Born-Oppenheimer approximation, since the electrons no longer follow the ionic motion adiabatically.
In the phonon spectrum of a metal, a Kohn anomaly is a discontinuity in the derivative of the dispersion relation that is produced by the abrupt change in the screening of lattice vibrations by conduction electrons. It can occur at any point in the Brillouin zone, because the Fermi wavevector kF (and hence the nesting vector 2kF) is unrelated to crystal symmetry. In one dimension, it is equivalent to a Peierls instability, and it is similar to the Jahn–Teller effect seen in molecular systems.
Kohn anomalies arise together with Friedel oscillations when one considers the Lindhard theory instead of the Thomas–Fermi approximation in order to find an expression for the dielectric function of a homogeneous electron gas. The expression for the real part of the reciprocal space dielectric function obtained following the Lindhard theory includes a logarithmic term that is singular at q = 2kF, where kF is the Fermi wavevector. Although this singularity is quite small in reciprocal space, if one takes the Fourier transform and passes into real space, the Gibbs phenomenon causes a strong oscillation of the real-space dielectric function in the proximity of the singularity mentioned above. In the context of phonon dispersion relations, these oscillations appear as a vertical tangent in the plot of the phonon dispersion ω(q), called the Kohn anomalies.
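The logarithmic singularity can be made concrete with the standard static Lindhard function of the three-dimensional free electron gas, F(x) = 1/2 + ((1 − x²)/(4x)) ln|(1 + x)/(1 − x)| with x = q/2kF. The short sketch below is an illustration only (it is not part of the article; Python with NumPy is assumed): it evaluates F near x = 1 and shows numerically that the slope steepens without bound there, which is the mathematical origin of the kink in the phonon dispersion.

```python
import numpy as np

def lindhard_f(x):
    """Static Lindhard function F(x), x = q / (2 k_F), for the 3D free electron gas.

    F(1) = 1/2, and dF/dx diverges logarithmically at x = 1, which is the
    mathematical origin of the Kohn anomaly at q = 2 k_F.
    """
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        f = 0.5 + (1.0 - x**2) / (4.0 * x) * np.log(np.abs((1.0 + x) / (1.0 - x)))
    return np.where(np.isclose(x, 1.0), 0.5, f)  # limiting value at x = 1

# Central-difference slope around x = 1: its magnitude grows without bound as dx shrinks.
for dx in (1e-1, 1e-2, 1e-3):
    slope = (lindhard_f(1.0 + dx) - lindhard_f(1.0 - dx)) / (2.0 * dx)
    print(f"dx = {dx:8.0e}  numerical slope near x = 1: {slope:+.3f}")
```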
Many different systems exhibit Kohn anomalies, including graphene, bulk metals, and many low-dimensional systems (the reason involves the nesting condition q = 2kF, which depends on the topology of the Fermi surface). However, it is important to emphasize that only materials showing metallic behaviour can exhibit a Kohn anomaly, since the model emerges from a homogeneous electron gas approximation.
History
The anomaly is named for Walter Kohn, who first proposed it in 1959.
See also
Zero sound
Pomeranchuk instability
References
Condensed matter physics | Kohn anomaly | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 636 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
1,043,263 | https://en.wikipedia.org/wiki/Excitotoxicity | In excitotoxicity, nerve cells suffer damage or death when the levels of otherwise necessary and safe neurotransmitters such as glutamate become pathologically high, resulting in excessive stimulation of receptors. For example, when glutamate receptors such as the NMDA receptor or AMPA receptor encounter excessive levels of the excitatory neurotransmitter, glutamate, significant neuronal damage might ensue. Excess glutamate allows high levels of calcium ions (Ca2+) to enter the cell. Ca2+ influx into cells activates a number of enzymes, including phospholipases, endonucleases, and proteases such as calpain. These enzymes go on to damage cell structures such as components of the cytoskeleton, membrane, and DNA. In evolved, complex adaptive systems such as biological life it must be understood that mechanisms are rarely, if ever, simplistically direct. For example, NMDA, in subtoxic amounts, can block glutamate toxicity and thereby induce neuronal survival.
Excitotoxicity may be involved in cancers, spinal cord injury, stroke, traumatic brain injury, hearing loss (through noise overexposure or ototoxicity), and in neurodegenerative diseases of the central nervous system such as multiple sclerosis, Alzheimer's disease, amyotrophic lateral sclerosis (ALS), Parkinson's disease, alcoholism, alcohol withdrawal or hyperammonemia and especially over-rapid benzodiazepine withdrawal, and also Huntington's disease. Another common condition that causes excessive glutamate concentrations around neurons is hypoglycemia. Blood sugars are the primary glutamate removal method from inter-synaptic spaces at the NMDA and AMPA receptor site. Persons in excitotoxic shock must never fall into hypoglycemia. Patients should be given a 5% glucose (dextrose) IV drip during excitotoxic shock to avoid a dangerous build-up of glutamate around NMDA and AMPA neurons. When a 5% glucose (dextrose) IV drip is not available, high levels of fructose are given orally. Treatment is administered during the acute stages of excitotoxic shock along with glutamate antagonists. Dehydration should be avoided, as this also contributes to the concentration of glutamate in the inter-synaptic cleft, and "status epilepticus can also be triggered by a build up of glutamate around inter-synaptic neurons."
History
The harmful effects of glutamate on the central nervous system were first observed in 1954 by T. Hayashi, a Japanese scientist who stated that direct application of glutamate caused seizure activity, though this report went unnoticed for several years. D. R. Lucas and J. P. Newhouse, after noting that "single doses of [20–30 grams of sodium glutamate in humans] have ... been administered intravenously without permanent ill-effects", observed in 1957 that a subcutaneous dose described as "a little less than lethal", destroyed the neurons in the inner layers of the retina in newborn mice. In 1969, John Olney discovered that the phenomenon was not restricted to the retina, but occurred throughout the brain, and coined the term excitotoxicity. He also assessed that cell death was restricted to postsynaptic neurons, that glutamate agonists were as neurotoxic as their efficiency to activate glutamate receptors, and that glutamate antagonists could stop the neurotoxicity.
In 2002, Hilmar Bading and co-workers found that excitotoxicity is caused by the activation of NMDA receptors located outside synaptic contacts. The molecular basis for toxic extrasynaptic NMDA receptor signaling was uncovered in 2020 when Hilmar Bading and co-workers described a death signaling complex that consists of extrasynaptic NMDA receptor and TRPM4. Disruption of this complex using NMDAR/TRPM4 interface inhibitors (also known as "interface inhibitors") renders extrasynaptic NMDA receptors non-toxic.
Pathophysiology
Excitotoxicity can occur from substances produced within the body (endogenous excitotoxins). Glutamate is a prime example of an excitotoxin in the brain, and it is also the major excitatory neurotransmitter in the central nervous system of mammals. During normal conditions, glutamate concentration can be increased up to 1mM in the synaptic cleft, which is rapidly decreased in the lapse of milliseconds. When the glutamate concentration around the synaptic cleft cannot be decreased or reaches higher levels, the neuron kills itself by a process called apoptosis.
This pathologic phenomenon can also occur after brain injury and spinal cord injury. Within minutes after spinal cord injury, damaged neural cells within the lesion site spill glutamate into the extracellular space where glutamate can stimulate presynaptic glutamate receptors to enhance the release of additional glutamate. Brain trauma or stroke can cause ischemia, in which blood flow is reduced to inadequate levels. Ischemia is followed by accumulation of glutamate and aspartate in the extracellular fluid, causing cell death, which is aggravated by lack of oxygen and glucose. The biochemical cascade resulting from ischemia and involving excitotoxicity is called the ischemic cascade. Because of the events resulting from ischemia and glutamate receptor activation, a deep chemical coma may be induced in patients with brain injury to reduce the metabolic rate of the brain (its need for oxygen and glucose) and save energy to be used to remove glutamate actively. (The main aim in induced comas is to reduce the intracranial pressure, not brain metabolism).
Increased extracellular glutamate levels lead to the activation of Ca2+ permeable NMDA receptors on myelin sheaths and oligodendrocytes, leaving oligodendrocytes susceptible to Ca2+ influxes and subsequent excitotoxicity. One of the damaging results of excess calcium in the cytosol is the initiation of apoptosis through cleaved caspase processing. Another damaging result of excess calcium in the cytosol is the opening of the mitochondrial permeability transition pore, a pore in the membranes of mitochondria that opens when the organelles absorb too much calcium. Opening of the pore may cause mitochondria to swell and release reactive oxygen species and other proteins that can lead to apoptosis. The pore can also cause mitochondria to release more calcium. In addition, production of adenosine triphosphate (ATP) may be stopped, and ATP synthase may in fact begin hydrolysing ATP instead of producing it, which is suggested to be involved in depression.
Inadequate ATP production resulting from brain trauma can eliminate electrochemical gradients of certain ions. Glutamate transporters require the maintenance of these ion gradients to remove glutamate from the extracellular space. The loss of ion gradients results in not only the halting of glutamate uptake, but also in the reversal of the transporters. The Na+-glutamate transporters on neurons and astrocytes can reverse their glutamate transport and start secreting glutamate at a concentration capable of inducing excitotoxicity. This results in a buildup of glutamate and further damaging activation of glutamate receptors.
On the molecular level, calcium influx is not the only factor responsible for apoptosis induced by excitotoxicity. Recently, it has been noted that extrasynaptic NMDA receptor activation, triggered by either glutamate exposure or hypoxic/ischemic conditions, activates a CREB (cAMP response element binding) protein shut-off, which in turn causes loss of mitochondrial membrane potential and apoptosis. On the other hand, activation of synaptic NMDA receptors activates only the CREB pathway, which activates BDNF (brain-derived neurotrophic factor) rather than apoptosis.
Exogenous excitotoxins
Exogenous excitotoxins refer to neurotoxins that also act at postsynaptic cells but are not normally found in the body. These toxins may enter the body of an organism from the environment through wounds, food intake, aerial dispersion etc. Common excitotoxins include glutamate analogs that mimic the action of glutamate at glutamate receptors, including AMPA and NMDA receptors.
BMAA
The L-alanine derivative β-methylamino-L-alanine (BMAA) has long been identified as a neurotoxin which was first associated with the amyotrophic lateral sclerosis/parkinsonism–dementia complex (Lytico-bodig disease) in the Chamorro people of Guam. The widespread occurrence of BMAA can be attributed to cyanobacteria which produce BMAA as a result of complex reactions under nitrogen stress. Following research, excitotoxicity appears to be the likely mode of action for BMAA which acts as a glutamate agonist, activating AMPA and NMDA receptors and causing damage to cells even at relatively low concentrations of 10 μM. The subsequent uncontrolled influx of Ca2+ then leads to the pathophysiology described above. Further evidence of the role of BMAA as an excitotoxin is rooted in the ability of NMDA antagonists like MK801 to block the action of BMAA. More recently, evidence has been found that BMAA is misincorporated in place of L-serine in human proteins. A considerable portion of the research relating to the toxicity of BMAA has been conducted on rodents. A study published in 2016 with vervets (Chlorocebus sabaeus) in St. Kitts, which are homozygous for the apoE4 (APOE-ε4) allele (a condition which in humans is a risk factor for Alzheimer's disease), found that vervets orally administered BMAA developed hallmark histopathology features of Alzheimer's Disease including amyloid beta plaques and neurofibrillary tangle accumulation. Vervets in the trial fed smaller doses of BMAA were found to have correlative decreases in these pathology features. This study demonstrates that BMAA, an environmental toxin, can trigger neurodegenerative disease as a result of a gene/environment interaction. While BMAA has been detected in brain tissue of deceased ALS/PDC patients, further insight is required to trace neurodegenerative pathology in humans to BMAA.
See also
Glutamatergic system
Glutamic acid (flavor)
NMDA receptor antagonist
Dihydropyridine
References
Further reading
Invited Review
Food safety
Neurochemistry
Neurotrauma
Toxins | Excitotoxicity | [
"Chemistry",
"Biology",
"Environmental_science"
] | 2,357 | [
"Biochemistry",
"Toxins",
"Neurochemistry",
"Toxicology"
] |
1,043,867 | https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability%20theory | Energy–maneuverability theory is a model of aircraft performance. It was developed by Col. John Boyd, a fighter pilot, and Thomas P. Christie, a mathematician with the United States Air Force, and is useful in describing an aircraft's performance as the total of kinetic and potential energies or aircraft specific energy. It relates the thrust, weight, aerodynamic drag, wing area, and other flight characteristics of an aircraft into a quantitative model. This enables the combat capabilities of various aircraft or prospective design trade-offs to be predicted and compared.
Formula
All of these aspects of airplane performance are compressed into a single value, the specific excess power Ps, by the following formula:
Ps = V (T − D) / W
where V is true airspeed, T is thrust, D is drag and W is the aircraft weight.
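As a minimal illustration of how the formula is used (the function name and the thrust, drag, weight and airspeed values below are hypothetical, not taken from the article), a few lines of Python show specific excess power turning negative once drag exceeds thrust:

```python
def specific_excess_power(thrust_n, drag_n, weight_n, airspeed_mps):
    """Specific excess power P_s = V * (T - D) / W, in metres per second.

    All inputs are hypothetical example values in SI units; P_s > 0 means the
    aircraft can climb or accelerate, P_s < 0 means it must give up energy.
    """
    return airspeed_mps * (thrust_n - drag_n) / weight_n

# Hypothetical fighter-like numbers: 100 kN thrust, 150 kN weight, 250 m/s airspeed.
for drag in (40e3, 80e3, 120e3):
    ps = specific_excess_power(100e3, drag, 150e3, 250.0)
    print(f"drag = {drag / 1e3:5.0f} kN -> P_s = {ps:7.1f} m/s")
```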
History
John Boyd, a U.S. jet fighter pilot in the Korean War, began developing the theory in the early 1960s. He teamed with mathematician Thomas Christie at Eglin Air Force Base to use the base's high-speed computer to compare the performance envelopes of U.S. and Soviet aircraft from the Korean and Vietnam Wars. They completed a two-volume report on their studies in 1964. Energy Maneuverability came to be accepted within the U.S. Air Force and brought about improvements in the requirements for the F-15 Eagle and later the F-16 Fighting Falcon fighters.
See also
Lagrangian mechanics
Notes
References
Hammond, Grant T. The Mind of War: John Boyd and American Security. Washington, D.C.: Smithsonian Institution Press, 2001. and .
Coram, Robert. Boyd: The Fighter Pilot Who Changed the Art of War. New York: Back Bay Books, 2002. and .
Wendl, M.J., G.G. Grose, J.L. Porter, and V.R. Pruitt. Flight/Propulsion Control Integration Aspects of Energy Management. Society of Automotive Engineers, 1974, p. 740480.
Aerospace engineering | Energy–maneuverability theory | [
"Engineering"
] | 373 | [
"Aerospace engineering"
] |
1,044,194 | https://en.wikipedia.org/wiki/Integral%20Equations%20and%20Operator%20Theory | Integral Equations and Operator Theory is a journal dedicated to operator theory and its applications to engineering and other mathematical sciences. As some approaches to the study of integral equations (theoretically and numerically) constitute a subfield of operator theory, the journal also deals with the theory of integral equations and hence of differential equations. The journal consists of two sections: a main section consisting of refereed papers and a second consisting of short announcements of important results, open problems, information, etc. It has been published monthly by Springer-Verlag since 1978. The journal is also available online by subscription.
The founding editor-in-chief of the journal, in 1978, was Israel Gohberg. Its current editor-in-chief is Christiane Tretter.
References
External links
Journal homepage
Mathematical analysis journals
Academic journals established in 1978 | Integral Equations and Operator Theory | [
"Mathematics"
] | 164 | [
"Mathematical analysis",
"Mathematical analysis journals"
] |
1,046,024 | https://en.wikipedia.org/wiki/Formal%20equivalence%20checking | Formal equivalence checking process is a part of electronic design automation (EDA), commonly used during the development of digital integrated circuits, to formally prove that two representations of a circuit design exhibit exactly the same behavior.
Equivalence checking and levels of abstraction
In general, there is a wide range of possible definitions of functional equivalence covering comparisons between different levels of abstraction and varying granularity of timing details.
The most common approach is to consider the problem of machine equivalence which defines two synchronous design specifications functionally equivalent if, clock by clock, they produce exactly the same sequence of output signals for any valid sequence of input signals.
Microprocessor designers use equivalence checking to compare the functions specified for the instruction set architecture (ISA) with a register transfer level (RTL) implementation, ensuring that any program executed on both models will cause an identical update of the main memory content. This is a more general problem.
A system design flow requires comparison between a transaction level model (TLM), e.g., written in SystemC and its corresponding RTL specification. Such a check is becoming of increasing interest in a system-on-a-chip (SoC) design environment.
Synchronous machine equivalence
The register transfer level (RTL) behavior of a digital chip is usually described with a hardware description language, such as Verilog or VHDL. This description is the golden reference model that describes in detail which operations will be executed during which clock cycle and by which pieces of hardware. Once the logic designers, by simulations and other verification methods, have verified the register transfer description, the design is usually converted into a netlist by a logic synthesis tool. Equivalence is not to be confused with functional correctness, which must be determined by functional verification.
The initial netlist will usually undergo a number of transformations such as optimization, addition of Design For Test (DFT) structures, etc., before it is used as the basis for the placement of the logic elements into a physical layout. Contemporary physical design software will occasionally also make significant modifications (such as replacing logic elements with equivalent similar elements that have a higher or lower drive strength and/or area) to the netlist. Throughout every step of a very complex, multi-step procedure, the original functionality and the behavior described by the original code must be maintained. When the final tape-out is made of a digital chip, many different EDA programs and possibly some manual edits will have altered the netlist.
In theory, a logic synthesis tool guarantees that the first netlist is logically equivalent to the RTL source code. All the programs later in the process that make changes to the netlist also, in theory, ensure that these changes are logically equivalent to a previous version.
In practice, programs have bugs and it would be a major risk to assume that all steps from RTL through the final tape-out netlist have been performed without error. Also, in real life, it is common for designers to make manual changes to a netlist, commonly known as Engineering Change Orders, or ECOs, thereby introducing a major additional error factor. Therefore, instead of blindly assuming that no mistakes were made, a verification step is needed to check the logical equivalence of the final version of the netlist to the original description of the design (golden reference model).
Historically, one way to check the equivalence was to re-simulate, using the final netlist, the test cases that were developed for verifying the correctness of the RTL. This process is called gate level logic simulation. However, the problem with this is that the quality of the check is only as good as the quality of the test cases. Also, gate-level simulations are notoriously slow to execute, which is a major problem as the size of digital designs continues to grow exponentially.
An alternative way to solve this is to formally prove that the RTL code and the netlist synthesized from it have exactly the same behavior in all (relevant) cases. This process is called formal equivalence checking and is a problem that is studied under the broader area of formal verification.
A formal equivalence check can be performed between any two representations of a design: RTL <> netlist, netlist <> netlist or RTL <> RTL, though the latter is rare compared to the first two. Typically, a formal equivalence checking tool will also indicate with great precision at which point there exists a difference between two representations.
Methods
There are two basic technologies used for boolean reasoning in equivalence checking programs:
Binary decision diagrams, or BDDs: A specialized data structure designed to support reasoning about boolean functions. BDDs have become highly popular because of their efficiency and versatility.
Conjunctive Normal Form Satisfiability: SAT solvers return an assignment to the variables of a propositional formula that satisfies it if such an assignment exists. Almost any boolean reasoning problem can be expressed as a SAT problem.
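The following toy sketch illustrates the basic idea behind combinational equivalence checking: two representations are equivalent exactly when they agree on every input assignment, and any disagreement yields a counterexample. It is only a brute-force illustration in Python, with hypothetical functions standing in for an RTL model and a netlist; production LEC tools reason symbolically with BDDs or SAT rather than enumerating inputs.

```python
from itertools import product

def rtl_mux(a, b, sel):
    """'Golden' RTL-style description of a 2-to-1 multiplexer."""
    return b if sel else a

def netlist_mux(a, b, sel):
    """Gate-level version of the same function: (a AND NOT sel) OR (b AND sel)."""
    return (a and not sel) or (b and sel)

def check_equivalence(f, g, n_inputs):
    """Exhaustively compare two combinational functions over all 2**n input patterns.

    Returns (True, None) if they match everywhere, otherwise (False, counterexample).
    """
    for bits in product([False, True], repeat=n_inputs):
        if bool(f(*bits)) != bool(g(*bits)):
            return False, bits
    return True, None

equal, counterexample = check_equivalence(rtl_mux, netlist_mux, 3)
print("equivalent" if equal else f"mismatch on input {counterexample}")
```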
Commercial applications for equivalence checking
Major products in the Logic Equivalence Checking (LEC) area of EDA are:
FormalPro by Mentor Graphics
Questa SLEC by Mentor Graphics
Conformal by Cadence
Jasper by Cadence
Formality by Synopsys
VC Formal by Synopsys
360 EC by OneSpin Solutions
ATEC by ATEC
Generalizations
Equivalence Checking of Retimed Circuits: Sometimes it is helpful to move logic from one side of a register to another, and this complicates the checking problem.
Sequential Equivalence Checking: Sometimes, two machines are completely different at the combinational level, but should give the same outputs if given the same inputs. The classic example is two identical state machines with different encodings for the states. Since this cannot be reduced to a combinational problem, more general techniques are required.
Equivalence of Software Programs, i.e. checking if two well-defined programs that take N inputs and produce M outputs are equivalent: Conceptually, software can be turned into a state machine (this is effectively what compiling it for a computer produces, since a computer plus its memory forms a very large state machine). Then, in theory, various forms of property checking can ensure they produce the same output. This problem is even harder than sequential equivalence checking, since the outputs of the two programs may appear at different times; but it is possible, and researchers are working on it.
See also
Formal methods
References
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, A survey of the field. This article was derived, with permission, from Volume 2, Chapter 4, Equivalence Checking, by Fabio Somenzi and Andreas Kuehlmann.
R.E. Bryant, Graph-based algorithms for Boolean function manipulation, IEEE Transactions on Computers., C-35, pp. 677–691, 1986. The original reference on BDDs.
Sequential equivalence checking for RTL models. Nikhil Sharma, Gagan Hasteer and Venkat Krishnaswamy. EE Times.
External links
CADP – provides equivalence checking tools for asynchronous designs
OneSpin 360 EC-FPGA – Functional correctness of FPGA synthesis from RTL code to final netlist
Electronic circuit verification
Formal methods | Formal equivalence checking | [
"Engineering"
] | 1,461 | [
"Software engineering",
"Formal methods"
] |
1,046,155 | https://en.wikipedia.org/wiki/Projection-valued%20measure | In mathematics, particularly in functional analysis, a projection-valued measure (or spectral measure) is a function defined on certain subsets of a fixed set and whose values are self-adjoint projections on a fixed Hilbert space. A projection-valued measure (PVM) is formally similar to a real-valued measure, except that its values are self-adjoint projections rather than real numbers. As in the case of ordinary measures, it is possible to integrate complex-valued functions with respect to a PVM; the result of such an integration is a linear operator on the given Hilbert space.
Projection-valued measures are used to express results in spectral theory, such as the important spectral theorem for self-adjoint operators, in which case the PVM is sometimes referred to as the spectral measure. The Borel functional calculus for self-adjoint operators is constructed using integrals with respect to PVMs. In quantum mechanics, PVMs are the mathematical description of projective measurements. They are generalized by positive operator valued measures (POVMs) in the same sense that a mixed state or density matrix generalizes the notion of a pure state.
Definition
Let H denote a separable complex Hilbert space and (X, M) a measurable space consisting of a set X and a Borel σ-algebra M on X. A projection-valued measure π is a map from M to the set of bounded self-adjoint operators on H satisfying the following properties:
π(E) is an orthogonal projection for all E ∈ M.
π(∅) = 0 and π(X) = I, where ∅ is the empty set and I the identity operator.
If E1, E2, E3, ... in M are disjoint, then for all ψ ∈ H, π(E1 ∪ E2 ∪ E3 ∪ ...)ψ = Σj π(Ej)ψ.
π(E1 ∩ E2) = π(E1)π(E2) for all E1, E2 ∈ M.
The second and fourth properties show that if E1 and E2 are disjoint, i.e., E1 ∩ E2 = ∅, the images π(E1) and π(E2) are orthogonal to each other.
Let VE = im π(E) and its orthogonal complement VE⊥ = ker π(E) denote the image and kernel, respectively, of π(E). If VE is a closed subspace of H, then H can be written as the orthogonal decomposition H = VE ⊕ VE⊥ and π(E) is the unique identity operator on VE satisfying all four properties.
For every ξ, η ∈ H and E ∈ M the projection-valued measure forms a complex-valued measure on (X, M) defined as
μξ,η(E) := ⟨π(E)ξ | η⟩
with total variation at most ‖ξ‖‖η‖. It reduces to a real-valued measure when ξ = η
and a probability measure when ξ is a unit vector.
Example. Let (X, M, μ) be a σ-finite measure space and, for all E ∈ M, let
π(E) : L2(X) → L2(X)
be defined as
ψ ↦ 1E ψ,
i.e., as multiplication by the indicator function 1E on L2(X). Then π(E) = 1E defines a projection-valued measure. For example, if X = ℝ and E is a Borel subset of ℝ, then for φ, ψ ∈ L2(ℝ) the associated complex measure μφ,ψ takes a measurable function f : ℝ → ℝ and gives the integral
∫E f φ ψ* dμ.
Extensions of projection-valued measures
If π is a projection-valued measure on a measurable space (X, M), then the map
χE ↦ π(E)
extends to a linear map on the vector space of step functions on X. In fact, it is easy to check that this map is a ring homomorphism. This map extends in a canonical way to all bounded complex-valued measurable functions on X, and we have the following.
The theorem is also correct for unbounded measurable functions, but then the resulting operator will be an unbounded linear operator on the Hilbert space H.
This allows one to define the Borel functional calculus for such operators and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem. That is, if g is a measurable function, then a unique measure exists such that
Spectral theorem
Let H be a separable complex Hilbert space, A : H → H a bounded self-adjoint operator and σ(A) the spectrum of A. Then the spectral theorem says that there exists a unique projection-valued measure π^A, defined on the Borel subsets E ⊆ σ(A), such that
A = ∫σ(A) λ dπ^A(λ),
where the integral extends to an unbounded function λ when the spectrum of A is unbounded.
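In finite dimensions the spectral theorem is easy to see concretely: the projection-valued measure of a Hermitian matrix assigns to each Borel set the orthogonal projection onto the span of the eigenvectors whose eigenvalues lie in that set. The NumPy sketch below is an illustration only (it is not part of the article, and the matrix is an arbitrary example); it builds these projections and checks the defining properties together with the reconstruction A = Σ λ π({λ}).

```python
import numpy as np

# A small Hermitian (self-adjoint) matrix acting on C^3; its eigenvalues are distinct.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)

def pvm(subset):
    """Projection pi(E): sum of rank-one projectors for eigenvalues lying in E."""
    P = np.zeros_like(A)
    for lam, v in zip(eigenvalues, eigenvectors.T):
        if subset(lam):
            P += np.outer(v, v)
    return P

P_low = pvm(lambda lam: lam < 2.5)    # spectral projection for E1 = (-inf, 2.5)
P_high = pvm(lambda lam: lam >= 2.5)  # spectral projection for E2 = [2.5, inf)

assert np.allclose(P_low @ P_low, P_low)        # pi(E) is a projection
assert np.allclose(P_low @ P_high, 0.0)         # disjoint sets give orthogonal images
assert np.allclose(P_low + P_high, np.eye(3))   # pi(X) is the identity
A_rebuilt = sum(lam * pvm(lambda x, l=lam: np.isclose(x, l)) for lam in eigenvalues)
assert np.allclose(A_rebuilt, A)                # A = sum over lambda of lambda * pi({lambda})
print("All PVM properties verified for this example.")
```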
Direct integrals
First we provide a general example of a projection-valued measure based on direct integrals. Suppose (X, M, μ) is a measure space and let {Hx}x ∈ X be a μ-measurable family of separable Hilbert spaces. For every E ∈ M, let π(E) be the operator of multiplication by 1E on the Hilbert space
∫X⊕ Hx dμ(x).
Then π is a projection-valued measure on (X, M).
Suppose π, ρ are projection-valued measures on (X, M) with values in the projections of H, K. π, ρ are unitarily equivalent if and only if there is a unitary operator U : H → K such that
U π(E) U* = ρ(E)
for every E ∈ M.
Theorem. If (X, M) is a standard Borel space, then for every projection-valued measure π on (X, M) taking values in the projections of a separable Hilbert space, there is a Borel measure μ and a μ-measurable family of Hilbert spaces {Hx}x ∈ X , such that π is unitarily equivalent to multiplication by 1E on the Hilbert space ∫X⊕ Hx dμ(x).
The measure class of μ and the measure equivalence class of the multiplicity function x → dim Hx completely characterize the projection-valued measure up to unitary equivalence.
A projection-valued measure is homogeneous of multiplicity n if and only if the multiplicity function has constant value n. Clearly,
Theorem. Any projection-valued measure taking values in the projections of a separable Hilbert space is an orthogonal direct sum of homogeneous projection-valued measures:
where
and
Application in quantum mechanics
In quantum mechanics, given a projection-valued measure π from a measurable space X to the space of continuous endomorphisms upon a Hilbert space H,
the projective space of the Hilbert space is interpreted as the set of possible (normalizable) states of a quantum system,
the measurable space is the value space for some quantum property of the system (an "observable"),
the projection-valued measure expresses the probability that the observable takes on various values.
A common choice for X is the real line, but it may also be
ℝ3 (for position or momentum in three dimensions),
a discrete set (for angular momentum, energy of a bound state, etc.),
the 2-point set "true" and "false" for the truth-value of an arbitrary proposition about .
Let E be a measurable subset of X and φ a normalized vector quantum state in H, so that its Hilbert norm is unitary, ‖φ‖ = 1. The probability that the observable takes its value in E, given the system in state φ, is
Pπ(φ)(E) = ⟨φ | π(E)(φ)⟩.
We can parse this in two ways. First, for each fixed E, the projection π(E) is a self-adjoint operator on H whose 1-eigenspace consists of the states φ for which the value of the observable always lies in E, and whose 0-eigenspace consists of the states φ for which the value of the observable never lies in E.
Second, for each fixed normalized vector state φ, the association
Pπ(φ) : E ↦ ⟨φ | π(E)φ⟩
is a probability measure on X making the values of the observable into a random variable.
A measurement that can be performed by a projection-valued measure is called a projective measurement.
If X is the real number line, there exists, associated to π, a self-adjoint operator A defined on H by
A(φ) = ∫ℝ λ dπ(λ)(φ),
which reduces to
A(φ) = Σi λi π({λi})(φ)
if the support of π is a discrete subset of ℝ.
The above operator is called the observable associated with the spectral measure.
Generalizations
The idea of a projection-valued measure is generalized by the positive operator-valued measure (POVM), where the need for the orthogonality implied by projection operators is replaced by the idea of a set of operators that are a non-orthogonal partition of unity. This generalization is motivated by applications to quantum information theory.
See also
Spectral theorem
Spectral theory of compact operators
Spectral theory of normal C*-algebras
Notes
References
*
Mackey, G. W., The Theory of Unitary Group Representations, The University of Chicago Press, 1976
G. Teschl, Mathematical Methods in Quantum Mechanics with Applications to Schrödinger Operators, https://www.mat.univie.ac.at/~gerald/ftp/book-schroe/, American Mathematical Society, 2009.
Varadarajan, V. S., Geometry of Quantum Theory V2, Springer Verlag, 1970.
Linear algebra
Measures (measure theory)
Spectral theory | Projection-valued measure | [
"Physics",
"Mathematics"
] | 1,625 | [
"Physical quantities",
"Measures (measure theory)",
"Quantity",
"Size",
"Linear algebra",
"Algebra"
] |
1,046,687 | https://en.wikipedia.org/wiki/Equal-loudness%20contour | An equal-loudness contour is a measure of sound pressure level, over the frequency spectrum, for which a listener perceives a constant loudness when presented with pure steady tones. The unit of measurement for loudness levels is the phon and is arrived at by reference to equal-loudness contours. By definition, two sine waves of differing frequencies are said to have equal-loudness level measured in phons if they are perceived as equally loud by the average young person without significant hearing impairment.
The Fletcher–Munson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by Harvey Fletcher and Wilden A. Munson, and reported in a 1933 paper entitled "Loudness, its definition, measurement and calculation" in the Journal of the Acoustical Society of America. Fletcher–Munson curves have been superseded and incorporated into newer standards. The definitive curves are those defined in ISO 226 from the International Organization for Standardization, which are based on a review of modern determinations made in various countries.
Amplifiers often feature a "loudness" button, known technically as loudness compensation, that boosts low and high-frequency components of the sound. These are intended to offset the apparent loudness fall-off at those frequencies, especially at lower volume levels. Boosting these frequencies produces a flatter equal-loudness contour that appears to be louder even at low volume, preventing the perceived sound from being dominated by the mid-frequencies where the ear is most sensitive.
Fletcher–Munson curves
The first research on the topic of how the ear hears different frequencies at different levels was conducted by Fletcher and Munson in 1933. Until recently, it was common to see the term Fletcher–Munson used to refer to equal-loudness contours generally, even though a re-determination was carried out by Robinson and Dadson in 1956, which became the basis for an ISO 226 standard.
The generic term equal-loudness contours is now preferred, of which the Fletcher–Munson curves are now a sub-set, and especially since a 2003 survey by ISO redefined the curves in a new standard.
Experimental determination
The human auditory system is sensitive to frequencies from about 20 Hz to a maximum of around 20,000 Hz, although the upper hearing limit decreases with age. Within this range, the human ear is most sensitive between 2 and 5 kHz, largely due to the resonance of the ear canal and the transfer function of the ossicles of the middle ear.
Fletcher and Munson first measured equal-loudness contours using headphones (1933). In their study, test subjects listened to pure tones at various frequencies and over 10 dB increments in stimulus intensity. For each frequency and intensity, the listener also listened to a reference tone at 1000 Hz. Fletcher and Munson adjusted the reference tone until the listener perceived that it had the same loudness as the test tone. Loudness, being a psychological quantity, is difficult to measure, so Fletcher and Munson averaged their results over many test subjects to derive reasonable averages. The lowest equal-loudness contour represents the quietest audible tone—the absolute threshold of hearing. The highest contour is the threshold of pain.
Churcher and King carried out a second determination in 1937, but their results and Fletcher and Munson's showed considerable discrepancies over parts of the auditory diagram.
In 1956 Robinson and Dadson produced a new experimental determination that they believed was more accurate. It became the basis for a standard (ISO 226) that was considered definitive until 2003, when ISO revised the standard on the basis of recent assessments by research groups worldwide.
Recent revision aimed at more precise determination – ISO 226:2023
Perceived discrepancies between early and more recent determinations led the International Organization for Standardization (ISO) to revise the standard curves in ISO 226. They did this in response to recommendations in a study coordinated by the Research Institute of Electrical Communication, Tohoku University, Japan. The study produced new curves by combining the results of several studies—by researchers in Japan, Germany, Denmark, UK, and the US. (Japan was the greatest contributor with about 40% of the data.)
This has resulted in the recent acceptance of a new set of curves standardized as ISO 226:2003. The report comments on the surprisingly large differences, and the fact that the original Fletcher–Munson contours are in better agreement with recent results than the Robinson–Dadson, which appear to differ by as much as 10–15 dB, especially in the low-frequency region, for reasons not explained.
According to the ISO report, the Robinson–Dadson results were the odd one out, differing more from the current standard than did the Fletcher–Munson curves. The report states that it is fortunate that the 40-phon Fletcher–Munson curve on which the A-weighting standard was based turns out to have been in agreement with modern determinations.
The report also comments on the large differences apparent in the low-frequency region, which remain unexplained. Possible explanations are:
The equipment used was not properly calibrated.
The criteria used for judging equal loudness at different frequencies had differed.
Subjects were not properly rested for days in advance, or were exposed to loud noise in traveling to the tests, which tensed the tensor tympani and stapedius muscles controlling low-frequency mechanical coupling.
Side versus frontal presentation
Real-life sounds from a reasonably distant source arrive as planar wavefronts. If the source of sound is directly in front of the listener, then both ears receive equal intensity, but at frequencies above about 1 kHz the sound that enters the ear canal is partially reduced by the head shadow, and also highly dependent on reflection off the pinna (outer ear). Off-centre sounds result in increased head masking at one ear, and subtle changes in the effect of the pinna, especially at the other ear. This combined effect of head-masking and pinna reflection is quantified in a set of curves in three-dimensional space referred to as head-related transfer functions (HRTFs). Frontal presentation is now regarded as preferable when deriving equal-loudness contours, and the latest ISO standard is specifically based on frontal and central presentation.
Because no HRTF is involved in normal headphone listening, equal-loudness curves derived using headphones are valid only for the special case of what is called side-presentation, which is not how we normally hear.
The Robinson–Dadson determination used loudspeakers, and for a long time the difference from the Fletcher–Munson curves was explained partly on the basis that the latter used headphones. However, the ISO report actually lists the latter as using compensated headphones, though it doesn't make clear how Robinson–Dadson achieved compensation.
Headphones versus loudspeaker testing
Good headphones, well sealed to the ear, provide a flat low-frequency pressure response to the ear canal, with low distortion even at high intensities. At low frequencies, the ear is purely pressure-sensitive, and the cavity formed between headphones and ear is too small to introduce modifying resonances. Headphone testing is, therefore, a good way to derive equal-loudness contours below about 500 Hz, though reservations have been expressed about the validity of headphone measurements when determining the actual threshold of hearing, based on the observation that closing off the ear canal produces increased sensitivity to the sound of blood flow within the ear, which the brain appears to mask in normal listening conditions. At high frequencies, headphone measurement becomes unreliable, and the various resonances of pinnae (outer ears) and ear canals are severely affected by proximity to the headphone cavity.
With speakers, the opposite is true. A flat low-frequency response is hard to obtain—except in free space high above ground, or in a very large and anechoic chamber that is free from reflections down to 20 Hz. Until recently, it was not possible to achieve high levels at frequencies down to 20 Hz without high levels of harmonic distortion. Even today, the best speakers are likely to generate around 1 to 3% of total harmonic distortion, corresponding to 30 to 40 dB below fundamental. This is not good enough, given the steep rise in loudness (rising to as much as 24 dB per octave) with frequency revealed by the equal-loudness curves below about 100 Hz. A good experimenter must ensure that trial subjects really hear the fundamental and not harmonics—especially the third harmonic, which is especially strong as a speaker cone's travel becomes limited as its suspension reaches the limit of compliance. A possible way around the problem is to use acoustic filtering, such as by resonant cavity, in the speaker setup. A flat free-field high-frequency response up to 20 kHz, on the other hand, is comparatively easy to achieve with modern speakers on-axis. These effects must be considered when comparing results of various attempts to measure equal-loudness contours.
Relevance to sound level and noise measurements
The A-weighting curve—in widespread use for noise measurement—is said to have been based on the 40-phon Fletcher–Munson curve. However, research in the 1960s demonstrated that determinations of equal-loudness made using pure tones are not directly relevant to our perception of noise. This is because the cochlea in our inner ear analyzes sounds in terms of spectral content, each "hair-cell" responding to a narrow band of frequencies known as a critical band. The high-frequency bands are wider in absolute terms than the low-frequency bands, and therefore "collect" proportionately more power from a noise source. However, when more than one critical band is stimulated, the signals to the brain add the various bands to produce the impressions of loudness. For these reasons equal-loudness curves derived using noise bands show an upwards tilt above 1 kHz and a downward tilt below 1 kHz when compared to the curves derived using pure tones.
Various weighting curves were derived in the 1960s, in particular as part of the DIN 4550 standard for audio quality measurement, which differed from the A-weighting curve, showing more of a peak around 6 kHz. These gave a more meaningful subjective measure of noise on audio equipment, especially on the newly invented compact cassette tape recorders with Dolby noise reduction, which were characterized by a noise spectrum dominated by the higher frequencies.
BBC Research conducted listening trials in an attempt to find the best weighting curve and rectifier combination for use when measuring noise in broadcast equipment, examining the various new weighting curves in the context of noise rather than tones, confirming that they were much more valid than A-weighting when attempting to measure the subjective loudness of noise. This work also investigated the response of human hearing to tone-bursts, clicks, pink noise and a variety of other sounds that, because of their brief impulsive nature, do not give the ear and brain sufficient time to respond. The results were reported in BBC Research Report EL-17 1968/8 entitled The Assessment of Noise in Audio Frequency Circuits.
The ITU-R 468 noise weighting curve, originally proposed in CCIR recommendation 468, but later adopted by numerous standards bodies (IEC, BSI, JIS, ITU) was based on the research, and incorporates a special quasi-peak detector to account for our reduced sensitivity to short bursts and clicks. It is widely used by Broadcasters and audio professionals when they measure noise on broadcast paths and audio equipment, so they can subjectively compare equipment types with different noise spectra and characteristics.
See also
A-weighting
Audio quality measurement
Audiogram
CCIR (ITU) 468 Noise Weighting
dB(A)
ITU-R 468 noise weighting
Listener fatigue
Luminosity function, the same concept in vision
Mel scale
Pure tone audiometry
Robinson–Dadson curves
Sound level meter
Weighting filter
Notes
References
Audio Engineer's Reference Book, 2nd Ed., 1999, edited Michael Talbot Smith, Focal Press.
An Introduction to the Psychology of Hearing 5th ed, Brian C.J. Moore, Elsevier Press.
External links
ISO Standard
Precise and Full-range Determination of Two-dimensional Equal Loudness Contours
Fletcher–Munson is not Robinson–Dadson (PDF)
Full Revision of International Standards for Equal-Loudness Level Contours (ISO 226)
Test your hearing – A tool for measuring your equal-loudness contours
Equal-loudness contour measurements in detail
Evaluation of Loudness-level weightings and LLSEL JASA
A Model of Loudness Applicable to Time-Varying Sounds AESJ Article
Psychoacoustics
Audio engineering
ISO standards
Sound
Acoustics | Equal-loudness contour | [
"Physics",
"Engineering"
] | 2,604 | [
"Electrical engineering",
"Audio engineering",
"Classical mechanics",
"Acoustics"
] |
2,260,140 | https://en.wikipedia.org/wiki/Surface%20roughness | Surface roughness can be regarded as the quality of a surface of not being smooth and it is hence linked to human (haptic) perception of the surface texture. From a mathematical perspective it is related to the spatial variability structure of surfaces, and inherently it is a multiscale property. It has different interpretations and definitions depending on the disciplines considered.
In surface metrology
Surface roughness, often shortened to roughness, is a component of surface finish (surface texture). It is quantified by the deviations in the direction of the normal vector of a real surface from its ideal form. If these deviations are large, the surface is rough; if they are small, the surface is smooth. In surface metrology, roughness is typically considered to be the high-frequency, short-wavelength component of a measured surface. However, in practice it is often necessary to know both the amplitude and frequency to ensure that a surface is fit for a purpose.
Roughness plays an important role in determining how a real object will interact with its environment. In tribology, rough surfaces usually wear more quickly and have higher friction coefficients than smooth surfaces. Roughness is often a good predictor of the performance of a mechanical component, since irregularities on the surface may form nucleation sites for cracks or corrosion. On the other hand, roughness may promote adhesion. Generally speaking, rather than scale specific descriptors, cross-scale descriptors such as surface fractality provide more meaningful predictions of mechanical interactions at surfaces including contact stiffness and static friction.
Although a high roughness value is often undesirable, it can be difficult and expensive to control in manufacturing. For example, it is difficult and expensive to control surface roughness of fused deposition modelling (FDM) manufactured parts. Decreasing the roughness of a surface usually increases its manufacturing cost. This often results in a trade-off between the manufacturing cost of a component and its performance in application.
Roughness can be measured by manual comparison against a "surface roughness comparator" (a sample of known surface roughness), but more generally a surface profile measurement is made with a profilometer. These can be of the contact variety (typically a diamond stylus) or optical (e.g.: a white light interferometer or laser scanning confocal microscope).
However, controlled roughness can often be desirable. For example, a gloss surface can be too shiny to the eye and too slippery to the finger (a touchpad is a good example) so a controlled roughness is required. This is a case where both amplitude and frequency are very important.
Parameters
A roughness value can either be calculated on a profile (line) or on a surface (area). The profile roughness parameters (Ra, Rq, ...) are more common. The area roughness parameters (Sa, Sq, ...) give more significant values.
Profile roughness parameters
The profile roughness parameters are included in BS EN ISO 4287:2000 British standard, identical with the ISO 4287:1997 standard. The standard is based on the ″M″ (mean line) system.
There are many different roughness parameters in use, but Ra is by far the most common, though this is often for historical reasons and not for particular merit, as the early roughness meters could only measure Ra. Other common parameters include Rz, Rq, and Rsk. Some parameters are used only in certain industries or within certain countries. For example, the Rk family of parameters is used mainly for cylinder bore linings, and the Motif parameters are used primarily in the French automotive industry. The MOTIF method provides a graphical evaluation of a surface profile without filtering waviness from roughness. A motif consists of the portion of a profile between two peaks, and the final combinations of these motifs eliminate ″insignificant″ peaks and retain ″significant″ ones. Please note that Ra is a dimensional quantity whose unit can be the micrometer or the microinch.
Since these parameters reduce all of the information in a profile to a single number, great care must be taken in applying and interpreting them. Small changes in how the raw profile data is filtered, how the mean line is calculated, and the physics of the measurement can greatly affect the calculated parameter. With modern digital equipment, the scan can be evaluated to make sure there are no obvious glitches that skew the values.
Because it may not be obvious to many users what each of the measurements really means, a simulation tool allows a user to adjust key parameters, visualizing how surfaces which are obviously different to the human eye are differentiated by the measurements. For example, Ra fails to distinguish between two surfaces where one is composed of peaks on an otherwise smooth surface and the other is composed of troughs of the same amplitude. Such tools can be found in app format.
By convention every 2D roughness parameter is a capital R followed by additional characters in the subscript. The subscript identifies the formula that was used, and the R means that the formula was applied to a 2D roughness profile. Different capital letters imply that the formula was applied to a different profile. For example, Ra is the arithmetic average of the roughness profile, Pa is the arithmetic average of the unfiltered raw profile, and Sa is the arithmetic average of the 3D roughness.
Each of the formulas listed in the tables assumes that the roughness profile has been filtered from the raw profile data and the mean line has been calculated. The roughness profile contains n ordered, equally spaced points along the trace, and yi is the vertical distance from the mean line to the ith data point. Height is assumed to be positive in the up direction, away from the bulk material.
Amplitude parameters
Amplitude parameters characterize the surface based on the vertical deviations of the roughness profile from the mean line. Many of them are closely related to the parameters found in statistics for characterizing population samples. For example, Ra is the arithmetic average value of the filtered roughness profile determined from deviations about the center line within the evaluation length, and Rt is the range of the collected roughness data points.
The arithmetic average roughness, Ra, is the most widely used one-dimensional roughness parameter.
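As an illustrative sketch (not part of the article; the profile below is synthetic and the simple mean-line removal stands in for the filtering a real instrument would apply), the amplitude parameters can be computed from sampled profile heights as follows, with Ra the mean absolute deviation, Rq the root-mean-square deviation and Rt the total height:

```python
import numpy as np

def roughness_parameters(z):
    """Compute simple amplitude parameters from a sampled profile z (heights).

    The mean line is taken as the arithmetic mean of the samples; real
    instruments first filter the raw profile (form/waviness removal), which is
    omitted in this toy sketch.
    """
    y = np.asarray(z, dtype=float)
    y = y - y.mean()              # deviations about the mean line
    Ra = np.mean(np.abs(y))       # arithmetic average roughness
    Rq = np.sqrt(np.mean(y**2))   # root-mean-square roughness
    Rt = y.max() - y.min()        # total height of the profile
    return Ra, Rq, Rt

# Synthetic profile: gentle waviness plus fine roughness (heights in micrometres).
x = np.linspace(0.0, 4.0, 2000)
profile = 0.5 * np.sin(2 * np.pi * x) + 0.1 * np.sin(60 * np.pi * x)

Ra, Rq, Rt = roughness_parameters(profile)
print(f"Ra = {Ra:.3f} um, Rq = {Rq:.3f} um, Rt = {Rt:.3f} um")
```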
Here is a common conversion table with roughness grade numbers:
Slope, spacing and counting parameters
Slope parameters describe characteristics of the slope of the roughness profile. Spacing and counting parameters describe how often the profile crosses certain thresholds. These parameters are often used to describe repetitive roughness profiles, such as those produced by turning on a lathe.
Other "frequency" parameters are Sm, a and q. Sm is the mean spacing between peaks. Just as with real mountains it is important to define a "peak". For Sm the surface must have dipped below the mean surface before rising again to a new peak. The average wavelength a and the root mean square wavelength q are derived from a. When trying to understand a surface that depends on both amplitude and frequency it is not obvious which pair of metrics optimally describes the balance, so a statistical analysis of pairs of measurements can be performed (e.g.: Rz and a or Ra and Sm) to find the strongest correlation.
Common conversions:
Bearing ratio curve parameters
These parameters are based on the bearing ratio curve (also known as the Abbott-Firestone curve.) This includes the Rk family of parameters.
Fractal theory
The mathematician Benoît Mandelbrot has pointed out the connection between surface roughness and fractal dimension. The description provided by a fractal at the microroughness level may allow the control of the material properties and the type of the occurring chip formation. But fractals cannot provide a full-scale representation of a typical machined surface affected by tool feed marks; it ignores the geometry of the cutting edge. (J. Paulo Davim, 2010, op.cit.). Fractal descriptors of surfaces have an important role to play in correlating physical surface properties with surface structure. Across multiple fields, connecting physical, electrical and mechanical behavior with conventional surface descriptors of roughness or slope has been challenging. By employing measures of surface fractality together with measures of roughness or surface shape, certain interfacial phenomena including contact mechanics, friction and electrical contact resistance, can be better interpreted with respect to surface structure.
Areal roughness parameters
Areal roughness parameters are defined in the ISO 25178 series. The resulting values are Sa, Sq, Sz,... Many optical measurement instruments are able to measure the surface roughness over an area. Area measurements are also possible with contact measurement systems. Multiple, closely spaced 2D scans are taken of the target area. These are then digitally stitched together using relevant software, resulting in a 3D image and accompanying areal roughness parameters.
Practical effects
Surface structure plays a key role in governing contact mechanics, that is to say the mechanical behavior exhibited at an interface between two solid objects as they approach each other and transition from conditions of non-contact to full contact. In particular, normal contact stiffness is governed predominantly by asperity structures (roughness, surface slope and fractality) and material properties.
In terms of engineering surfaces, roughness is considered to be detrimental to part performance. As a consequence, most manufacturing prints establish an upper limit on roughness, but not a lower limit. An exception is in cylinder bores where oil is retained in the surface profile and a minimum roughness is required.
Surface structure is often closely related to the friction and wear properties of a surface. A surface with a higher fractal dimension, a large amplitude parameter value, or positive skewness will usually have somewhat higher friction and wear quickly. The peaks in the roughness profile are not always the points of contact. The form and waviness (i.e. both amplitude and frequency) must also be considered.
In Earth Sciences
In Earth Sciences (e.g., Shepard et al., 2001; Smith, 2014) and Ecology (e.g., Riley et al., 1999; Sappington et al., 2007) surface roughness has a quite broad meaning (e.g. Smith, 2014), with multiple definitions, and generally it is considered a multi-scale property related to surface spatial variability; it is often referred as surface texture (e.g., Trevisani et al., 2012), given the evident analogies to image texture (e.g., Haralick et al. 1973; Lucieer and Stein, 2005) when the analysis is performed on digital elevation models. From this perspective there are various interlinks with methodologies related to geostatistics (e.g., Herzfeld and Higginson, 1996), fractal analysis (e.g. Bez and Bertrand, 2011) and pattern recognition (e.g., Ojala et al. 2002), including many interrelations with remote sensing approaches. In the context of geomorphometry (or just morphometry, Pike, 2000) the applications cover many research topics in applied and environmental geology, geomorphology, geostructural studies and soil science. An example (non exhaustive) of the related literature can be found in the following articles:
Cavalli and Marchi, 2008
Dusséaux and Vannier, 2022
Evans et al., 2022
Frankel and Dolan 2007
Glenn et al. 2006
Grohmann et al., 2011
Guth, 1999
Lindsay, 2019
Misiuk et al., 2021
Pollyea and Fairley, 2011
Trevisani and Rocca, 2015
Trevisani et al. 2023
Woodcock, 1977
Soil-surface roughness
Soil-surface roughness (SSR) refers to the vertical variations present in the micro- and macro-relief of a soil surface, as well as their stochastic distribution. There are four distinct classes of SSR, each one of them representing a characteristic vertical length scale; the first class includes microrelief variations from individual soil grains to aggregates on the order of 0.053–2.0 mm; the second class consists of variations due to soil clods ranging between 2 and 100 mm; the third class of soil surface roughness is systematic elevation differences due to tillage, referred to as oriented roughness (OR), ranging between 100 and 300 mm; the fourth class includes planar curvature, or macro-scale topographic features.
The first two classes account for the so-called microroughness, which has been shown to be largely influenced on an event and seasonal timescale by rainfall and tillage, respectively. Microroughness is most commonly quantified by means of the Random Roughness, which is essentially the standard deviation of bed surface elevation data around the mean elevation, after correction for slope using the best-fit plane and removal of tillage effects in the individual height readings. Rainfall impact can lead to either a decay or an increase in microroughness, depending upon initial microroughness conditions and soil properties. On rough soil surfaces, the action of rainsplash detachment tends to smoothen the edges of soil surface roughness, leading to an overall decrease in RR. However, a recent study which examined the response of smooth soil surfaces to rainfall showed that RR can considerably increase for low initial microroughness length scales in the order of 0 – 5 mm. It was also shown that the increase or decrease is consistent among various SSR indices.
See also
Discontinuity (Geotechnical engineering)
Rugosity
Normal contact stiffness
Surface finish
Surface metrology
Surface roughness measurement ISO 25178
Waviness
Asperity (materials science)
References
External links
Surface Metrology Guide
Roughness terminology
Ra and Rz description
Surface Roughness (Finish) Review and Equations
SPE (Surface Profile Explorer)
Online calculator to convert roughness parameters Ra and Rz
Enache, Ştefănuţă, La qualité des surfaces usinées (Transl.: Quality of machined surfaces).Dunod, Paris, 1972, 343 pp.
Husu, A.P., Vitenberg, Iu., R., Palmov, V. A., Sherohovatost poverhnostei (Teoretiko-veroiatnostnii podhod) (Transl.: Surface roughness (theoretical-probabilistic approach)), Izdatelstvo "Nauka", Moskva, 1975, 342 pp.
Davim, J. Paulo, Surface Integrity in Machining, Springer-Verlag London Limited 2010,
Whitehouse, D. Handbook of Surface Metrology, Institute of Physics Publishing for Rank Taylor-Hobson Co., Bristol 1996
Geostatistical-based tools for surface roughness or image texture analysis:https://doi.org/10.5281/zenodo.7132160
Tribology
Metalworking terminology
Mechanical engineering | Surface roughness | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,048 | [
"Tribology",
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"Mechanical engineering"
] |
2,260,942 | https://en.wikipedia.org/wiki/Shear%20strength | In engineering, shear strength is the strength of a material or component against the type of yield or structural failure that occurs when the material or component fails in shear. A shear load is a force that tends to produce a sliding failure on a material along a plane that is parallel to the direction of the force. When paper is cut with scissors, the paper fails in shear.
In structural and mechanical engineering, the shear strength of a component is important for designing the dimensions and materials to be used for the manufacture or construction of the component (e.g. beams, plates, or bolts). In a reinforced concrete beam, the main purpose of reinforcing bar (rebar) stirrups is to increase the shear strength.
Equations
For shear stress, the relation
τ = (σ₁ − σ₃) / 2
applies, where
σ₁ is the major principal stress and
σ₃ is the minor principal stress.
In general: ductile materials (e.g. aluminum) fail in shear, whereas brittle materials (e.g. cast iron) fail in tension.
To calculate:
Given the total force at failure (F) and the force-resisting area (e.g. the cross-section of a bolt loaded in shear), the ultimate shear strength (τ) is:
τ = F / A
The average shear stress is likewise
τ_avg = F / A,
where
τ_avg is the average shear stress,
F is the shear force applied to the section of the part, and
A is the area of the section.
This is only the average stress; the actual stress distribution is not uniform. In real-world applications, this equation gives only an approximation, and the maximum shear stress will be higher. Because stress is rarely distributed evenly across a part, the shear strength needs to be higher than this average-stress estimate suggests.
Comparison
As a very rough guide, the ultimate shear strength (USS), ultimate tensile strength (UTS), shear yield stress (SYS) and tensile yield stress (TYS) can be related to one another by empirical ratios that depend on the material.
There are no published standard values for shear strength as there are for tensile and yield strength. Instead, it is commonly estimated as 60% of the ultimate tensile strength. Shear strength can also be measured with a torsion test, where it is equal to the torsional strength.
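As a rough numerical illustration of the relations above (average shear stress τ = F/A and the common USS ≈ 0.6 × UTS estimate), a minimal sketch follows; the pin diameter, load, and tensile strength are assumed values chosen only for illustration.

```python
import math

def average_shear_stress(force_n: float, area_m2: float) -> float:
    """Average shear stress tau = F / A, in pascals.
    This is only an average; the local maximum shear stress will be higher."""
    return force_n / area_m2

def estimated_shear_strength(uts_pa: float, factor: float = 0.6) -> float:
    """Rule-of-thumb estimate: USS is roughly 60% of the ultimate tensile strength."""
    return factor * uts_pa

# Illustrative example: a 10 mm diameter pin loaded in single shear with 15 kN
area = math.pi / 4 * 0.010 ** 2           # resisting cross-section, m^2
tau_avg = average_shear_stress(15e3, area)
uss = estimated_shear_strength(400e6)      # assumed UTS of 400 MPa
print(f"average shear stress: {tau_avg / 1e6:.0f} MPa, "
      f"estimated shear strength: {uss / 1e6:.0f} MPa")
```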
When values measured from physical samples are desired, a number of testing standards are available, covering different material categories and testing conditions. In the US, ASTM standards for measuring shear strength include ASTM B769, B831, D732, D4255, D5379, and D7078. Internationally, ISO testing standards for shear strength include ISO 3597, 12579, and 14130.
See also
Shear modulus
Shear stress
Shear strain
Shear strength (soil)
Shear strength (Discontinuity)
Strength of materials
Tensile strength
References
Shear strength | Shear strength | [
"Engineering"
] | 560 | [
"Structural engineering",
"Shear strength",
"Mechanical engineering"
] |
2,262,238 | https://en.wikipedia.org/wiki/Nanaerobe | Nanaerobes are organisms that cannot grow in the presence of micromolar concentrations of oxygen, but can grow with and benefit from the presence of nanomolar concentrations of oxygen (e.g. Bacteroides fragilis). Like other anaerobes, these organisms do not require oxygen for growth. This growth benefit requires the expression of an oxygen respiratory chain that is typically associated with microaerophilic respiration. Recent studies suggest that respiration in low concentrations of oxygen is an ancient process which predates the emergence of oxygenic photosynthesis.
References
Cellular respiration | Nanaerobe | [
"Chemistry",
"Biology"
] | 120 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
2,262,293 | https://en.wikipedia.org/wiki/Rheometer | A rheometer is a laboratory device used to measure the way in which a viscous fluid (a liquid, suspension or slurry) flows in response to applied forces. It is used for those fluids which cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured than is the case for a viscometer. It measures the rheology of the fluid.
There are two distinctively different types of rheometers. Rheometers that control the applied shear stress or shear strain are called rotational or shear rheometers, whereas rheometers that apply extensional stress or extensional strain are extensional rheometers.
Rotational or shear type rheometers are usually designed as either a native strain-controlled instrument (control and apply a user-defined shear strain which can then measure the resulting shear stress) or a native stress-controlled instrument (control and apply a user-defined shear stress and measure the resulting shear strain).
Meanings and origin
The word rheometer comes from the Greek, and means a device for measuring flow. In the 19th century it was commonly used for devices to measure electric current, until the word was supplanted by galvanometer and ammeter. It was also used for the measurement of the flow of liquids, in medical practice (flow of blood) and in civil engineering (flow of water). This latter use persisted to the second half of the 20th century in some areas. Following the coining of the term rheology the word came to be applied to instruments for measuring the character rather than quantity of flow, and the other meanings are obsolete. (Principal Source: Oxford English Dictionary) The principle and working of rheometers is described in several texts.
Types of shear rheometer
Shearing geometries
Four basic shearing geometries can be defined:
Couette drag plate flow
Cylindrical flow
Poiseuille flow in a tube and
Plate-plate flow
The various types of shear rheometers then use one or a combination of these geometries.
Linear shear
One example of a linear shear rheometer is the Goodyear linear skin rheometer, which is used to test cosmetic cream formulations, and for medical research purposes to quantify the elastic properties of tissue.
The device works by attaching a linear probe to the surface of the tissue under test, a controlled cyclical force is applied, and the resultant shear force measured using a load cell. Displacement is measured using a Linear variable differential transformer (LVDT). Thus the basic stress–strain parameters are captured and analysed to derive the dynamic spring rate of the tissue under tests.
Pipe or capillary
Liquid is forced through a tube of constant cross-section and precisely known dimensions under conditions of laminar flow. Either the flow-rate or the pressure drop are fixed and the other measured. Knowing the dimensions, the flow-rate can be converted into a value for the shear rate and the pressure drop into a value for the shear stress. Varying the pressure or flow allows a flow curve to be determined. When a relatively small amount of fluid is available for rheometric characterization, a microfluidic rheometer with embedded pressure sensors can be used to measure pressure drop for a controlled flow rate.
Capillary rheometers are especially advantageous for characterization of therapeutic protein solutions since it determines the ability to be syringed. Additionally, there is an inverse relationship between the rheometry and solution stability, as well as thermodynamic interactions.
Dynamic shear rheometer
A dynamic shear rheometer, commonly known as DSR is used for research and development as well as for quality control in the manufacturing of a wide range of materials. Dynamic shear rheometers have been used since 1993 when Superpave was used for characterising and understanding high temperature rheological properties of asphalt binders in both the molten and solid state and is fundamental in order to formulate the chemistry and predict the end-use performance of these materials.
Rotational cylinder
The liquid is placed within the annulus of one cylinder inside another. One of the cylinders is rotated at a set speed. This determines the shear rate inside the annulus. The liquid tends to drag the other cylinder round, and the force it exerts on that cylinder (torque) is measured, which can be converted to a shear stress.
One version of this is the Fann V-G Viscometer, which runs at two speeds, (300 and 600 rpm) and therefore only gives two points on the flow curve. This is sufficient to define a Bingham plastic model which was once widely used in the oil industry for determining the flow character of drilling fluids. In recent years rheometers that spin at 600, 300, 200, 100, 6 & 3 RPM have become more commonplace. This allows for more complex fluids models such as Herschel–Bulkley to be used. Some models allow the speed to be continuously increased and decreased in a programmed fashion, which allows the measurement of time-dependent properties.
Cone and plate
The liquid is placed on horizontal plate and a shallow cone placed into it. The angle between the surface of the cone and the plate is around 1–2 degrees but can vary depending on the types of tests being run. Typically the plate is rotated and the torque on the cone measured. A well-known version of this instrument is the Weissenberg rheogoniometer, in which the movement of the cone is resisted by a thin piece of metal which twists—known as a torsion bar. The known response of the torsion bar and the degree of twist give the shear stress, while the rotational speed and cone dimensions give the shear rate. In principle the Weissenberg rheogoniometer is an absolute method of measurement providing it is accurately set up. Other instruments operating on this principle may be easier to use but require calibration with a known fluid.
Cone and plate rheometers can also be operated in an oscillating mode to measure elastic properties, or in combined rotational and oscillating modes.
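For the cone-and-plate geometry, the conversion from rotational speed and torque to shear rate and shear stress can be sketched with the standard small-angle relations (shear rate ≈ Ω/α and shear stress ≈ 3T/(2πR³)); these textbook formulas and the instrument values below are assumptions for illustration, not specifications of any particular rheometer.

```python
import math

def cone_plate_shear_rate(omega_rad_s: float, cone_angle_rad: float) -> float:
    """Small-angle cone-and-plate approximation: shear rate ~ Omega / alpha,
    nearly uniform across the gap (one reason this geometry is popular)."""
    return omega_rad_s / cone_angle_rad

def cone_plate_shear_stress(torque_nm: float, radius_m: float) -> float:
    """Shear stress from the measured torque: tau = 3*T / (2*pi*R^3)."""
    return 3.0 * torque_nm / (2.0 * math.pi * radius_m ** 3)

# Illustrative values: 1 degree cone, 25 mm radius, 10 rad/s, 2 mN*m torque
alpha = math.radians(1.0)
rate = cone_plate_shear_rate(10.0, alpha)        # ~573 1/s
stress = cone_plate_shear_stress(2e-3, 0.025)    # ~61 Pa
print(f"shear rate ~ {rate:.0f} 1/s, shear stress ~ {stress:.1f} Pa, "
      f"apparent viscosity ~ {stress / rate * 1e3:.0f} mPa*s")
```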
Basic concepts of shear rheometer
In the past, devices with controlled strain or strain rate (CR rheometers) were distinguished from rheometers with controlled stress (CS rheometers) depending on the measuring principle.
In a controlled strain (CR) rheometer, the sample is subjected to displacement or speed (strain or strain rate) using a DC motor, and the resulting torque (stress) is measured separately using an additional force-torque sensor (torque compensation transducer). The electric current used to generate the displacement or speed of the motor is not used as a measure of the torque acting in the sample. This mode of operation is also referred to as separate motor transducer mode (SMT).
Deflection angle/strain and shear rate are set by the motor based on the position control of the optical encoder in the lower part.
Sample reaction (the stress acting within the sample) is measured by an additional force-torque transducer (torque re-balance transducer)
The separation of drive and torque measurement has advantages in strain-controlled tests, since the motor's moment of inertia has no influence on the measured torque.
Limitations of the SMT mode can be found in stress-controlled measurements (e.g. creep tests)
In a controlled-stress (CS) rheometer, the torque acting in the sample is determined directly from the electrical torque generated in the motor. With such a design, no separate torque sensor is required. Usually, this mode of operation is described as combined motor-transducer mode (CMT).
The stress acting in the sample is determined directly from the torque generated in the motor, which is required to deform the sample.
Deflection angle/strain and shear rate are determined by the use of an optical encoder.
Single-motor rheometers allow characterization of samples in either strain/shear rate or shear stress-controlled tests
Since only one actor (motor) is required, the single-motor rheometer can be easily combined with additional application-specific accessories that enable the study of material properties in a variety of different applications.
Limitations may occur from less precise data evaluation in the transient regime of start-up shear tests.
Nowadays, there are device concepts that allow both working modes, the combined motor transducer mode and the separate motor transducer mode, by using two motors in one device. The use of only one motor enables measurements to be made in the combined motor transducer mode. Using both motors allows working in the separate motor transducer mode, where one motor is used to deform the sample while the other motor is used to record the torque acting in the sample. Furthermore, this concept allows for additional modes of operation, such as counter-rotating mode, where both motors can rotate or oscillate in opposite directions. This mode of operation is used, for example, to increase the maximum achievable shear rate range or for advanced rheooptical characterization of samples.
Types of extensional rheometer
The development of extensional rheometers has proceeded more slowly than shear rheometers, due to the challenges associated with generating a homogeneous extensional flow. Firstly, interactions of the test fluid or melt with solid interfaces will result in a component of shear flow, which will compromise the results. Secondly, the strain history of all the material elements must be controlled and known. Thirdly, the strain rates and strain levels must be high enough to stretch the polymeric chains beyond their normal radius of gyration, requiring instrumentation with a large range of deformation rates and a large travel distance.
Commercially available extensional rheometers have been segregated according to their applicability to viscosity ranges. Materials with a viscosity range from approximately 0.01 to 1 Pa·s (most polymer solutions) are best characterized with capillary breakup rheometers, opposed jet devices, or contraction flow systems. Materials with a viscosity range from approximately 1 to 1000 Pa·s are best characterized with filament stretching rheometers. Materials with a high viscosity of more than 1000 Pa·s, such as polymer melts, are best characterized by constant-length devices.
Extensional rheometry is commonly performed on materials that are subjected to a tensile deformation. This type of deformation can occur during processing, such as injection molding, fiber spinning, extrusion, blow-molding, and coating flows. It can also occur during use, such as decohesion of adhesives, pumping of hand soaps, and handling of liquid food products.
A list of currently and previously marketed commercially available extensional rheometers is shown in the table below.
Commercially available extensional rheometers
Rheotens
Rheotens is a fiber spinning rheometer, suitable for polymeric melts. The material is pumped from an upstream tube, and a set of wheels elongates the strand. A force transducer mounted on one of the wheels measures the resultant extensional force. Because of the pre-shear induced as the fluid is transported through the upstream tube, a true extensional viscosity is difficult to obtain. However, the Rheotens is useful to compare the extensional flow properties of a homologous set of materials.
CaBER
The CaBER is a capillary breakup rheometer. A small quantity of material is placed between plates, which are rapidly stretched to a fixed level of strain. The midpoint diameter is monitored as a function of time as the fluid filament necks and breaks up under the combined forces of surface tension, gravity, and viscoelasticity. The extensional viscosity can be extracted from the data as a function of strain and strain rate. This system is useful for low viscosity fluids, inks, paints, adhesives, and biological fluids.
FiSER
The FiSER (filament stretching extensional rheometer) is based on the works by Sridhar et al. and Anna et al. In this instrument, a set of linear motors drive a fluid filament apart at an exponentially increasing velocity while measuring force and diameter as a function of time and position. By deforming at an exponentially increasing rate, a constant strain rate can be achieved in the samples (barring endplate flow limitations). This system can monitor the strain-dependent extensional viscosity, as well as stress decay following flow cessation. A detailed presentation on the various uses of filament stretching rheometry can be found on the MIT web site.
Sentmanat
The Sentmanat extensional rheometer (SER) is actually a fixture that can be field installed on shear rheometers. A film of polymer is wound on two rotating drums, which apply constant or variable strain rate extensional deformation on the polymer film. The stress is determined from the torque exerted by the drums.
Other types of extensional rheometers
Acoustic rheometer
Acoustic rheometers employ a piezo-electric crystal that can easily launch a successive wave of extensions and contractions into the fluid. This non-contact method applies an oscillating extensional stress. Acoustic rheometers measure the sound speed and attenuation of ultrasound for a set of frequencies in the megahertz range. Sound speed is a measure of system elasticity. It can be converted into fluid compressibility. Attenuation is a measure of viscous properties. It can be converted into viscous longitudinal modulus. In the case of a Newtonian liquid, attenuation yields information on the volume viscosity. This type of rheometer works at much higher frequencies than others. It is suitable for studying effects with much shorter relaxation times than any other rheometer.
Falling plate
A simpler version of the filament stretching rheometer, the falling plate rheometer sandwiches liquid between two solid surfaces. The top plate is fixed, and bottom plate falls under the influence of gravity, drawing out a string of the liquid.
Capillary/contraction flow
Other systems involve liquid going through an orifice, expanding from a capillary, or sucked up from a surface into a column by a vacuum. A pressurized capillary rheometer can be used to design thermal treatments of fluid food. This instrumentation could help prevent over and under-processing of fluid food because extrapolation to high temperatures would not be necessary.
See also
Acoustic rheometer
Dynamic shear rheometer
Food rheology
Piezometer
Rheometry
References
K. Walters (1975) Rheometry (Chapman & Hall)
A.S.Dukhin and P.J.Goetz "Ultrasound for characterizing colloids", Elsevier, (2002)
External links
See Dynamic Shear Rheometer by Cooper Research Technology
Presentation on alternative uses of rheometers
Fluid dynamics
Measuring instruments
Tribology | Rheometer | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 3,091 | [
"Tribology",
"Chemical engineering",
"Materials science",
"Measuring instruments",
"Surface science",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
2,262,585 | https://en.wikipedia.org/wiki/Software%20package%20metrics | Various software package metrics are used in modular programming. They have been mentioned by Robert Cecil Martin in his 2002 book Agile software development: principles, patterns, and practices.
The term software package here refers to a group of related classes in object-oriented programming.
Number of classes and interfaces: The number of concrete and abstract classes (and interfaces) in the package is an indicator of the extensibility of the package.
Afferent couplings (Ca): The number of classes in other packages that depend upon classes within the package is an indicator of the package's responsibility. Afferent couplings signal inward.
Efferent couplings (Ce): The number of classes in other packages that the classes in a package depend upon is an indicator of the package's dependence on externalities. Efferent couplings signal outward.
Abstractness (A): The ratio of the number of abstract classes (and interfaces) in the analyzed package to the total number of classes in the analyzed package. The range for this metric is 0 to 1, with A=0 indicating a completely concrete package and A=1 indicating a completely abstract package.
Instability (I): The ratio of efferent coupling (Ce) to total coupling (Ce + Ca) such that I = Ce / (Ce + Ca). This metric is an indicator of the package's resilience to change. The range for this metric is 0 to 1, with I=0 indicating a completely stable package and I=1 indicating a completely unstable package.
Distance from the main sequence (D): The perpendicular distance of a package from the idealized line A + I = 1. D is calculated as D = | A + I - 1 |. This metric is an indicator of the package's balance between abstractness and stability. A package squarely on the main sequence is optimally balanced with respect to its abstractness and stability. Ideal packages are either completely abstract and stable (I=0, A=1) or completely concrete and unstable (I=1, A=0). The range for this metric is 0 to 1, with D=0 indicating a package that is coincident with the main sequence and D=1 indicating a package that is as far from the main sequence as possible.
Package dependency cycles: Package dependency cycles are reported along with the hierarchical paths of packages participating in package dependency cycles.
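Because the abstractness, instability and main-sequence-distance metrics above are simple ratios, they are straightforward to compute once the class counts and couplings are known. The following is a minimal sketch assuming those counts are already available; it is not tied to any particular analysis tool.

```python
from dataclasses import dataclass

@dataclass
class PackageMetrics:
    abstract_classes: int   # abstract classes and interfaces in the package
    concrete_classes: int   # concrete classes in the package
    ca: int                 # afferent couplings (incoming dependencies)
    ce: int                 # efferent couplings (outgoing dependencies)

    @property
    def abstractness(self) -> float:      # A = abstract / total classes
        total = self.abstract_classes + self.concrete_classes
        return self.abstract_classes / total if total else 0.0

    @property
    def instability(self) -> float:       # I = Ce / (Ce + Ca)
        total = self.ce + self.ca
        return self.ce / total if total else 0.0

    @property
    def distance(self) -> float:          # D = |A + I - 1|
        return abs(self.abstractness + self.instability - 1.0)

pkg = PackageMetrics(abstract_classes=2, concrete_classes=8, ca=3, ce=9)
print(f"A={pkg.abstractness:.2f} I={pkg.instability:.2f} D={pkg.distance:.2f}")
# A=0.20 I=0.75 D=0.05  -> close to the main sequence
```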
See also
Dependency inversion principle – a method to reduce coupling (Martin 2002:127).
References
External links
OO Metrics tutorial explains package metrics with examples, but gets the Instability index wrong; see page 262 of Martin's Agile Software Development: Principles, Patterns and Practices. Pearson Education. .
Software metrics
Object-oriented programming | Software package metrics | [
"Mathematics",
"Engineering"
] | 558 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
2,262,935 | https://en.wikipedia.org/wiki/Reissert%20indole%20synthesis | The Reissert indole synthesis is a series of chemical reactions designed to synthesize indole or substituted-indoles (4 and 5) from ortho-nitrotoluene 1 and diethyl oxalate 2.
Potassium ethoxide has been shown to give better results than sodium ethoxide.
Reaction mechanism
The first step of the synthesis is the condensation of o-nitrotoluene 1 with a diethyl oxalate 2 to give ethyl o-nitrophenylpyruvate 3. The reductive cyclization of 3 with zinc in acetic acid gives indole-2-carboxylic acid 4. If desired, 4 can be decarboxylated with heat to give indole 5.
Variations
Butin modification
In an intramolecular version of the Reissert reaction, a furan ring-opening provides the carbonyl necessary for cyclization to form an indole. A ketone side chain is present in the final product, allowing further modifications.
See also
Leimgruber-Batcho indole synthesis
References
Indole forming reactions
Name reactions | Reissert indole synthesis | [
"Chemistry"
] | 235 | [
"Name reactions",
"Ring forming reactions",
"Organic reactions"
] |
2,263,246 | https://en.wikipedia.org/wiki/Telluric%20acid | Telluric acid, or more accurately orthotelluric acid, is a chemical compound with the formula Te(OH)6, often written as H6TeO6. It is a white crystalline solid made up of octahedral Te(OH)6 molecules which persist in aqueous solution. In the solid state, there are two forms, rhombohedral and monoclinic, and both contain octahedral Te(OH)6 molecules, each containing one tellurium (Te) atom in the +6 oxidation state attached to six hydroxyl (–OH) groups; thus, it can be called tellurium(VI) hydroxide.
Telluric acid is a weak acid which is dibasic, forming tellurate salts with strong bases and hydrogen tellurate salts with weaker bases or upon hydrolysis of tellurates in water. It is used as tellurium-source in the synthesis of oxidation catalysts.
Preparation
Telluric acid is formed by the oxidation of tellurium or tellurium dioxide with a powerful oxidising agent such as hydrogen peroxide, chromium trioxide or sodium peroxide.
Crystallization of telluric acid solutions below 10 °C gives the telluric acid tetrahydrate, Te(OH)6·4H2O.
It is an oxidising agent (Eo = +1.02 V), although it is kinetically slow in its oxidations.
Chlorine, by comparison, is +1.36 V and selenous acid is +0.74 V in oxidizing conditions.
Properties and reactions
The anhydrous acid is stable in air at 100 °C, but above this temperature it dehydrates to form polymetatelluric acid, a white hygroscopic powder, and allotelluric acid, an acid syrup of unknown structure.
Typical salts of the acid contain tellurate and hydrogen tellurate anions. The presence of the tellurate ion has been confirmed in solid-state structures.
Strong heating at over 300 °C produces the α crystalline modification of tellurium trioxide, α-TeO3.
Reaction with diazomethane gives the hexamethyl ester, Te(OCH3)6.
Telluric acid and its salts mostly contain hexacoordinate tellurium. This is true even for salts such as magnesium tellurate, which is isostructural with magnesium molybdate and contains TeO6 octahedra.
Other forms of telluric acid
Metatelluric acid, H2TeO4, the tellurium analogue of sulfuric acid, H2SO4, is unknown. Allotelluric acid is not well characterised and may be a mixture.
Other tellurium acids
Tellurous acid, H2TeO3, containing tellurium in its +4 oxidation state, is known but not well characterised.
Hydrogen telluride is an unstable gas that forms hydrotelluric acid upon addition to water.
References
Hydroxides
Tellurates
Oxidizing acids
Chalcogen oxoacids | Telluric acid | [
"Chemistry"
] | 619 | [
"Acids",
"Hydroxides",
"Oxidizing agents",
"Oxidizing acids",
"Bases (chemistry)"
] |
2,263,256 | https://en.wikipedia.org/wiki/International%20Water%20Management%20Institute | The International Water Management Institute (IWMI) is a non-profit international water management research organisation under the CGIAR with its headquarters in Colombo, Sri Lanka, and offices across Africa and Asia. Research at the Institute focuses on improving how water and land resources are managed, with the aim of underpinning food security and reducing poverty while safeguarding the environment.
Its research focuses on: water availability and access, including adaptation to climate change; how water is used and how it can be used more productively; water quality and its relationship to health and the environment; and how societies govern their water resources. In 2012, IWMI was awarded the prestigious Stockholm Water Prize Laureate by Stockholm International Water Institute for its pioneering research, which has helped to improve agricultural water management, enhance food security, protect environmental health and alleviate poverty in developing countries.
IWMI is a member of CGIAR, a global research partnership that unites organizations engaged in research for sustainable development, and leads the CGIAR Research Program on Water, Land and Ecosystems. IWMI is also a partner in the CGIAR Research Programs on: Aquatic Agricultural Systems (AAS); Climate Change, Agriculture and Food Security (CCAFS); Dryland Systems; and Integrated Systems for the Humid Tropics.
History
Early focus on irrigation
The institute was founded under the name International Irrigation Management Institute (IIMI) in 1985 by the Ford Foundation and the Government of Sri Lanka, supported by the Consultative Group on International Agricultural Research and the World Bank. During the Green Revolution of the 1940s to 1970s, billions of dollars had been spent building large-scale irrigation systems. These contributed, along with new fertilizers, pesticides and high-yielding varieties of seeds, to helping many countries produce greater quantities of food crops. By the mid-1980s, however, these irrigation systems were no longer performing efficiently; IIMI's job was to find out why.
IIMI's researchers discovered that problems affecting irrigation were often more institutional than technical. It advocated ‘Participatory Irrigation Management’ (PIM) as the solution, an approach that sought to involve farmers in water management decisions. In 1992, the Rio de Janeiro Earth Summit gave credence to this approach by recommending that water management be decentralized, with farmers and other stakeholders playing a more important role in managing natural resources. Initially met with resistance, PIM went on to become the status quo for governments and major lending agencies. IIMI became a member of the CGIAR system in 1991.
Wider perspective
By the mid-1990s, competition for water resources was rising, thanks to a larger global population, expanding cities and increasing industrial applications. Viewing irrigation in isolation was no longer relevant to the global situation. A new approach was needed that would consider it within a river basin context, encompassing competing users and the environment. IIMI began developing new fields of research, on topics such as open and closed basins, water accounting, multiple-use systems, basin institutions, remote sensing analysis and environmental flows. In 1998, its name changed to the International Water Management Institute (IWMI), reflecting this new wider approach.
Although it was becoming evident that water could no longer be considered an "infinite resource", as had been the case in the 1950s when there were fewer people on the planet, no one knew just how scarce the resource was. This prompted IWMI to try to find out. Its research culminated in the publication of Water for food, Water for life: A comprehensive assessment of water management in agriculture. A map within the report showed that a third of the world's population already suffered from ‘water scarcity’. The report defined physical water scarcity, as being where there are insufficient water resources to meet the demands of the population, and economic water scarcity as where water requirements are not satisfied because of a lack of investment in water or human capacity.
Averting a global water crisis
IWMI's approach towards defining water scarcity provided a new context within which the scientific debate on water availability subsequently became centred. For example, the theme of the UN World Water Day in 2007 was Coping with Water Scarcity; The USA's Worldwatch Institute featured a chapter on water management in its assessment State of the World 2008; and reports published in 2009 by the World Economic Forum and UNESCO concluded that water scarcity is now a bigger threat than the global financial crisis. Dr. Rajendra K. Pachauri, Chair of the Intergovernmental Panel on Climate Change, also highlighted water scarcity at the 2009 Nobel Conference.
If current trends continue, global annual water usage is set to increase by more than two trillion cubic metres by 2030, rising to 6.9 trillion cubic metres. That equates to 40 per cent more than can be provided by available water supplies. At Stockholm World Water Week 2010, IWMI highlighted a six-point plan for averting a water crisis. According to the institute, the following actions are required: 1) gather high-quality data about water resources; 2) take better care of the environment; 3) reform how water resources are governed; 4) revitalize how water is used for farming; 5) better manage urban and municipal demands for water; and 6) involve marginalized people in water management.
In 2011, IWMI celebrated its 25th anniversary by commissioning a series of essays on agriculture and development.
Using water management to reduce poverty
IWMI's work in Gujarat, India, exemplifies how improving water management can influence people's livelihoods. The state faced the dual problem of bankrupt electricity utilities and depleted groundwater storage following the introduction of electricity subsidies to farmers from around 1970. The situation arose because the subsidies enabled farmers to easily pump groundwater from ever-increasing depths. The Asian Development Bank and World Bank both indicated that governments should cut the electricity subsidies and charge farmers based on metered consumption of power. However, when some state governments tried to do so, the farmers formed such powerful lobbies that several chief ministers lost their seats. A different solution was clearly required.
IWMI scientists who studied the problem suggested governments should introduce ‘intelligent rationing’ of farm power supply by separating the power cables carrying electricity to farmers from those supplying other rural users, such as domestic households and industries. They should then provide farmers with a high-quality power supply for a set number of hours each day at a price they could afford. Eventually Gujarat decided to include these recommendations in a larger programme to reform the electricity utility. A study conducted afterwards found its impacts to be much greater than anticipated. Prior to the change, tube-well owners had been holding rural communities to ransom by ‘stealing’ power for irrigation. After the cables were separated, rural households, schools and industries had a much higher-quality power supply, which in turn boosted individuals’ well-being.
See also
Environmental impact of irrigation
References
External links
International Water Management Institute
International Water Management Institute Publications
The World Bank's strategy, work and publications on water resources
International research institutes
Research institutes in Sri Lanka
Water and politics
Water and the environment
Water management
Water organizations
Water supply | International Water Management Institute | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,447 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
2,263,904 | https://en.wikipedia.org/wiki/Carbon%20footprint | A carbon footprint (or greenhouse gas footprint) is a calculated value or index that makes it possible to compare the total amount of greenhouse gases that an activity, product, company or country adds to the atmosphere. Carbon footprints are usually reported in tonnes of emissions (CO2-equivalent) per unit of comparison. Such units can be for example tonnes CO2-eq per year, per kilogram of protein for consumption, per kilometer travelled, per piece of clothing and so forth. A product's carbon footprint includes the emissions for the entire life cycle. These run from the production along the supply chain to its final consumption and disposal.
Similarly, an organization's carbon footprint includes the direct as well as the indirect emissions that it causes. The Greenhouse Gas Protocol (for carbon accounting of organizations) calls these Scope 1, 2 and 3 emissions. There are several methodologies and online tools to calculate the carbon footprint. They depend on whether the focus is on a country, organization, product or individual person. For example, the carbon footprint of a product could help consumers decide which product to buy if they want to be climate aware. For climate change mitigation activities, the carbon footprint can help distinguish those economic activities with a high footprint from those with a low footprint. So the carbon footprint concept allows everyone to make comparisons between the climate impacts of individuals, products, companies and countries. It also helps people devise strategies and priorities for reducing the carbon footprint.
The carbon dioxide equivalent (CO2eq) emissions per unit of comparison is a suitable way to express a carbon footprint. This sums up all the greenhouse gas emissions. It includes all greenhouse gases, not just carbon dioxide. And it looks at emissions from economic activities, events, organizations and services. In some definitions, only the carbon dioxide emissions are taken into account. These do not include other greenhouse gases, such as methane and nitrous oxide.
Various methods to calculate the carbon footprint exist, and these may differ somewhat for different entities. For organizations it is common practice to use the Greenhouse Gas Protocol. It includes three carbon emission scopes. Scope 1 refers to direct carbon emissions. Scope 2 and 3 refer to indirect carbon emissions. Scope 3 emissions are those indirect emissions that result from the activities of an organization but come from sources which they do not own or control.
For countries it is common to use consumption-based emissions accounting to calculate their carbon footprint for a given year. Consumption-based accounting using input-output analysis backed by super-computing makes it possible to analyse global supply chains. Countries also prepare national GHG inventories for the UNFCCC. The GHG emissions listed in those national inventories are only from activities in the country itself. This approach is called territorial-based accounting or production-based accounting. It does not take into account production of goods and services imported on behalf of residents. Consumption-based accounting does reflect emissions from goods and services imported from other countries.
Consumption-based accounting is therefore more comprehensive. This comprehensive carbon footprint reporting including Scope 3 emissions deals with gaps in current systems. Countries' GHG inventories for the UNFCCC do not include international transport. Comprehensive carbon footprint reporting looks at the final demand for emissions, to where the consumption of the goods and services takes place.
Definition
A formal definition of carbon footprint is as follows: "A measure of the total amount of carbon dioxide (CO2) and methane (CH4) emissions of a defined population, system or activity, considering all relevant sources, sinks and storage within the spatial and temporal boundary of the population, system or activity of interest. Calculated as carbon dioxide equivalent using the relevant 100-year global warming potential (GWP100)."
Scientists report carbon footprints in terms of equivalents of tonnes of CO2 emissions (CO2-equivalent). They may report them per year, per person, per kilogram of protein, per kilometer travelled, and so on.
In the definition of carbon footprint, some scientists include only CO2. But more commonly they include several of the notable greenhouse gases. They can compare various greenhouse gases by using carbon dioxide equivalents over a relevant time scale, like 100 years. Some organizations use the term greenhouse gas footprint or climate footprint to emphasize that all greenhouse gases are included, not just carbon dioxide.
The Greenhouse Gas Protocol includes all of the most important greenhouse gases. "The standard covers the accounting and reporting of seven greenhouse gases covered by the Kyoto Protocol – carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6) and nitrogen trifluoride (NF3)."
In comparison, the IPCC definition of carbon footprint in 2022 covers only carbon dioxide. It defines the carbon footprint as the "measure of the exclusive total amount of emissions of carbon dioxide (CO2) that is directly and indirectly caused by an activity or is accumulated over the lifecycle stages of a product." The IPCC report's authors adopted the same definition that had been proposed in 2007 in the UK. That publication included only carbon dioxide in the definition of carbon footprint. It justified this with the argument that other greenhouse gases were more difficult to quantify. This is because of their differing global warming potentials. They also stated that an inclusion of all greenhouse gases would make the carbon footprint indicator less practical. But there are disadvantages to this approach. One disadvantage of not including methane is that some products or sectors that have a high methane footprint such as livestock appear less harmful for the climate than they actually are.
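Because a carbon footprint aggregates different gases through their global warming potentials, converting an emissions inventory to CO2-equivalents is a weighted sum. The sketch below assumes indicative GWP100 factors (roughly in line with recent IPCC assessments; exact values differ between reports and reporting standards) and an invented facility inventory.

```python
# Indicative 100-year global warming potentials (assumed values; the exact
# factors differ between IPCC assessment reports and reporting standards).
GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0, "SF6": 23500.0}

def co2_equivalent(inventory_t: dict[str, float]) -> float:
    """Convert an inventory of gas emissions (tonnes of each gas)
    into tonnes of CO2-equivalent using GWP100 weighting."""
    return sum(GWP100[gas] * tonnes for gas, tonnes in inventory_t.items())

# Hypothetical annual inventory for a small facility (tonnes per gas)
inventory = {"CO2": 1200.0, "CH4": 15.0, "N2O": 0.8}
print(f"{co2_equivalent(inventory):,.0f} t CO2-eq")   # ~1,832 t CO2-eq
```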
Types of greenhouse gas emissions
The greenhouse gas protocol is a set of standards for tracking greenhouse gas emissions. The standards divide emissions into three scopes (Scope 1, 2 and 3) within the value chain. Greenhouse gas emissions caused directly by the organization such as by burning fossil fuels are referred to as Scope 1. Emissions caused indirectly by an organization, such as by purchasing secondary energy sources like electricity, heat, cooling or steam are called Scope 2. Lastly, indirect emissions associated with upstream or downstream processes are called Scope 3.
Direct carbon emissions (Scope 1)
Direct or Scope 1 carbon emissions come from sources on the site that is producing a product or delivering a service. An example for industry would be the emissions from burning a fuel on site. On the individual level, emissions from personal vehicles or gas-burning stoves are Scope 1.
Indirect carbon emissions (Scope 2)
Indirect carbon emissions are emissions from sources upstream or downstream from the process being studied. They are also known as Scope 2 or Scope 3 emissions.
Scope 2 emissions are the indirect emissions related to purchasing electricity, heat, or steam used on site. Examples of upstream carbon emissions include transportation of materials and fuels, any energy used outside of the production facility, and waste produced outside the production facility. Examples of downstream carbon emissions include any end-of-life processes or treatments, product and waste transportation, and emissions associated with selling the product. The GHG Protocol says it is important to calculate upstream and downstream emissions, although some double counting can occur because the upstream emissions of one party's consumption patterns can be another party's downstream emissions.
Other indirect carbon emissions (Scope 3)
Scope 3 emissions are all other indirect emissions derived from the activities of an organization. But they are from sources they do not own or control. The GHG Protocol's Corporate Value Chain (Scope 3) Accounting and Reporting Standard allows companies to assess their entire value chain emissions impact and identify where to focus reduction activities.
Scope 3 emission sources include emissions from suppliers and product users. These are also known as the value chain. Transportation of goods and other indirect emissions are also part of this scope. In 2022 about 30% of US companies reported Scope 3 emissions. The International Sustainability Standards Board is developing a recommendation to include Scope 3 emissions in all GHG reporting.
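Putting the three scopes together, an organization's total footprint is the sum of its scope totals, with Scope 3 typically spread over many value-chain categories. The minimal sketch below uses invented figures purely to illustrate how Scope 3 often dominates.

```python
# Hypothetical company footprint, tonnes CO2-eq per year (all figures invented)
scope1 = 4_000           # direct: on-site fuel combustion, company vehicles
scope2 = 7_500           # indirect: purchased electricity, heat, steam
scope3 = {               # other indirect: example value-chain categories
    "purchased goods and services": 38_000,
    "upstream transport": 6_200,
    "business travel": 1_100,
    "use of sold products": 22_000,
}

total = scope1 + scope2 + sum(scope3.values())
print(f"total: {total:,} t CO2-eq, "
      f"scope 3 share: {sum(scope3.values()) / total:.0%}")   # ~85%
```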
Purpose and strengths
The current rise in global average temperature is more rapid than previous changes. It is primarily caused by humans burning fossil fuels. The increase in greenhouse gases in the atmosphere is also due to deforestation and agricultural and industrial practices. These include cement production. The two most notable greenhouse gases are carbon dioxide and methane. Greenhouse gas emissions, and hence humanity's carbon footprint, have been increasing during the 21st century. The Paris Agreement aims to reduce greenhouse gas emissions enough to limit the rise in global temperature to no more than 1.5°C above pre-industrial levels.
The carbon footprint concept makes comparisons between the climate impacts of individuals, products, companies and countries. A carbon footprint label on products could enable consumers to choose products with a lower carbon footprint if they want to help limit climate change. For meat products, as an example, such a label could make it clear that beef has a higher carbon footprint than chicken.
Understanding the size of an organization's carbon footprint makes it possible to devise a strategy to reduce it. For most businesses the vast majority of emissions do not come from activities on site, known as Scope 1, or from energy supplied to the organization, known as Scope 2, but from Scope 3 emissions, the extended upstream and downstream supply chain. Therefore, ignoring Scope 3 emissions makes it impossible to detect all emissions of importance, which limits options for mitigation. Large companies in sectors such as clothing or automobiles would need to examine more than 100,000 supply chain pathways to fully report their carbon footprints.
The importance of displacement of carbon emissions has been known for some years. Scientists also call this carbon leakage. The idea of a carbon footprint addresses concerns of carbon leakage which the Paris Agreement does not cover. Carbon leakage occurs when importing countries outsource production to exporting countries. The outsourcing countries are often rich countries while the exporters are often low-income countries. Countries can make it appear that their GHG emissions are falling by moving "dirty" industries abroad, even if their emissions could be increasing when looked at from a consumption perspective.
Carbon leakage and related international trade have a range of environmental impacts. These include increased air pollution, water scarcity, biodiversity loss, raw material usage, and energy depletion.
Scholars have argued in favour of using both consumption-based and production-based accounting. This helps establish shared producer and consumer responsibility. Currently countries report on their annual GHG inventory to the UNFCCC based on their territorial emissions. This is known as the territorial-based or production-based approach. Including consumption-based calculations in the UNFCCC reporting requirements would help close loopholes by addressing the challenge of carbon leakage.
The Paris Agreement currently does not require countries to include in their national totals GHG emissions associated with international transport. These emissions are reported separately. They are not subject to the limitation and reduction commitments of Annex 1 Parties under the Climate Convention and Kyoto Protocol. The carbon footprint methodology includes GHG emissions associated with international transport, thereby assigning emissions caused by international trade to the importing country.
Underlying concepts for calculations
The calculation of the carbon footprint of a product, service or sector requires expert knowledge and careful examination of what is to be included. Carbon footprints can be calculated at different scales. They can apply to whole countries, cities, neighborhoods and also sectors, companies and products. Several free online carbon footprint calculators exist to calculate personal carbon footprints.
Software such as the "Scope 3 Evaluator" can help companies report emissions throughout their value chain. The software tools can help consultants and researchers to model global sustainability footprints. In each situation there are a number of questions that need to be answered. These include which activities are linked to which emissions, and which proportion should be attributed to which company. Software is essential for company management. But there is a need for new ways of enterprise resource planning to improve corporate sustainability performance.
To achieve 95% carbon footprint coverage, it would be necessary to assess 12 million individual supply-chain contributions. This is based on analyzing 12 sectoral case studies. The Scope 3 calculations can be made easier using input-output analysis. This is a technique originally developed by Nobel Prize-winning economist Wassily Leontief.
Consumption-based emission accounting based on input-output analysis
Consumption-based emission accounting traces the impacts of demand for goods and services along the global supply chain to the end-consumer. It is also called consumption-based carbon accounting. In contrast, a production-based approach to calculating GHG emissions is not a carbon footprint analysis. This approach is also called a territorial-based approach. The production-based approach includes only impacts physically produced in the country in question. Consumption-based accounting redistributes the emissions from production-based accounting. It considers that emissions in another country are necessary for the home country's consumption bundle.
Consumer-based accounting is based on input-output analysis. It is used at the highest levels for any economic research question related to environmental or social impacts. Analysis of global supply chains is possible using consumption-based accounting with input-output analysis assisted by super-computing capacity.
Leontief created Input-output analysis (IO) to demonstrate the relationship between consumption and production in an economy. It incorporates the entire supply chain. It uses input-output tables from countries' national accounts. It also uses international data such as UN Comtrade and Eurostat. Input-output analysis has been extended globally to multi-regional input-output analysis (MRIO). Innovations and technology enabling the analysis of billions of supply chains made this possible. Standards set by the United Nations underpin this analysis. The analysis enables a Structural Path Analysis. This scans and ranks the top supply chain nodes and paths. It conveniently lists hotspots for urgent action. Input-output analysis has increased in popularity because of its ability to examine global value chains.
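The input-output calculation behind consumption-based accounting can be sketched with the standard Leontief relation x = (I − A)⁻¹ y, where A is the matrix of technical coefficients and y is final demand; multiplying the resulting outputs by sectoral emission intensities gives the footprint. The two-sector example below uses invented numbers purely for illustration.

```python
import numpy as np

# Toy 2-sector economy: technical coefficients A[i, j] = input from sector i
# needed per unit of output of sector j (invented numbers).
A = np.array([[0.10, 0.30],
              [0.20, 0.05]])
# Direct emission intensities, kg CO2-eq per unit of sectoral output (invented).
e = np.array([0.8, 2.5])
# Final demand attributed to the consumer under study (e.g. a household or country).
y = np.array([100.0, 40.0])

# Leontief inverse L = (I - A)^-1 captures all upstream supply-chain requirements.
L = np.linalg.inv(np.eye(2) - A)
x = L @ y                       # total output required throughout the supply chain
footprint = e @ x               # consumption-based footprint, kg CO2-eq
print(f"total outputs: {x.round(1)}, footprint: {footprint:.0f} kg CO2-eq")
```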
Combination with life cycle analysis (LCA)
Life cycle assessment (LCA) is a methodology for assessing all environmental impacts associated with the life cycle of a commercial product, process, or service. It is not limited to the greenhouse gas emissions. It is also called life cycle analysis. It includes water pollution, air pollution, ecotoxicity and similar types of pollution. Some widely recognized procedures for LCA are included in the ISO 14000 series of environmental management standards. A standard called ISO 14040:2006 provides the framework for conducting an LCA study. ISO 14060 family of standards provides further sophisticated tools. These are used to quantify, monitor, report and validate or verify GHG emissions and removals.
Greenhouse gas product life cycle assessments can also comply with specifications such as Publicly Available Specification (PAS) 2050 and the GHG Protocol Life Cycle Accounting and Reporting Standard.
An advantage of LCA is the high level of detail that can be obtained on-site or by liaising with suppliers. However, LCA has been hampered by the artificial construction of a boundary after which no further impacts of upstream suppliers are considered. This can introduce significant truncation errors. LCA has been combined with input-output analysis. This enables on-site detailed knowledge to be incorporated. IO connects to global economic databases to incorporate the entire supply chain.
Problems
Shifting responsibility from corporations to individuals
Critics argue that the original aim of promoting the personal carbon footprint concept was to shift responsibility away from corporations and institutions and on to personal lifestyle choices. The fossil fuel company BP ran a large advertising campaign for the personal carbon footprint in 2005 which helped popularize this concept. This strategy, employed by many major fossil fuel companies, has been criticized for trying to shift the blame for negative consequences of those industries on to individual choices.
Geoffrey Supran and Naomi Oreskes of Harvard University argue that concepts such as carbon footprints "hamstring us, and they put blinders on us, to the systemic nature of the climate crisis and the importance of taking collective action to address the problem".
Relationship with other environmental impacts
A focus on carbon footprints can lead people to ignore or even exacerbate other related environmental issues of concern. These include biodiversity loss, ecotoxicity, and habitat destruction. It may not be easy to measure these other human impacts on the environment with a single indicator like the carbon footprint. Consumers may think that the carbon footprint is a proxy for environmental impact. In many cases this is not correct. There can be trade-offs between reducing the carbon footprint and other environmental protection goals. One example is the use of biofuel, which is a renewable energy source and can reduce the carbon footprint of the energy supply, but which can also pose ecological challenges during its production, because it is often produced in monocultures with ample use of fertilizers and pesticides. Another example is offshore wind parks, which could have unintended impacts on marine ecosystems.
The carbon footprint analysis solely focuses on greenhouse gas emissions, unlike a life-cycle assessment which is much broader and looks at all environmental impacts. Therefore, it is useful to stress in communication activities that the carbon footprint is just one in a family of indicators (e.g. ecological footprint, water footprint, land footprint, and material footprint), and should not be looked at in isolation. In fact, carbon footprint can be treated as one component of ecological footprint.
The "Sustainable Consumption and Production Hotspot Analysis Tool" (SCP-HAT) is a tool to place carbon footprint analysis into a wider perspective. It includes a number of socio-economic and environmental indicators. It offers calculations that are either consumption-based, following the carbon footprint approach, or production-based. The database of the SCP-HAT tool is underpinned by input–output analysis. This means it includes Scope 3 emissions. The IO methodology is also governed by UN standards. It is based on input-output tables of countries' national accounts and international trade data such as UN Comtrade, and therefore it is comparable worldwide.
Differing boundaries for calculations
The term carbon footprint has been applied to limited calculations that do not include Scope 3 emissions or the entire supply chain. This can lead to claims of misleading customers with regards to the real carbon footprints of companies or products.
Reported values
Greenhouse gas emissions overview
By products
The Carbon Trust has worked with UK manufacturers to produce "thousands of carbon footprint assessments". As of 2014 the Carbon Trust state they have measured 28,000 certifiable product carbon footprints.
Food
Plant-based foods tend to have a lower carbon footprint than meat and dairy. In many cases a much smaller footprint. This holds true when comparing the footprint of foods in terms of their weight, protein content or calories. The protein output of peas and beef provides an example. Producing 100 grams of protein from peas emits just 0.4 kilograms of carbon dioxide equivalents (CO2eq). To get the same amount of protein from beef, emissions would be nearly 90 times higher, at 35 kgCO2eq. Only a small fraction of the carbon footprint of food comes from transport and packaging. Most of it comes from processes on the farm, or from land use change. This means the choice of what to eat has a larger potential to reduce carbon footprint than how far the food has traveled, or how much packaging it is wrapped in.
By sector
The IPCC Sixth Assessment Report found that global GHG emissions have continued to rise across all sectors. Global consumption was the main cause. The most rapid growth was in transport and industry. A key driver of global carbon emissions is affluence. The IPCC noted that the wealthiest 10% in the world contribute between about one third and one half (36%–45%) of global GHG emissions. Researchers have previously found that affluence is the key driver of carbon emissions. It has a bigger impact than population growth. And it counters the effects of technological developments. Continued economic growth mirrors the increasing trend in material extraction and GHG emissions. “Industrial emissions have been growing faster since 2000 than emissions in any other sector, driven by increased basic materials extraction and production,” the IPCC said.
Transport
There can be wide variations in emissions for the transport of people. This is due to various factors. They include the length of the trip, the source of electricity in the local grid and the occupancy of public transport. In the case of driving, the type of vehicle and the number of passengers are factors. Over short to medium distances, walking or cycling are nearly always the lowest-carbon way to travel. The carbon footprint of cycling one kilometer is usually in the range of 16 to 50 grams CO2eq per km. For moderate or long distances, trains nearly always have a lower carbon footprint than other options.
By organization
Carbon accounting
By country
CO2 emissions of countries are typically measured on the basis of production. This accounting method is sometimes referred to as territorial emissions. Countries use it when they report their emissions, and set domestic and international targets such as Nationally Determined Contributions. Consumption-based emissions on the other hand are adjusted for trade. To calculate consumption-based emissions analysts have to track which goods are traded across the world. Whenever a product is imported, all CO2 emissions that were emitted in the production of that product are included. Consumption-based emissions reflect the lifestyle choices of a country's citizens.
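As an illustration only (the figures below are hypothetical and not taken from any inventory), the bookkeeping behind a consumption-based figure can be sketched in a few lines of Python: the territorial (production-based) total is adjusted by the emissions embodied in imports and exports.

```python
# Hypothetical numbers, for illustration only: a country's consumption-based
# footprint adjusts its territorial total for emissions embodied in trade.
production_Mt       = 500.0   # emitted inside the country's borders
embodied_in_imports = 120.0   # emitted abroad to make the goods it imports
embodied_in_exports =  80.0   # emitted at home to make the goods it exports

consumption_Mt = production_Mt + embodied_in_imports - embodied_in_exports
print(consumption_Mt)   # 540.0 Mt CO2: higher than the territorial figure,
                        # as is typical for a net importer of embodied carbon
```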
According to the World Bank, the global average carbon footprint in 2014 was about 5 tonnes of CO2 per person, measured on a production basis. The EU average for 2007 was about 13.8 tonnes CO2e per person. For the USA, Luxembourg and Australia it was over 25 tonnes CO2e per person. In 2017, the average for the USA was about 20 metric tonnes CO2e per person. This is one of the highest per capita figures in the world.
The footprints per capita of countries in Africa and India were well below average. Per capita emissions in India are low for its huge population. But overall the country is the third largest emitter of CO2 and fifth largest economy by nominal GDP in the world. Assuming a global population of around 9–10 billion by 2050, a carbon footprint of about 2–2.5 tonnes CO2e per capita is needed to stay within a 2 °C target. These carbon footprint calculations are based on a consumption-based approach using a Multi-Regional Input-Output (MRIO) database. This database accounts for all greenhouse gas (GHG) emissions in the global supply chain and allocates them to the final consumer of the purchased commodities.
Reducing the carbon footprint
Climate change mitigation
Efforts to reduce the carbon footprint of products, services and organizations help limit climate change. Such activities are called climate change mitigation.
Reducing industry's carbon footprint
Carbon offsetting can reduce a company's overall carbon footprint by providing it with a carbon credit. This compensates the company for carbon dioxide emissions by recognizing an equivalent reduction of carbon dioxide in the atmosphere. Reforestation, or restocking existing forests that have previously been depleted, is an example of carbon offsetting.
A carbon footprint study can identify specific and critical areas for improvement. It uses input-output analysis and scrutinizes the entire supply chain. Such an analysis could be used to eliminate the supply chains with the highest greenhouse gas emissions.
History
The term carbon footprint was first used in a BBC vegetarian food magazine in 1999, though the broader concept of ecological footprint, which encompasses the carbon footprint, had been used since at least 1992, as also chronicled by William Safire in the New York Times.
In 2005, fossil fuel company BP hired the advertising firm Ogilvy to run a large campaign popularizing the idea of a carbon footprint for individuals. The campaign instructed people to calculate their personal footprints and provided ways for people to "go on a low-carbon diet".
The carbon footprint is derived from the ecological footprint, which encompasses carbon emissions. The carbon footprint follows the logic of ecological footprint accounting, which tracks the resource use embodied in consumption, whether it is a product, an individual, a city, or a country. While in the ecological footprint, carbon emissions are translated into areas needed to absorb the carbon emissions, the carbon footprint on its own is expressed in the weight of carbon emissions per time unit. William Rees wrote the first academic publication about ecological footprints in 1992. Other related concepts from the 1990s are the "ecological backpack" and material input per unit of service (MIPS).
Trends and similar concepts
The International Sustainability Standards Board (ISSB) aims to bring global, rigorous oversight to carbon footprint reporting. It was formed under the International Financial Reporting Standards (IFRS) Foundation. It will require companies to report on their Scope 3 emissions. The ISSB has taken on board criticisms of other initiatives in its aims for universality. It consolidates the Climate Disclosure Standards Board, the Sustainability Accounting Standards Board and the Value Reporting Foundation. It complements the Global Reporting Initiative. It is influenced by the Task Force on Climate-Related Financial Disclosures. As of early 2023, Great Britain and Nigeria were preparing to adopt these standards.
The concept of total equivalent warming impact (TEWI) is the most used index for calculating carbon dioxide equivalent (CO2e) emissions in the air conditioning and refrigeration sectors, since it includes both the direct and indirect contributions and evaluates the emissions caused over the operating lifetime of systems. The Expanded Total Equivalent Warming Impact method has been used for an accurate evaluation of refrigerator emissions.
See also
Carbon emission
Carbon intensity
Carbon neutrality
Ecological footprint
Embedded emissions
Food miles
Greenhouse gas inventory
Individual action on climate change
Life-cycle greenhouse gas emissions of energy sources
Zero-carbon city
References
External links
The GHG Protocol
Environmental impact of the energy industry
Greenhouse gas emissions
Environmental indices
Environmental terminology
Articles containing video clips | Carbon footprint | [
"Chemistry"
] | 5,190 | [
"Greenhouse gases",
"Greenhouse gas emissions"
] |
2,263,974 | https://en.wikipedia.org/wiki/Bouveault%E2%80%93Blanc%20reduction | The Bouveault–Blanc reduction is a chemical reaction in which an ester is reduced to primary alcohols using absolute ethanol and sodium metal. It was first reported by Louis Bouveault and Gustave Louis Blanc in 1903. Bouveault and Blanc demonstrated the reduction of ethyl oleate and n-butyl oleate to oleyl alcohol. Modified versions were subsequently refined and published in Organic Syntheses.
This reaction is used commercially although for laboratory scale reactions it was made obsolete by the introduction of lithium aluminium hydride.
Reaction mechanism
Sodium metal is a one-electron reducing agent. Four equivalents of sodium are required to fully reduce each ester, although two more equivalents are typically consumed in deprotonating the product alcohols to alkoxides. Ethanol serves as a proton source. The reaction produces sodium alkoxides, according to the following stoichiometry:
RCO2R' + 6 Na + 4 EtOH → RCH2ONa + R'ONa + 4 NaOEt + H2
In practice, considerable sodium is consumed by the formation of hydrogen. For this reason, an excess of sodium is often required. Because the reaction of sodium with water is rapid, not to mention dangerous, the Bouveault-Blanc reaction requires anhydrous ethanol and can give low yields with insufficiently dry ethanol. The mechanism of the reaction follows:
Consistent with this mechanism, sodium-ethanol mixtures will also reduce ketones to alcohols.
This approach to reducing esters was widely used prior to the availability of hydride reducing agents such as lithium aluminium hydride and related reagents. It requires vigorous reaction conditions and has a significant risk of fires, explaining its relative unpopularity. One modification involves encapsulating the alkali metal into a silica gel, which has a safety and yield profile similar to that of hydride reagents. Another modification uses a sodium dispersion.
See also
Acyloin condensation – The reductive coupling of esters, using sodium, to yield an α-hydroxyketone
Akabori amino-acid reaction – The reduction of amino acid esters, by sodium, to yield aldehydes
Birch reduction – For the reduction of alkenes using sodium
Bouveault aldehyde synthesis – Another organometallic reaction by Bouveault where a Grignard reagent is converted to an aldehyde
References
External links
Animation of the Bouveault–Blanc reduction
Free radical reactions
Organic redox reactions
Name reactions | Bouveault–Blanc reduction | [
"Chemistry"
] | 499 | [
"Name reactions",
"Free radical reactions",
"Organic redox reactions",
"Organic reactions"
] |
2,264,346 | https://en.wikipedia.org/wiki/Electron%20configurations%20of%20the%20elements%20%28data%20page%29 | This page shows the electron configurations of the neutral gaseous atoms in their ground states. For each atom the subshells are given first in concise form, then with all subshells written out, followed by the number of electrons per shell. For phosphorus (element 15) as an example, the concise form is [Ne] 3s2 3p3. Here [Ne] refers to the core electrons which are the same as for the element neon (Ne), the last noble gas before phosphorus in the periodic table. The valence electrons (here 3s2 3p3) are written explicitly for all atoms.
Electron configurations of elements beyond hassium (element 108) have never been measured; predictions are used below.
As an approximate rule, electron configurations are given by the Aufbau principle and the Madelung rule. However there are numerous exceptions; for example the lightest exception is chromium, which would be predicted to have the configuration 1s2 2s2 2p6 3s2 3p6 3d4 4s2, written as [Ar] 3d4 4s2, but whose actual configuration given in the table below is [Ar] 3d5 4s1.
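As a rough illustration of the Aufbau/Madelung ordering described above, the following Python sketch (not part of the original page; the function names are mine) generates the predicted filling order by sorting subshells on (n + l, n) and prints the configuration it predicts for a given atomic number. It reproduces the concise forms for most elements but, by construction, misses exceptions such as chromium.

```python
SUBSHELL_LETTERS = "spdfg"

def madelung_order(max_n=8):
    """Subshells sorted by increasing n + l, ties broken by increasing n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def predicted_configuration(z):
    """Aufbau/Madelung prediction for a neutral atom with z electrons."""
    remaining, parts = z, []
    for n, l in madelung_order():
        if remaining <= 0:
            break
        occ = min(remaining, 2 * (2 * l + 1))   # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{SUBSHELL_LETTERS[l]}{occ}")
        remaining -= occ
    return " ".join(parts)

print(predicted_configuration(15))  # 1s2 2s2 2p6 3s2 3p3
print(predicted_configuration(24))  # ends ...4s2 3d4; the measured Cr ground state is [Ar] 3d5 4s1
```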
Note that these electron configurations are given for neutral atoms in the gas phase, which are not the same as the electron configurations for the same atoms in chemical environments. In many cases, multiple configurations are within a small range of energies and the irregularities shown below do not necessarily have a clear relation to chemical behaviour. For the undiscovered eighth-row elements, mixing of configurations is expected to be very important, and sometimes the result can no longer be well-described by a single configuration.
See also
Extended periodic table#Electron configurations – Predictions for undiscovered elements 119–173 and 184
References
All sources concur with the data above except in the instances listed separately:
NIST
http://physics.nist.gov/PhysRefData/IonEnergy/ionEnergy.html ; retrieved July 2005, (elements 1–104) based on:
Atomic Spectroscopy, by W.C. Martin and W.L. Wiese in Atomic, Molecular, & Optical Physics Handbook, ed. by G.W.F. Drake (AIP, Woodbury, NY, 1996) Chapter 10, pp. 135–153.
This website is also cited in the CRC Handbook as source of Section 1, subsection Electron Configuration of Neutral Atoms in the Ground State.
91 Pa : [Rn] 5f2(3H4) 6d 7s2
92 U : [Rn] 5f3(4Io9/2) 6d 7s2
93 Np : [Rn] 5f4(5I4) 6d 7s2
103 Lr : [Rn] 5f14 7s2 7p1 question-marked
104 Rf : [Rn] 5f14 6d2 7s2 question-marked
CRC
David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition, online version. CRC Press. Boca Raton, Florida, 2003; Section 1, Basic Constants, Units, and Conversion Factors; Electron Configuration of Neutral Atoms in the Ground State. (elements 1–104)
Also subsection Periodic Table of the Elements, (elements 1–103) based on:
G. J. Leigh, Editor, Nomenclature of Inorganic Chemistry, Blackwell Scientific Publications, Oxford, 1990.
Chemical and Engineering News, 63(5), 27, 1985.
Atomic Weights of the Elements, 1999, Pure Appl. Chem., 73, 667, 2001.
WebElements
http://www.webelements.com/ ; retrieved July 2005, electron configurations based on:
Atomic, Molecular, & Optical Physics Handbook, Ed. Gordon W. F. Drake, American Institute of Physics, Woodbury, New York, 1996.
J.E. Huheey, E.A. Keiter, and R.L. Keiter in Inorganic Chemistry : Principles of Structure and Reactivity, 4th edition, Harper Collins, New York, 1993.
R.L. DeKock and H.B. Gray in Chemical Structure and bonding, Benjamin/Cummings, Menlo Park, California, 1980.
A.M. James and M.P. Lord in Macmillan's Chemical and Physical Data, Macmillan, London, UK, 1992.
103 Lr : [Rn].5f14.7s2.7p1 tentative ; 2.8.18.32.32.9.2 [inconsistent]
104 Rf : [Rn].5f14.6d2.7s2 tentative
105 Db : [Rn].5f14.6d3.7s2 (a guess based upon that of tantalum) ; 2.8.18.32.32.11.2
106 Sg : [Rn].5f14.6d4.7s2 (a guess based upon that of tungsten) ; 2.8.18.32.32.12.2
107 Bh : [Rn].5f14.6d5.7s2 (a guess based upon that of rhenium) ; 2.8.18.32.32.13.2
108 Hs : [Rn].5f14.6d6.7s2 (a guess based upon that of osmium) ; 2.8.18.32.32.14.2
109 Mt : [Rn].5f14.6d7.7s2 (a guess based upon that of iridium) ; 2.8.18.32.32.15.2
110 Ds : [Rn].5f14.6d9.7s1 (a guess based upon that of platinum) ; 2.8.18.32.32.17.1
111 Rg : [Rn].5f14.6d10.7s1 (a guess based upon that of gold) ; 2.8.18.32.32.18.1
112 Cn : [Rn].5f14.6d10.7s2 (a guess based upon that of mercury) ; 2.8.18.32.32.18.2
113 Nh : [Rn].5f14.6d10.7s2.7p1 (a guess based upon that of thallium) ; 2.8.18.32.32.18.3
114 Fl : [Rn].5f14.6d10.7s2.7p2 (a guess based upon that of lead) ; 2.8.18.32.32.18.4
115 Mc : [Rn].5f14.6d10.7s2.7p3 (a guess based upon that of bismuth) ; 2.8.18.32.32.18.5
116 Lv : [Rn].5f14.6d10.7s2.7p4 (a guess based upon that of polonium) ; 2.8.18.32.32.18.6
117 Ts : [Rn].5f14.6d10.7s2.7p5 (a guess based upon that of astatine) ; 2.8.18.32.32.18.7
118 Og : [Rn].5f14.6d10.7s2.7p6 (a guess based upon that of radon) ; 2.8.18.32.32.18.8
Lange
J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), online version, McGraw-Hill, 1999; Section 4, Table 4.1 Electronic Configuration and Properties of the Elements. (Elements 1–103)
97 Bk : [Rn] 5f8 6d 7s2
103 Lr : [Rn] 4f14 [sic] 6d 7s2
Hill and Petrucci
Hill and Petrucci, General Chemistry: An Integrated Approach (3rd edition), Prentice Hall. (Elements 1–106)
58 Ce : [Xe] 4f2 6s2
103 Lr : [Rn] 5f14 6d1 7s2
104 Rf : [Rn] 5f14 6d2 7s2 (agrees with guess above)
105 Db : [Rn] 5f14 6d3 7s2
106 Sg : [Rn] 5f14 6d4 7s2
Hoffman, Lee, and Pershina
This book contains predicted electron configurations for the elements up to 172, as well as 184, based on relativistic Dirac–Fock calculations by B. Fricke in
Chemical element data pages | Electron configurations of the elements (data page) | [
"Physics",
"Chemistry"
] | 1,791 | [
"Chemical element data pages",
"Atoms",
"Matter",
"Chemical data pages"
] |
2,265,023 | https://en.wikipedia.org/wiki/Field%20emission%20gun | A field emission gun (FEG) is a type of electron gun in which a sharply pointed Müller-type emitter is held at several kilovolts negative potential relative to a nearby electrode, so that there is sufficient potential gradient at the emitter surface to cause field electron emission. Emitters are either of cold-cathode type, usually made of single crystal tungsten sharpened to a tip radius of about 100 nm, or of the Schottky type, in which thermionic emission is enhanced by barrier lowering in the presence of a high electric field. Schottky emitters are made by coating a tungsten tip with a layer of zirconium oxide (ZrO2), which decreases the work function of the tip to approximately 2.7 eV.
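For a rough sense of the barrier lowering mentioned above, the standard image-force (Schottky) expression, lowering = sqrt(e F / 4 pi eps0) in eV, can be evaluated for surface fields of the order reached at sharp tips; the snippet below is illustrative only, and the field values are assumptions rather than figures from the article.

```python
import math

e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

def schottky_lowering_eV(field_V_per_m):
    """Image-force (Schottky) barrier lowering in eV for a given surface field."""
    return math.sqrt(e * field_V_per_m / (4 * math.pi * eps0))

for F in (1e8, 5e8, 1e9):   # assumed fields of 0.1-1 V/nm near a sharp tip
    print(f"{F:.0e} V/m -> {schottky_lowering_eV(F):.2f} eV")   # about 0.4-1.2 eV
```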
In electron microscopes, a field emission gun is used to produce an electron beam that is smaller in diameter, more coherent and with up to three orders of magnitude greater current density or brightness than can be achieved with conventional thermionic emitters such as tungsten or lanthanum hexaboride ()-tipped filaments. The result in both scanning and transmission electron microscopy is significantly improved signal-to-noise ratio and spatial resolution, and greatly increased emitter life and reliability compared with thermionic devices.
References
Vacuum tubes
Tungsten | Field emission gun | [
"Physics"
] | 270 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
2,265,029 | https://en.wikipedia.org/wiki/Ion%20wind | Ion wind, ionic wind, corona wind or electric wind is the airflow of charged particles induced by electrostatic forces linked to corona discharge arising at the tips of some sharp conductors (such as points or blades) subjected to high voltage relative to ground. Ion wind is an electrohydrodynamic phenomenon. Ion wind generators can also be considered electrohydrodynamic thrusters.
The term "ionic wind" is considered a misnomer due to misconceptions that only positive and negative ions were primarily involved in the phenomenon. A 2018 study found that electrons play a larger role than negative ions during the negative voltage period. As a result, the term "electric wind" has been suggested as a more accurate terminology.
This phenomenon is now used in an MIT ionic wind plane, the first solid-state plane, developed in 2018.
History
In 1750, B. Wilson demonstrated the recoil force associated with corona discharge; a precursor to the ion thruster was the corona discharge pinwheel. The corona discharge from the freely rotating pinwheel arm, with its ends bent to sharp points, gives the air a space charge, which repels the points because the polarity is the same for the points and the air.
Francis Hauksbee, curator of instruments for the Royal Society of London, made the earliest report of electric wind in 1709. Myron Robinson completed an extensive bibliography and literature review during the 1950s resurgence of interest in the phenomena.
In 2018, researchers from South Korea and Slovenia used Schlieren photography to experimentally determine that electrons and ions play an important role in generating ionic wind. The study was the first to provide direct evidence that the electrohydrodynamic force responsible for the ionic wind is caused by a charged particle drag that occurs as the electrons and ions push the neutral particles away.
In 2018, a team of MIT researchers built and successfully flew the first-ever prototype plane propelled by ionic wind, MIT EAD Airframe Version 2.
Mechanism
Net electric charges on conductors, including local charge distributions associated with dipoles, reside entirely on their external surface (see Faraday cage) and tend to concentrate more around sharp points and edges than on flat surfaces. This means that the electric field generated by charges on a sharp conductive point is much stronger than the field generated by the same charge residing on a large, smooth, spherical conductive shell. When this electric field strength exceeds what is known as the corona discharge inception voltage (CIV) gradient, it ionizes the air about the tip, and a small faint purple jet of plasma can be seen in the dark on the conductive tip. Ionization of the nearby air molecules results in the generation of ionized air molecules having the same polarity as that of the charged tip. Subsequently, the tip repels the like-charged ion cloud, which immediately expands due to the repulsion between the ions themselves. This repulsion of ions creates an electric "wind" that emanates from the tip, usually accompanied by a hissing noise due to the change in air pressure at the tip. An equal and opposite force acts on the tip, which may recoil if it is not firmly anchored.
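As a back-of-the-envelope illustration (not taken from the article), a commonly used one-dimensional estimate for the thrust produced by such a corona discharge between electrodes a distance d apart carrying current I is T = I d / mu, where mu is the ion mobility in air; the numbers below are assumed for illustration only.

```python
def ehd_thrust(current_A, gap_m, mobility_m2_per_Vs=2e-4):
    """One-dimensional electrohydrodynamic thrust estimate T = I*d/mu."""
    return current_A * gap_m / mobility_m2_per_Vs

# Hypothetical example: 1 mA of corona current across a 10 cm electrode gap
print(ehd_thrust(1e-3, 0.1))   # ~0.5 N
```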
A vaneless ion wind generator performs the inverse function, using ambient wind to move ions, which are collected, yielding electrical energy.
See also
Air ioniser
Ion thruster
Ionocraft
Plasma actuator
Hall-effect thruster
Magnetohydrodynamic drive
Magnetoplasmadynamic thruster
Pulsed inductive thruster
Field-emission electric propulsion
Spacecraft propulsion
References
External links
The Man Who Mastered Gravity (Townsend Brown Biography) by Paul Schatzkin; 2023 Incorrigible Arts,
Plasma propulsion in space
Plasma phenomena
Spacecraft propulsion
Electrostatics | Ion wind | [
"Physics"
] | 749 | [
"Plasma phenomena",
"Physical phenomena",
"Plasma physics"
] |
13,606,026 | https://en.wikipedia.org/wiki/Shell%20balance | In fluid mechanics, a shell balance can be used to determine the velocity profile of a moving fluid, i.e., how fluid velocity changes with position across a flow cross section.
A "shell" is a differential element of the flow. By looking at the momentum and forces on one small portion, it is possible to integrate over the flow to see the larger picture of the flow as a whole. The balance is determining what goes into and out of the shell. Momentum is created within the shell through fluid entering and leaving the shell and by shear stress. In addition, there are pressure and gravitational forces on the shell. From this, it is possible to find a velocity for any point across the flow.
Applications
Shell balances can be used in many situations. For example, flow in a pipe, the flow of multiple fluids around each other, or flow due to pressure difference. Although terms in the shell balance and boundary conditions will change, the basic set up and process is the same.
Requirements for shell balance calculations
The fluid must exhibit:
Laminar flow
No bends or curves
Steady state
Two boundary conditions
Boundary Conditions are used to find constants of integration.
Fluid - Solid Boundary: No-slip condition, the velocity of a liquid at a solid is equal to the velocity of the solid.
Liquid - Gas Boundary: Shear stress = 0.
Liquid - Liquid Boundary: Equal velocity and shear stress on both liquids.
Performing shell balances
A fluid is flowing between and in contact with two horizontal surfaces of contact area A. A differential shell of height Δy is utilized (see diagram below).
The top surface is moving at velocity U and the bottom surface is stationary.
Density of fluid = ρ
Viscosity of fluid = μ
Velocity in x direction = Vx, shown by the diagonal line above. This is what a shell balance is solving for.
Conservation of momentum
(Rate of momentum in) - (rate of momentum out) + (sum of all forces) = 0
To perform a shell balance, follow these basic steps (a worked sketch in code is given after the boundary conditions below):
Find momentum from shear stress: (momentum from shear stress into system) - (momentum from shear stress out of system). Momentum from shear stress goes into the shell at y and leaves the system at y + Δy. Shear stress = τyx, area = A, momentum = τyxA.
Find momentum from the flow. Momentum flows into the system at x = 0 and out at x = L. The flow is steady state. Therefore, the momentum flow at x = 0 is equal to the moment of flow at x = L. Therefore, these cancel out.
Find gravity force on the shell.
Find pressure forces.
Plug into conservation of momentum and solve for τyx.
Apply Newton's law of viscosity for a Newtonian fluid: τyx = -μ(dVx/dy).
Integrate to find the equation for velocity and use Boundary Conditions to find constants of integration.
Boundary 1: Top Surface: y = 0 and Vx = U
Boundary 2: Bottom Surface: y = D and Vx = 0
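A minimal symbolic sketch of the worked example above, assuming no pressure gradient and no x-component of gravity (so the momentum balance reduces to a constant shear stress); the code and variable names are mine, not from the source.

```python
import sympy as sp

y, U, D, mu = sp.symbols('y U D mu', positive=True)
C1, C2 = sp.symbols('C1 C2')

# Shell balance with no pressure gradient and no x-component of gravity:
# d(tau_yx)/dy = 0, and Newton's law tau_yx = -mu*dVx/dy gives d^2Vx/dy^2 = 0,
# so the velocity profile is linear: Vx = C1*y + C2.
Vx = C1*y + C2
consts = sp.solve([sp.Eq(Vx.subs(y, 0), U),    # top surface (y = 0) moves at U
                   sp.Eq(Vx.subs(y, D), 0)],   # bottom surface (y = D) is stationary
                  [C1, C2])
Vx = sp.simplify(Vx.subs(consts))
tau = sp.simplify(-mu * sp.diff(Vx, y))
print(Vx)    # U*(D - y)/D, i.e. U*(1 - y/D): the linear velocity profile
print(tau)   # U*mu/D: the shear stress is constant across the gap
```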
Resources
Fluid mechanics | Shell balance | [
"Engineering"
] | 624 | [
"Civil engineering",
"Fluid mechanics"
] |
13,609,180 | https://en.wikipedia.org/wiki/Hantzsch%20pyrrole%20synthesis | The Hantzsch Pyrrole Synthesis, named for Arthur Rudolf Hantzsch, is the chemical reaction of β-ketoesters (1) with ammonia (or primary amines) and α-haloketones (2) to give substituted pyrroles (3).
Pyrroles are found in a variety of natural products with biological activity, so the synthesis of substituted pyrroles has important applications in medicinal chemistry. Alternative methods for synthesizing pyrroles exist, such as the Knorr Pyrrole Synthesis and Paal-Knorr Synthesis.
Mechanism
Below is one published mechanism for the reaction:
The mechanism starts with the amine (1) attacking the β carbon of the β-ketoesters (2), and eventually forming an enamine (3). The enamine then attacks the carbonyl carbon of the α-haloketone (4). This is followed by the loss of H2O, giving an imine (5). This intermediate undergoes an intramolecular nucleophilic attack, forming a 5-membered ring (6). Finally, a hydrogen is eliminated and the pi-bonds are rearranged in the ring, yielding the final product (7).
An alternative mechanism has been proposed in which the enamine (3) attacks the α-carbon of the α-haloketone (4) as part of a nucleophilic substitution, instead of attacking the carbonyl carbon.
Generalized Reaction Under Mechanochemical Conditions
A generalization of the Hantzsch pyrrole synthesis was developed by Estevez, et al. In this reaction highly substituted pyrroles can be synthesized in a one-pot reaction, with relatively high yields (60% - 97%). This reaction involves the high-speed vibration milling (HSVM) of ketones with N-iodosuccinimide (NIS) and p-toluenesulfonic acid, to form an α-iodoketone in situ. This is followed by addition of a primary amine, a β-dicarbonyl compound, cerium(IV) ammonium nitrate (CAN) and silver nitrate, as shown in the scheme below:
Applications
2,3-dicarbonylated pyrroles
2,3-dicarbonylated pyrroles can be synthesized by a version of the Hantzsch Pyrrole Synthesis. These pyrroles are particularly useful for total synthesis because the carbonyl groups can be converted into a variety of other functional groups.
Substituted indoles
The reaction can also occur between an enamine and an α-haloketone to synthesize substituted indoles, which also have biological significance.
Continuous flow chemistry
A library of substituted pyrrole analogs can be quickly produced by using continuous flow chemistry (reaction times of around 8 min.). The advantage of using this method, as opposed to the in-flask synthesis, is that this one does not require the work-up and purification of several intermediates, and could therefore lead to a higher percent yield.
See also
Hantzsch pyridine synthesis
References
Pyrroles
Chemical synthesis
Name reactions | Hantzsch pyrrole synthesis | [
"Chemistry"
] | 664 | [
"Name reactions",
"nan",
"Chemical synthesis"
] |
13,609,399 | https://en.wikipedia.org/wiki/Least-squares%20spectral%20analysis | Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in the long and gapped records; LSSA mitigates such problems. Unlike in Fourier analysis, data need not be equally spaced to use LSSA.
Developed in 1969 and 1971, LSSA is also known as the Vaníček method and the Gauss-Vaniček method after Petr Vaníček, and as the Lomb method or the Lomb–Scargle periodogram, based on the simplifications first by Nicholas R. Lomb and then by Jeffrey D. Scargle.
Historical background
The close connections between Fourier analysis, the periodogram, and the least-squares fitting of sinusoids have been known for a long time.
However, most developments are restricted to complete data sets of equally spaced samples. In 1963, Freek J. M. Barning of Mathematisch Centrum, Amsterdam, handled unequally spaced data by similar techniques, including both a periodogram analysis equivalent to what nowadays is called the Lomb method and least-squares fitting of selected frequencies of sinusoids determined from such periodograms — and connected by a procedure known today as the matching pursuit with post-back fitting or the orthogonal matching pursuit.
Petr Vaníček, a Canadian geophysicist and geodesist of the University of New Brunswick, proposed in 1969 also the matching-pursuit approach for equally and unequally spaced data, which he called "successive spectral analysis" and the result a "least-squares periodogram". He generalized this method to account for any systematic components beyond a simple mean, such as a "predicted linear (quadratic, exponential, ...) secular trend of unknown magnitude", and applied it to a variety of samples, in 1971.
Vaníček's strictly least-squares method was then simplified in 1976 by Nicholas R. Lomb of the University of Sydney, who pointed out its close connection to periodogram analysis. Subsequently, the definition of a periodogram of unequally spaced data was modified and analyzed by Jeffrey D. Scargle of NASA Ames Research Center, who showed that, with minor changes, it becomes identical to Lomb's least-squares formula for fitting individual sinusoid frequencies.
Scargle states that his paper "does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times are unevenly spaced," and further points out regarding least-squares fitting of sinusoids compared to periodogram analysis, that his paper "establishes, apparently for the first time, that (with the proposed modifications) these two methods are exactly equivalent."
Press summarizes the development this way:
In 1989, Michael J. Korenberg of Queen's University in Kingston, Ontario, developed the "fast orthogonal search" method of more quickly finding a near-optimal decomposition of spectra or other problems, similar to the technique that later became known as the orthogonal matching pursuit.
Development of LSSA and variants
The Vaníček method
In the Vaníček method, a discrete data set is approximated by a weighted sum of sinusoids of progressively determined frequencies using a standard linear regression or least-squares fit. The frequencies are chosen using a method similar to Barning's, but going further in optimizing the choice of each successive new frequency by picking the frequency that minimizes the residual after least-squares fitting (equivalent to the fitting technique now known as matching pursuit with pre-backfitting). The number of sinusoids must be less than or equal to the number of data samples (counting sines and cosines of the same frequency as separate sinusoids).
A data vector Φ is represented as a weighted sum of sinusoidal basis functions, tabulated in a matrix A by evaluating each function at the sample times, with weight vector x:
Φ ≈ Ax,
where the weights vector x is chosen to minimize the sum of squared errors in approximating Φ. The solution for x is closed-form, using standard linear regression:
x = (AᵀA)⁻¹AᵀΦ.
Here the matrix A can be based on any set of functions mutually independent (not necessarily orthogonal) when evaluated at the sample times; functions used for spectral analysis are typically sines and cosines evenly distributed over the frequency range of interest. If we choose too many frequencies in a too-narrow frequency range, the functions will be insufficiently independent, the matrix ill-conditioned, and the resulting spectrum meaningless.
When the basis functions in A are orthogonal (that is, not correlated, meaning the columns have zero pair-wise dot products), the matrix ATA is diagonal; when the columns all have the same power (sum of squares of elements), then that matrix is an identity matrix times a constant, so the inversion is trivial. The latter is the case when the sample times are equally spaced and sinusoids chosen as sines and cosines equally spaced in pairs on the frequency interval 0 to a half cycle per sample (spaced by 1/N cycles per sample, omitting the sine phases at 0 and maximum frequency where they are identically zero). This case is known as the discrete Fourier transform, slightly rewritten in terms of measurements and coefficients.
x = AᵀΦ — DFT case for N equally spaced samples and frequencies, within a scalar factor.
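A minimal numpy sketch of this simultaneous (in-context) least-squares fit for arbitrarily spaced sample times; the function and variable names are mine, and the chosen frequencies must be few and well separated enough that the matrix stays well conditioned, as noted above.

```python
import numpy as np

def lssa_spectrum(t, x, freqs):
    """Least-squares fit of one sine/cosine pair per frequency to samples x(t)."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    # design matrix A: a cosine and a sine column per frequency, evaluated at the sample times;
    # requires 2*len(freqs) <= len(t) for the fit to be well posed
    A = np.hstack([np.column_stack((np.cos(2*np.pi*f*t),
                                    np.sin(2*np.pi*f*t))) for f in freqs])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # least-squares weight vector
    c, s = coef[0::2], coef[1::2]
    return c**2 + s**2      # power attributed to each chosen frequency
```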
The Lomb method
Trying to lower the computational burden of the Vaníček method in 1976 (no longer an issue), Lomb proposed using the above simplification in general, except for pair-wise correlations between sine and cosine bases of the same frequency, since the correlations between pairs of sinusoids are often small, at least when they are not tightly spaced. This formulation is essentially that of the traditional periodogram but adapted for use with unevenly spaced samples. The vector x is a reasonably good estimate of an underlying spectrum, but since we ignore any correlations, Ax is no longer a good approximation to the signal, and the method is no longer a least-squares method — yet in the literature continues to be referred to as such.
Rather than just taking dot products of the data with sine and cosine waveforms directly, Scargle modified the standard periodogram formula so as first to find a time delay τ such that this pair of sinusoids would be mutually orthogonal at the sample times tj, and also adjusted for the potentially unequal powers of these two basis functions, to obtain a better estimate of the power at a frequency. This procedure made his modified periodogram method exactly equivalent to Lomb's method. The time delay τ is defined by
tan(2ωτ) = (Σj sin 2ωtj) / (Σj cos 2ωtj).
Then the periodogram at frequency ω is estimated as:
Px(ω) = (1/2) { [Σj Xj cos ω(tj − τ)]² / Σj cos² ω(tj − τ) + [Σj Xj sin ω(tj − τ)]² / Σj sin² ω(tj − τ) },
which, as Scargle reports, has the same statistical distribution as the periodogram in the evenly sampled case.
At any individual frequency ω, this method gives the same power as does a least-squares fit to sinusoids of that frequency and of the form
x(t) = A cos ωt + B sin ωt.
In practice, it is always difficult to judge if a given Lomb peak is significant or not, especially when the nature of the noise is unknown, so for example a false-alarm spectral peak in the Lomb periodogram analysis of noisy periodic signal may result from noise in turbulence data. Fourier methods can also report false spectral peaks when analyzing patched-up or data edited otherwise.
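The Lomb/Scargle estimate above translates almost directly into code. The sketch below is illustrative (the function and variable names are mine); an equivalent routine is also available in common libraries such as scipy.signal.lombscargle.

```python
import numpy as np

def lomb_scargle_power(t, x, omegas):
    """Lomb/Scargle periodogram of zero-mean data x sampled at arbitrary times t."""
    t = np.asarray(t, float)
    x = np.asarray(x, float) - np.mean(x)
    powers = []
    for w in np.atleast_1d(omegas):
        # time delay that makes the shifted sine and cosine bases orthogonal
        tau = np.arctan2(np.sum(np.sin(2*w*t)), np.sum(np.cos(2*w*t))) / (2*w)
        c, s = np.cos(w*(t - tau)), np.sin(w*(t - tau))
        powers.append(0.5*(np.dot(x, c)**2 / np.dot(c, c) +
                           np.dot(x, s)**2 / np.dot(s, s)))
    return np.array(powers)

# example: recover a 0.7 Hz tone from irregular, noisy samples
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 50, 200))
x = np.sin(2*np.pi*0.7*t) + 0.5*rng.standard_normal(200)
freqs = np.linspace(0.05, 2.0, 400)
power = lomb_scargle_power(t, x, 2*np.pi*freqs)
print(freqs[np.argmax(power)])   # close to 0.7
```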
The generalized Lomb–Scargle periodogram
The standard Lomb–Scargle periodogram is only valid for a model with a zero mean. Commonly, this is approximated by subtracting the mean of the data before calculating the periodogram. However, this is an inaccurate assumption when the mean of the model (the fitted sinusoids) is non-zero. The generalized Lomb–Scargle periodogram removes this assumption and explicitly solves for the mean. In this case, the function fitted is
y(t) = a cos ωt + b sin ωt + c.
The generalized Lomb–Scargle periodogram has also been referred to in the literature as a floating mean periodogram.
Korenberg's "fast orthogonal search" method
Michael Korenberg of Queen's University in Kingston, Ontario, developed a method for choosing a sparse set of components from an over-complete set — such as sinusoidal components for spectral analysis — called the fast orthogonal search (FOS). Mathematically, FOS uses a slightly modified Cholesky decomposition in a mean-square error reduction (MSER) process, implemented as a sparse matrix inversion. As with the other LSSA methods, FOS avoids the major shortcoming of discrete Fourier analysis, so it can accurately identify embedded periodicities and excel with unequally spaced data. The fast orthogonal search method was also applied to other problems, such as nonlinear system identification.
Palmer's Chi-squared method
Palmer has developed a method for finding the best-fit function to any chosen number of harmonics, allowing more freedom to find non-sinusoidal harmonic functions.
His is a fast (FFT-based) technique for weighted least-squares analysis on arbitrarily spaced data with non-uniform standard errors. Source code that implements this technique is available.
Because data are often not sampled at uniformly spaced discrete times, this method "grids" the data by sparsely filling a time series array at the sample times. All intervening grid points receive zero statistical weight, equivalent to having infinite error bars at times between samples.
Applications
The most useful feature of LSSA is enabling incomplete records to be spectrally analyzed — without the need to manipulate data or to invent otherwise non-existent data.
Magnitudes in the LSSA spectrum depict the contribution of a frequency or period to the variance of the time series. Generally, spectral magnitudes thus defined enable the output's straightforward significance level regime. Alternatively, spectral magnitudes in the Vaníček spectrum can also be expressed in dB. Note that spectral magnitudes in the Vaníček spectrum follow β-distribution.
Inverse transformation of Vaníček's LSSA is possible, as is most easily seen by writing the forward transform as a matrix; the matrix inverse (when the matrix is not singular) or pseudo-inverse will then be an inverse transformation; the inverse will exactly match the original data if the chosen sinusoids are mutually independent at the sample points and their number is equal to the number of data points. No such inverse procedure is known for the periodogram method.
Implementation
The LSSA can be implemented in less than a page of MATLAB code. In essence:
"to compute the least-squares spectrum we must compute m spectral values ... which involves performing the least-squares approximation m times, each time to get [the spectral power] for a different frequency"
I.e., for each frequency in a desired set of frequencies, sine and cosine functions are evaluated at the times corresponding to the data samples, and dot products of the data vector with the sinusoid vectors are taken and appropriately normalized; following the method known as Lomb/Scargle periodogram, a time shift is calculated for each frequency to orthogonalize the sine and cosine components before the dot product; finally, a power is computed from those two amplitude components. This same process implements a discrete Fourier transform when the data are uniformly spaced in time and the frequencies chosen correspond to integer numbers of cycles over the finite data record.
This method treats each sinusoidal component independently, or out of context, even though they may not be orthogonal to data points; it is Vaníček's original method. In addition, it is possible to perform a full simultaneous or in-context least-squares fit by solving a matrix equation and partitioning the total data variance between the specified sinusoid frequencies. Such a matrix least-squares solution is natively available in MATLAB as the backslash operator.
Furthermore, the simultaneous or in-context method, as opposed to the independent or out-of-context version (as well as the periodogram version due to Lomb), cannot fit more components (sines and cosines) than there are data samples.
Lomb's periodogram method, on the other hand, can use an arbitrarily high number of, or density of, frequency components, as in a standard periodogram; that is, the frequency domain can be over-sampled by an arbitrary factor. However, as mentioned above, one should keep in mind that Lomb's simplification and diverging from the least squares criterion opened up his technique to grave sources of errors, resulting even in false spectral peaks.
In Fourier analysis, such as the Fourier transform and discrete Fourier transform, the sinusoids fitted to data are all mutually orthogonal, so there is no distinction between the simple out-of-context dot-product-based projection onto basis functions versus an in-context simultaneous least-squares fit; that is, no matrix inversion is required to least-squares partition the variance between orthogonal sinusoids of different frequencies. In the past, Fourier's was for many a method of choice thanks to its processing-efficient fast Fourier transform implementation when complete data records with equally spaced samples are available, and they used the Fourier family of techniques to analyze gapped records as well, which, however, required manipulating and even inventing non-existent data just so to be able to run a Fourier-based algorithm.
See also
Non-uniform discrete Fourier transform
Orthogonal functions
SigSpec
Sinusoidal model
Spectral density
Spectral density estimation, for competing alternatives
References
External links
LSSA package freeware download, FORTRAN, Vaníček's least-squares spectral analysis method, from the University of New Brunswick.
LSWAVE package freeware download, MATLAB, includes the Vaníček's least-squares spectral analysis method, from the U.S. National Geodetic Survey.
Algorithms
Analysis of variance
Applied mathematics
Applied statistics
Carl Friedrich Gauss
Computational mathematics
Computational science
Data processing
Digital signal processing
Engineering statistics
Frequency
Frequency-domain analysis
Harmonic analysis
Iterative methods
Least squares
Linear algebra
Mathematical analysis
Mathematical optimization
Mathematical physics
Mathematics of computing
Multivariate statistics
Numerical analysis
Numerical linear algebra
Optimization algorithms and methods
Signal processing
Statistical forecasting
Statistical methods
Statistical signal processing
Stochastic processes
Theoretical computer science
Time series | Least-squares spectral analysis | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 2,950 | [
"Physical quantities",
"Computer engineering",
"Theoretical computer science",
"Linear algebra",
"Mathematical analysis",
"Applied mathematics",
"Mathematical logic",
"Computational science",
"Wikipedia categories named after physical quantities",
"Approximations",
"Algebra",
"Scalar physical ... |
11,060,531 | https://en.wikipedia.org/wiki/Electron%20beam%20ion%20trap | Electron beam ion trap (EBIT) is an electromagnetic bottle that produces and confines highly charged ions. An EBIT uses an electron beam focused with a powerful magnetic field to ionize atoms to high charge states by successive electron impact.
It was invented by M. Levine and R. Marrs at LLNL and LBNL.
Operation
The positive ions produced in the region where the atoms intercept the electron beam are tightly confined in their motion by the strong attraction exerted by the negative charge of the electron beam. Therefore, they orbit around the electron beam, crossing it frequently and giving rise to further collisions and ionization. To restrict the ion motion along the direction of the electron beam axis, trapping electrodes carrying positive voltages with respect to a central electrode are used.
The resulting ion trap can hold ions for many seconds and minutes, and conditions for reaching the highest charge states, up to bare uranium (U92+) can be achieved in this way.
The strong space charge needed for radial confinement of the ions requires large electron beam currents of tens to hundreds of milliamperes. At the same time, high voltages (up to 200 kilovolts) are used for accelerating the electrons in order to achieve high charge states of the ions.
To avoid charge reduction of ions by collisions with neutral atoms from which they can capture electrons, the vacuum in the apparatus is usually maintained at UHV levels, with typical pressure values of only 10−12 torr, (~10−10 pascal).
Applications
EBITs are used to investigate the fundamental properties of highly charged ions e. g. by photon spectroscopy in particular in the context of relativistic atomic structure theory and quantum electrodynamics (QED). Their suitability to prepare and reproduce in a microscopic volume the conditions of high temperature astrophysical plasmas and magnetic confinement fusion plasmas make them very appropriate research tools. Other fields include the study of their interactions with surfaces and possible applications to microlithography.
References
External links
Concepts in astrophysics
Atomic physics
Electromagnetism
Electron beam
American inventions
Particle traps | Electron beam ion trap | [
"Physics",
"Chemistry"
] | 429 | [
"Electron",
"Physical phenomena",
"Electromagnetism",
"Molecular physics",
"Concepts in astrophysics",
"Electron beam",
"Quantum mechanics",
"Astrophysics",
"Particle traps",
"Atomic physics",
" molecular",
"Fundamental interactions",
"Atomic",
" and optical physics"
] |
11,063,933 | https://en.wikipedia.org/wiki/Rule%20184 | Rule 184 is a one-dimensional binary cellular automaton rule, notable for solving the majority problem as well as for its ability to simultaneously describe several, seemingly quite different, particle systems:
Rule 184 can be used as a simple model for traffic flow in a single lane of a highway, and forms the basis for many cellular automaton models of traffic flow with greater sophistication. In this model, particles (representing vehicles) move in a single direction, stopping and starting depending on the cars in front of them. The number of particles remains unchanged throughout the simulation. Because of this application, Rule 184 is sometimes called the "traffic rule".
Rule 184 also models a form of deposition of particles onto an irregular surface, in which each local minimum of the surface is filled with a particle in each step. At each step of the simulation, the number of particles increases. Once placed, a particle never moves.
Rule 184 can be understood in terms of ballistic annihilation, a system of particles moving both leftwards and rightwards through a one-dimensional medium. When two such particles collide, they annihilate each other, so that at each step the number of particles remains unchanged or decreases.
The apparent contradiction between these descriptions is resolved by different ways of associating features of the automaton's state with particles.
The name of Rule 184 is a Wolfram code that defines the evolution of its states. The earliest research on Rule 184 dates to the late 1980s; in particular, Krug and Spohn already describe all three types of particle system modeled by Rule 184.
Definition
A state of the Rule 184 automaton consists of a one-dimensional array of cells, each containing a binary value (0 or 1). In each step of its evolution, the Rule 184 automaton applies the following rule to each of the cells in the array, simultaneously for all cells, to determine the new state of the cell:
current pattern:            111 110 101 100 011 010 001 000
new state for center cell:   1   0   1   1   1   0   0   0
An entry in this table defines the new state of each cell as a function of the previous state and the previous values of the neighboring cells on either side.
The name for this rule, Rule 184, is the Wolfram code describing the state table above: the bottom row of the table, 10111000, when viewed as a binary number, is equal to the decimal number 184.
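A minimal Python sketch of this update rule (not from the article; the helper names are mine). It builds the lookup table directly from the Wolfram code 184 and applies synchronous steps with periodic boundary conditions, checking the number-conservation property discussed below.

```python
import numpy as np

# Rule 184 lookup: map each 3-cell neighborhood (left, center, right),
# read as a binary number, to the new center state; 184 = 0b10111000.
RULE = np.array([(184 >> k) & 1 for k in range(8)])   # RULE[0b000] ... RULE[0b111]

def step(cells):
    """One synchronous update of Rule 184 with periodic boundary conditions."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    return RULE[4*left + 2*cells + right]

rng = np.random.default_rng(1)
row = (rng.random(40) < 0.3).astype(int)      # ~30% occupied cells ("cars")
for _ in range(5):
    print("".join(".#"[v] for v in row))
    assert row.sum() == step(row).sum()       # the number of 1s is conserved
    row = step(row)
```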
The rule set for Rule 184 may also be described intuitively, in several different ways:
At each step, whenever there exists in the current state a 1 immediately followed by a 0, these two symbols swap places. Based on this description, Rule 184 has been called a deterministic version of a "kinetic Ising model with asymmetric spin-exchange dynamics".
At each step, if a cell with value 1 has a cell with value 0 immediately to its right, the 1 moves rightwards leaving a 0 behind. A 1 with another 1 to its right remains in place, while a 0 that does not have a 1 to its left stays a 0. This description is most apt for the application to traffic flow modeling.
If a cell has state 0, its new state is taken from the cell to its left. Otherwise, its new state is taken from the cell to its right. That is, each cell can be implemented by a two-way demultiplexer with the two adjacent cells being inputs, and the cell itself acting as the selector line. Each cell's next state is determined by the demultiplexer's output. This operation is closely related to a Fredkin gate.
Dynamics and majority classification
From the descriptions of the rules above, two important properties of its dynamics may immediately be seen. First, in Rule 184, for any finite set of cells with periodic boundary conditions, the number of 1s and the number of 0s in a pattern remains invariant throughout the pattern's evolution. Rule 184 and its reflection are the only nontrivial elementary cellular automata to have this property of number conservation. Similarly, if the density of 1s is well-defined for an infinite array of cells, it remains invariant as the automaton carries out its steps. And second, although Rule 184 is not symmetric under left-right reversal, it does have a different symmetry: reversing left and right and at the same time swapping the roles of the 0 and 1 symbols produces a cellular automaton with the same update rule.
Patterns in Rule 184 typically quickly stabilize, either to a pattern in which the cell states move in lockstep one position leftwards at each step, or to a pattern that moves one position rightwards at each step. Specifically, if the initial density of cells with state 1 is less than 50%, the pattern stabilizes into clusters of cells in state 1, spaced two units apart, with the clusters separated by blocks of cells in state 0. Patterns of this type move rightwards. If, on the other hand, the initial density is greater than 50%, the pattern stabilizes into clusters of cells in state 0, spaced two units apart, with the clusters separated by blocks of cells in state 1, and patterns of this type move leftwards. If the density is exactly 50%, the initial pattern stabilizes (more slowly) to a pattern that can equivalently be viewed as moving either leftwards or rightwards at each step: an alternating sequence of 0s and 1s.
The majority problem is the problem of constructing a cellular automaton that, when run on any finite set of cells, can compute the value held by a majority of its cells.
In a sense, Rule 184 solves this problem, as follows. If Rule 184 is run on a finite set of cells with periodic boundary conditions, with an unequal number of 0s and 1s, then each cell will eventually see two consecutive states of the majority value infinitely often, but will see two consecutive states of the minority value only finitely many times. The majority problem cannot be solved perfectly if it is required that all cells eventually stabilize to the majority state, but the Rule 184 solution avoids this impossibility result by relaxing the criterion by which the automaton recognizes a majority.
Traffic flow
If one interprets each 1-cell in Rule 184 as containing a particle, these particles behave in many ways similarly to automobiles in a single lane of traffic: they move forward at a constant speed if there is open space in front of them, and otherwise they stop. Traffic models such as Rule 184 and its generalizations that discretize both space and time are commonly called particle-hopping models. Although very primitive, the Rule 184 model of traffic flow already predicts some of the familiar emergent features of real traffic: clusters of freely moving cars separated by stretches of open road when traffic is light, and waves of stop-and-go traffic when it is heavy.
It is difficult to pinpoint the first use of Rule 184 for traffic flow simulation, in part because the focus of research in this area has been less on achieving the greatest level of mathematical abstraction and more on verisimilitude: even the earlier papers on cellular automaton based traffic flow simulation typically make the model more complex in order to more accurately simulate real traffic. Nevertheless, Rule 184 is fundamental to traffic simulation by cellular automata. Some authors state, for instance, that "the basic cellular automaton model describing a one-dimensional traffic flow problem is rule 184," while another writes that "Much work using CA models for traffic is based on this model." Several authors describe one-dimensional models with vehicles moving at multiple speeds; such models degenerate to Rule 184 in the single-speed case. Others extend the Rule 184 dynamics to two-lane highway traffic with lane changes; their model shares with Rule 184 the property that it is symmetric under simultaneous left-right and 0-1 reversal. Still others describe a two-dimensional city grid model in which the dynamics of individual lanes of traffic is essentially that of Rule 184. For an in-depth survey of cellular automaton traffic modeling and associated statistical mechanics, see the review literature.
When viewing Rule 184 as a traffic model, it is natural to consider the average speed of the vehicles. When the density of traffic is less than 50%, this average speed is simply one unit of distance per unit of time: after the system stabilizes, no car ever slows. However, when the density is a number ρ greater than 1/2, the average speed of traffic is (1 − ρ)/ρ. Thus, the system exhibits a second-order kinetic phase transition at ρ = 1/2. When Rule 184 is interpreted as a traffic model, and started from a random configuration whose density is at this critical value ρ = 1/2, then the average speed approaches its stationary limit as the square root of the number of steps. Instead, for random configurations whose density is not at the critical value, the approach to the limiting speed is exponential.
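A quick numerical check of this limiting speed (illustrative only; it uses the demultiplexer form of the update from the definition section, and the density and system size are arbitrary choices of mine):

```python
import numpy as np

def step(cells):
    # Rule 184 via the demultiplexer description: a 0 copies its left
    # neighbour, a 1 copies its right neighbour (periodic boundaries).
    return np.where(cells == 0, np.roll(cells, 1), np.roll(cells, -1))

rho, length = 0.6, 2000
rng = np.random.default_rng(0)
cars = np.zeros(length, dtype=int)
cars[rng.choice(length, int(rho*length), replace=False)] = 1
for _ in range(4*length):                  # let the pattern stabilize
    cars = step(cars)
moving = np.sum((cars == 1) & (np.roll(cars, -1) == 0))   # cars with a free cell ahead
print(moving / cars.sum(), (1 - rho)/rho)                 # both close to 0.667
```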
Surface deposition
As shown in the figure, and as originally described by Krug and Spohn, Rule 184 may be used to model deposition of particles onto a surface. In this model, one has a set of particles that occupy a subset of the positions in a square lattice oriented diagonally (the darker particles in the figure). If a particle is present at some position of the lattice, the lattice positions below and to the right, and below and to the left of the particle must also be filled, so the filled part of the lattice extends infinitely downward to the left and right. The boundary between filled and unfilled positions (the thin black line in the figure) is interpreted as modeling a surface, onto which more particles may be deposited. At each time step, the surface grows by the deposition of new particles in each local minimum of the surface; that is, at each position where it is possible to add one new particle that has existing particles below it on both sides (the lighter particles in the figure).
To model this process by Rule 184, observe that the boundary between filled and unfilled lattice positions can be marked by a polygonal line, the segments of which separate adjacent lattice positions and have slopes +1 and −1. Model a segment with slope +1 by an automaton cell with state 0, and a segment with slope −1 by an automaton cell with state 1. The local minima of the surface are the points where a segment of slope −1 lies to the left of a segment of slope +1; that is, in the automaton, a position where a cell with state 1 lies to the left of a cell with state 0. Adding a particle to that position corresponds to changing the states of these two adjacent cells from 1,0 to 0,1, so advancing the polygonal line. This is exactly the behavior of Rule 184.
Related work on this model concerns deposition in which the arrival times of additional particles are random, rather than having particles arrive at all local minima simultaneously. These stochastic growth processes can be modeled as an asynchronous cellular automaton.
Ballistic annihilation
Ballistic annihilation describes a process by which moving particles and antiparticles annihilate each other when they collide. In the simplest version of this process, the system consists of a single type of particle and antiparticle, moving at equal speeds in opposite directions in a one-dimensional medium.
This process can be modeled by Rule 184, as follows. The particles are modeled as points that are aligned, not with the cells of the automaton, but rather with the interstices between cells. Two consecutive cells that both have state 0 model a particle at the space between these two cells that moves rightwards one cell at each time step. Symmetrically, two consecutive cells that both have state 1 model an antiparticle that moves leftwards one cell at each time step. The remaining possibilities for two consecutive cells are that they both have differing states; this is interpreted as modeling a background material without any particles in it, through which the particles move. With this interpretation, the particles and antiparticles interact by ballistic annihilation: when a rightwards-moving particle and a leftwards-moving antiparticle meet, the result is a region of background from which both particles have vanished, without any effect on any other nearby particles.
The behavior of certain other systems, such as one-dimensional cyclic cellular automata, can also be described in terms of ballistic annihilation. There is a technical restriction on the particle positions for the ballistic annihilation view of Rule 184 that does not arise in these other systems, stemming from the alternating pattern of the background: in the particle system corresponding to a Rule 184 state, if two consecutive particles are both of the same type they must be an odd number of cells apart, while if they are of opposite types they must be an even number of cells apart. However this parity restriction does not play a role in the statistical behavior of this system.
One author uses a similar but more complicated particle-system view of Rule 184: he not only views alternating 0–1 regions as background, but also considers regions consisting solely of a single state to be background as well. Based on this view he describes seven different particles formed by boundaries between regions, and classifies their possible interactions. The literature contains a more general survey of the cellular automaton models of annihilation processes.
Context-free parsing
In his book A New Kind of Science, Stephen Wolfram points out that rule 184, when run on patterns with density 50%, can be interpreted as parsing the context-free language describing strings formed from nested parentheses. This interpretation is closely related to the ballistic annihilation view of rule 184: in Wolfram's interpretation, an open parenthesis corresponds to a left-moving particle while a close parenthesis corresponds to a right-moving particle.
See also
Rule 30, Rule 90, and Rule 110, other one-dimensional cellular automata with different behavior
Notes
References
External links
Rule 184 in Wolfram's atlas of cellular automata
Cellular automaton rules
Lattice models
Wolfram code
Traffic flow | Rule 184 | [
"Physics",
"Materials_science"
] | 2,854 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
11,064,788 | https://en.wikipedia.org/wiki/Prismatic%20uniform%20polyhedron | In geometry, a prismatic uniform polyhedron is a uniform polyhedron with dihedral symmetry. They exist in two infinite families, the uniform prisms and the uniform antiprisms. All have their vertices in parallel planes and are therefore prismatoids.
Vertex configuration and symmetry groups
Because they are isogonal (vertex-transitive), their vertex arrangement uniquely corresponds to a symmetry group.
The difference between the prismatic and antiprismatic symmetry groups is that Dph has the vertices lined up in both planes, which gives it a reflection plane perpendicular to its p-fold axis (parallel to the {p/q} polygon); while Dpd has the vertices twisted relative to the other plane, which gives it a rotatory reflection. Each has p reflection planes which contain the p-fold axis.
The Dph symmetry group contains inversion if and only if p is even, while Dpd contains inversion symmetry if and only if p is odd.
Enumeration
There are:
prisms, for each rational number p/q > 2, with symmetry group Dph;
antiprisms, for each rational number p/q > 3/2, with symmetry group Dpd if q is odd, Dph if q is even.
If p/q is an integer, i.e. if q = 1, the prism or antiprism is convex. (The fraction is always assumed to be stated in lowest terms.)
An antiprism with p/q < 2 is crossed or retrograde; its vertex figure resembles a bowtie. If p/q < 3/2 no uniform antiprism can exist, as its vertex figure would have to violate the triangle inequality. If p/q = 3/2 the uniform antiprism is degenerate (has zero height).
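The enumeration above can be summarised in a short script. The sketch below (illustrative Python; the classify function and the bounds on p are assumptions, not from the article) applies the stated conditions, p/q > 2 for prisms and p/q > 3/2 for antiprisms, with the crossed and degenerate cases noted, to fractions already in lowest terms.

from fractions import Fraction
from math import gcd

def classify(p, q):
    """Classify the {p/q} prism and antiprism; assumes gcd(p, q) == 1."""
    r = Fraction(p, q)
    prism = "uniform prism" if r > 2 else "no uniform prism"
    if r >= 2:
        antiprism = "uniform antiprism"
    elif r > Fraction(3, 2):
        antiprism = "crossed (retrograde) uniform antiprism"
    elif r == Fraction(3, 2):
        antiprism = "degenerate antiprism (zero height)"
    else:
        antiprism = "no uniform antiprism"
    return "{%d/%d}: prism -> %s; antiprism -> %s" % (p, q, prism, antiprism)

for p in range(2, 8):
    for q in range(1, p):
        if gcd(p, q) == 1:
            print(classify(p, q))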
Forms by symmetry
Note: The tetrahedron, cube, and octahedron are listed here with dihedral symmetry (as a digonal antiprism, square prism and triangular antiprism respectively), although if uniformly colored, the tetrahedron also has tetrahedral symmetry and the cube and octahedron also have octahedral symmetry.
See also
Uniform polyhedron
Prism (geometry)
Antiprism
References
Cromwell, P.; Polyhedra, CUP, Hbk. 1997. Pbk. (1999). p. 175.
External links
Prisms and Antiprisms George W. Hart
Prismatoid polyhedra
Uniform polyhedra | Prismatic uniform polyhedron | [
"Physics"
] | 508 | [
"Uniform polytopes",
"Uniform polyhedra",
"Symmetry"
] |
11,070,790 | https://en.wikipedia.org/wiki/Maximum%20entropy%20spectral%20estimation | Maximum entropy spectral estimation is a method of spectral density estimation. The goal is to improve the spectral quality based on the principle of maximum entropy. The method is based on choosing the spectrum which corresponds to the most random or the most unpredictable time series whose autocorrelation function agrees with the known values. This assumption, which corresponds to the concept of maximum entropy as used in both statistical mechanics and information theory, is maximally non-committal with regard to the unknown values of the autocorrelation function of the time series. It is simply the application of maximum entropy modeling to any type of spectrum and is used in all fields where data is presented in spectral form. The usefulness of the technique varies based on the source of the spectral data since it is dependent on the amount of assumed knowledge about the spectrum that can be applied to the model.
In maximum entropy modeling, probability distributions are created on the basis of that which is known, leading to a type of statistical inference about the missing information which is called the maximum entropy estimate. For example, in spectral analysis the expected peak shape is often known, but in a noisy spectrum the center of the peak may not be clear. In such a case, inputting the known information allows the maximum entropy model to derive a better estimate of the center of the peak, thus improving spectral accuracy.
Method description
In the periodogram approach to calculating the power spectra, the sample autocorrelation function is multiplied by some window function and then Fourier transformed. The window is applied to provide statistical stability as well as to avoid leakage from other parts of the spectrum. However, the window limits the spectral resolution.
Maximum entropy method attempts to improve the spectral resolution by extrapolating the correlation function beyond the maximum lag in such a way that the entropy of the corresponding probability density function is maximized in each step of the extrapolation.
The maximum entropy rate stochastic process that satisfies the given empirical autocorrelation and variance constraints is an autoregressive model with independent and identically distributed zero-mean Gaussian input.
Therefore, the maximum entropy method is equivalent to least-squares fitting the available time series data to an autoregressive model
x_n = \sum_{k=1}^{p} a_k x_{n-k} + \varepsilon_n,
where the \varepsilon_n are independent and identically distributed as N(0, \sigma^2). The unknown coefficients a_1, \ldots, a_p are found using the least-squares method. Once the autoregressive coefficients have been determined, the spectrum of the time series data is estimated by evaluating the power spectral density function of the fitted autoregressive model
S(f) = \frac{\sigma^2 \, \Delta t}{\left| 1 - \sum_{k=1}^{p} a_k e^{-2\pi i f k \Delta t} \right|^2},
where \Delta t is the sampling period and i is the imaginary unit.
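A minimal numerical sketch of this procedure is given below (Python with NumPy; the function name, the AR order of 20 and the test signal are illustrative assumptions, not from the article). It fits the autoregressive coefficients by least squares and then evaluates the resulting power spectral density.

import numpy as np

def ar_max_entropy_psd(x, p, dt=1.0, nfreq=512):
    """Return (frequencies, PSD) for an AR(p) / maximum-entropy spectral estimate."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    # Least-squares problem: x[n] ~ sum_k a[k] * x[n-k] for n = p .. N-1.
    rows = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    target = x[p:]
    a, *_ = np.linalg.lstsq(rows, target, rcond=None)
    sigma2 = np.mean((target - rows @ a) ** 2)        # innovation (residual) variance
    freqs = np.linspace(0.0, 0.5 / dt, nfreq)          # up to the Nyquist frequency
    k = np.arange(1, p + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs * dt, k)) @ a) ** 2
    return freqs, sigma2 * dt / denom

# Example: a noisy sinusoid at 0.1 cycles/sample gives a sharp peak near f = 0.1.
rng = np.random.default_rng(0)
n = np.arange(1024)
signal = np.sin(2 * np.pi * 0.1 * n) + 0.5 * rng.standard_normal(n.size)
f, psd = ar_max_entropy_psd(signal, p=20)
print(f[np.argmax(psd)])   # approximately 0.1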
References
Cover, T. and Thomas, J. (1991) Elements of Information Theory. John Wiley and Sons, Inc.
Burg J.P. (1967). Maximum Entropy Spectral Analysis. Proceedings of 37th Meeting, Society of Exploration Geophysics, Oklahoma City.
External links
kSpectra Toolkit for Mac OS X from SpectraWorks.
memspectum: a python package for maximum entropy spectral estimation with python
Entropy
Information theory
Statistical signal processing
Spectroscopy | Maximum entropy spectral estimation | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 611 | [
"Thermodynamic properties",
"Telecommunications engineering",
"Molecular physics",
"Spectrum (physical sciences)",
"Physical quantities",
"Statistical signal processing",
"Instrumental analysis",
"Quantity",
"Applied mathematics",
"Computer science",
"Entropy",
"Information theory",
"Enginee... |
8,903,697 | https://en.wikipedia.org/wiki/Nifurtimox | Nifurtimox, sold under the brand name Lampit, is a medication used to treat Chagas disease and sleeping sickness. For sleeping sickness it is used together with eflornithine in nifurtimox-eflornithine combination treatment. In Chagas disease it is a second-line option to benznidazole. It is given by mouth.
Common side effects include abdominal pain, headache, nausea, and weight loss. Animal studies have raised concerns that it may increase the risk of cancer, but such effects have not been found in human trials. Nifurtimox is not recommended in pregnancy or in those with significant kidney or liver problems. It is a type of nitrofuran.
Nifurtimox came into medication use in 1965. It is on the World Health Organization's List of Essential Medicines. It is not available commercially in Canada. It was approved for medical use in the United States in August 2020. In regions of the world where the disease is common nifurtimox is provided for free by the World Health Organization (WHO).
Medical uses
Nifurtimox has been used to treat Chagas disease, when it is given for 30 to 60 days. However, long-term use of nifurtimox does increase the chance of adverse events such as gastrointestinal and neurological side effects. Due to the low tolerability and completion rate of nifurtimox, benznidazole is now more often considered for those who have Chagas disease and require long-term treatment.
In the United States nifurtimox is indicated in children and adolescents (from birth to less than 18 years of age, above a minimum body weight) for the treatment of Chagas disease (American Trypanosomiasis), caused by Trypanosoma cruzi.
Nifurtimox has also been used to treat African trypanosomiasis (sleeping sickness), and is active in the second stage of the disease (central nervous system involvement). When nifurtimox is given on its own, about half of all patients will relapse, but the combination of melarsoprol with nifurtimox appears to be efficacious. Trials are awaited comparing melarsoprol/nifurtimox against melarsoprol alone for African sleeping sickness.
Combination therapy with eflornithine and nifurtimox is safer and easier than treatment with eflornithine alone, and appears to be equally or more effective. It has been recommended as first-line treatment for second-stage African trypanosomiasis.
Pregnancy and breastfeeding
Use of nifurtimox should be avoided in pregnant women because experience with it in pregnancy is limited. Limited data indicate that nifurtimox doses up to 15 mg/kg daily can cause adverse effects in breastfed infants. Other authors do not consider breastfeeding a contraindication during nifurtimox use.
Side effects
Side effects occur following chronic administration, particularly in elderly people.
Major toxicities include immediate hypersensitivity such as anaphylaxis and delayed hypersensitivity reaction involving icterus and dermatitis. Central nervous system disturbances and peripheral neuropathy may also occur.
Most common side effects
anorexia
weight loss
nausea
vomiting
headache
dizziness
amnesia
Less common effects
rash
depression
anxiety
confusion
fever
sore throat
chills
seizures
impotence
tremors
muscle weakness
numbness of hands or feet
Contraindications
Nifurtimox is contraindicated in people with severe liver or kidney disease, as well as people with a background of neurological or psychiatric disorders.
Mechanism of action
Nifurtimox forms a nitro-anion radical metabolite that reacts with nucleic acids of the parasite causing significant breakdown of DNA. Its mechanism is similar to that proposed for the antibacterial action of metronidazole. Nifurtimox undergoes reduction and creates oxygen radicals such as superoxide. These radicals are toxic to T. cruzi. Mammalian cells are protected by presence of catalase, glutathione, peroxidases, and superoxide dismutase. Accumulation of hydrogen peroxide to cytotoxic levels results in parasite death.
Society and culture
Legal status
Nifurtimox is licensed for use in Argentina, the United States, Turkey and Germany amongst others. It was approved for medical use in the United States in August 2020.
Names
Research
Nifurtimox is in a phase-II clinical trial for the treatment of pediatric neuroblastoma and medulloblastoma.
References
External links
Antiprotozoal agents
Chagas disease
Drugs developed by Bayer
Hydrazones
Nitrofurans
Sulfones
Thiomorpholines
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Orphan drugs | Nifurtimox | [
"Chemistry",
"Biology"
] | 968 | [
"Antiprotozoal agents",
"Functional groups",
"Sulfones",
"Hydrazones",
"Biocides"
] |
8,906,733 | https://en.wikipedia.org/wiki/Oseltamivir%20total%20synthesis | Oseltamivir total synthesis concerns the total synthesis of the anti-influenza drug oseltamivir marketed by Hoffmann-La Roche under the trade name Tamiflu. Its commercial production starts from the biomolecule shikimic acid harvested from Chinese star anise and from recombinant E. coli. Control of stereochemistry is important: the molecule has three stereocenters and the sought-after isomer is only 1 of 8 stereoisomers.
Commercial production
The current production method is based on the first scalable synthesis developed by Gilead Sciences starting from naturally occurring quinic acid or shikimic acid. Due to lower yields and the extra steps required (because of the additional dehydration), the quinic acid route was dropped in favour of the one based on shikimic acid, which received further improvements by Hoffmann-La Roche.
The current industrial synthesis is summarised below:
Karpf / Trussardi synthesis
The current production method includes two reaction steps with potentially hazardous azides. A reported azide-free Roche synthesis of tamiflu is summarised graphically below:
The synthesis commences from naturally available (−)-shikimic acid. The 3,4-pentylidene acetal mesylate is prepared in three steps: esterification with ethanol and thionyl chloride; ketalization with p-toluenesulfonic acid and 3-pentanone; and mesylation with triethylamine and methanesulfonyl chloride. Reductive opening of the ketal under modified Hunter conditions in dichloromethane yields an inseparable mixture of isomeric mesylates. The corresponding epoxide is formed under basic conditions with potassium bicarbonate. Using the inexpensive Lewis acid magnesium bromide diethyl etherate (commonly prepared fresh by the addition of magnesium
turnings to 1,2-dibromoethane in benzene:diethyl ether), the epoxide is opened with allyl amine to yield the corresponding 1,2-amino alcohol. The water-immiscible solvents methyl tert-butyl ether and acetonitrile are used to simplify the workup procedure, which involved stirring with 1 M aqueous ammonium sulfate. Reduction on palladium, promoted by ethanolamine, followed by acidic workup yielded the deprotected 1,2-aminoalcohol. The aminoalcohol was converted directly to the corresponding allyl-diamine in an interesting cascade sequence that commences with the unselective imination of benzaldehyde with azeotropic water removal in methyl tert-butyl ether. Mesylation, followed by removal of the solid byproduct triethylamine hydrochloride, results in an intermediate that was poised to undergo aziridination upon transimination with another equivalent of allylamine. With the librated methanesulfonic acid, the
aziridine opens cleanly to yield a diamine that immediately undergoes a second transimination. Acidic hydrolysis then removed the imine. Selective acylation with acetic anhydride (under buffered conditions, the 5-amino group is protonated owing to a considerable difference in pKa, 4.2 vs 7.9, preventing acetylation) yields the desired N-acetylated product in crystalline form upon extractive workup. Finally, deallylation as above, yielded the freebase of oseltamivir, which was converted to the desired oseltamivir phosphate by treatment with phosphoric acid. The final product is obtained in high purity (99.7%) and an overall yield of 17-22% from (−)-shikimic acid. It is noted that the synthesis avoids the use of potentially explosive azide reagents and intermediates; however, the synthesis actually used by Roche uses azides. Roche has other routes to
oseltamivir that do not involve the use of (−)-shikimic acid as a chiral pool starting material, such as a Diels-Alder route involving furan and ethyl acrylate or an isophthalic acid route, which involves catalytic hydrogenation and enzymatic desymmetrization.
Corey synthesis
In 2006 the group of E.J. Corey published a novel route bypassing shikimic acid starting from butadiene and acrylic acid. The inventors chose not to patent this procedure which is described below.
Butadiene 1 reacts in an asymmetric Diels-Alder reaction with the esterification product of acrylic acid and 2,2,2-trifluoroethanol 2 catalysed by the CBS catalyst. The ester 3 is converted into an amide in 4 by reaction with ammonia and the next step to lactam 5 is an iodolactamization with iodine initiated by trimethylsilyl triflate. The amide group is fitted with a BOC protecting group by reaction with Boc anhydride in 6 and the iodine substituent is removed in an elimination reaction with DBU to the alkene 7. Bromine is introduced in 8 by an allylic bromination with NBS and the amide group is cleaved with ethanol and caesium carbonate accompanied by elimination of bromide to the diene ethyl ester 9. The newly formed double bond is functionalized with N-bromoacetamide 10 catalyzed with
tin(IV) bromide with complete control of stereochemistry. In the next step the bromine atom in 11 is displaced by the nitrogen atom in the amide group with the strong base KHMDS to the aziridine 12 which in turn is opened by reaction with 3-pentanol 13 to the ether 14. In the final step the BOC group is removed with phosphoric acid and the oseltamivir phosphate 15 is formed.
Shibasaki synthesis
Also in 2006 the group of Masakatsu Shibasaki of the University of Tokyo published a synthesis again bypassing shikimic acid.
An improved method published in 2007 starts with the enantioselective desymmetrization of aziridine 1 with trimethylsilyl azide (TMSN3) and a chiral catalyst to the azide 2. The amide group is protected as a BOC group with Boc anhydride and DMAP in 3 and iodolactamization with iodine and potassium carbonate first gives the unstable intermediate 4 and then stable cyclic carbamate 5 after elimination of hydrogen iodide with DBU.
The amide group is reprotected as BOC 6 and the azide group converted to the amide 7 by reductive acylation with thioacetic acid and 2,6-lutidine. Caesium carbonate accomplishes the hydrolysis of the carbamate group to the alcohol 8 which is subsequently oxidized to ketone 9 with Dess-Martin periodinane. Cyanophosphorylation with diethyl phosphorocyanidate (DEPC) modifies the ketone group to the cyanophosphate 10 paving the way for an intramolecular allylic rearrangement to unstable β-allyl phosphate 11 (toluene, sealed tube) which is hydrolyzed to alcohol 12 with ammonium chloride. This hydroxyl group has the wrong stereochemistry and is therefore inverted in a Mitsunobu reaction with p-nitrobenzoic acid followed by hydrolysis of the p-nitrobenzoate to 13.
A second Mitsunobu reaction then forms the aziridine 14 available for ring-opening reaction with 3-pentanol catalyzed by boron trifluoride to ether 15. In the final step the BOC group is removed (HCl) and phosphoric acid added to objective 16.
Fukuyama synthesis
An approach published in 2007 like Corey's starts by an asymmetric Diels-Alder reaction this time with starting materials pyridine and acrolein.
Pyridine (1) is reduced with sodium borohydride in presence of benzyl chloroformate to the Cbz protected dihydropyridine 2. The asymmetric Diels-Alder reaction with acrolein 3 is carried out with the McMillan catalyst to the aldehyde 4 as the endo isomer which is oxidized to the carboxylic acid 5 with sodium chlorite, monopotassium phosphate and 2-methyl-2-butene. Addition of bromine gives halolactonization product 6 and after replacement of the Cbz protective group by a BOC protective group in 7 (hydrogenolysis in the presence of di-tert-butyl dicarbonate) a carbonyl group is introduced in intermediate 8 by catalytic ruthenium(IV) oxide and sacrificial catalyst sodium periodate. Addition of ammonia cleaves the ester group to form amide 9 the alcohol group of which is mesylated to compound 10. In the next step iodobenzene diacetate is added, converting the amide in a Hofmann rearrangement to the allyl carbamate 12 after capturing the intermediate isocyanate with allyl alcohol 11. On addition of sodium ethoxide in ethanol three reactions take place simultaneously: cleavage of the amide to form new an ethyl ester group, displacement of the mesyl group by newly formed BOC protected amine to an aziridine group and an elimination reaction forming the alkene group in 13 with liberation of HBr. In the final two steps the aziridine ring is opened by 3-pentanol 14 and boron trifluoride to aminoether 15 with the BOC group replaced by an acyl group and on removal of the other amine protecting group (Pd/C, Ph3P, and 1,3-dimethylbarbituric acid in ethanol) and addition of phosphoric acid oseltamivir 16 is obtained.
Trost synthesis
In 2008 the group of Barry M. Trost of Stanford University published the shortest synthetic route to date.
Hayashi synthesis
In 2009, Hayashi et al. successfully produced an efficient, low cost synthetic route to prepare (-)-oseltamivir (1). Their goal was to design a procedure that would be suitable for large-scale production. Keeping cost, yield, and number of synthetic steps in mind, an enantioselective total synthesis of (1) was accomplished through three one-pot operations. Hayashi et al.'s use of one-pot operations allowed them to perform several reactions steps in a single pot, which ultimately minimized the number of purification steps needed, waste, and saved time.
In the first one-pot operation, Hayashi et al. begins by using diphenylprolinol silyl ether (4) as an organocatalyst, along with alkoxyaldehyde (2) and nitroalkene (3) to perform an asymmetric Michael reaction, affording an enantioselective Michael adduct. Upon addition of a diethyl vinylphosphate derivative (5) to the Michael adduct, a domino Michael reaction and Horner-Wadsworth-Emmons reaction occurs due to the phosphonate group produced from (5) to give an ethyl cyclohexenecarboxylate derivative along with two unwanted by-products. To transform the undesired by-products into the desired ethyl cyclohexencarboxylate derivative, the mixture of the product and by-products was treated with Cs2CO3 in ethanol. This induced a retro-Michael reaction on one by-product and a retro-aldol reaction accompanied with a Horner-Wadsworth-Emmons reaction for the other. Both by-products were successfully converted to the desired derivative. Finally, the addition of p-toluenethiol with Cs2CO3 gives (6) in a 70% yield after being purified by column chromatography, with the desired isomer dominating.
In the second one-pot operation, trifluoroacetic acid is employed first to deprotect the tert-butyl ester of (6); any excess reagent is removed via evaporation. The carboxylic acid produced as a result of the deprotection is then converted to an acyl chloride by oxalyl chloride and a catalytic amount of DMF. Finally, addition of sodium azide, in the last reaction of the second one-pot operation, produces the acyl azide (7) without any purification needed.
The final one-pot operation begins with a Curtius rearrangement of acyl azide (7) to produce an isocyanate functional group at room temperature. The isocyanate derivative then reacts with acetic acid to yield the desired acetylamino moiety found in (1). This domino Curtius rearrangement and amide formation occurs in the absence of heat, which is extremely beneficial for reducing any possible hazard. The nitro moiety of (7) is reduced to the desired amine observed in (1) with Zn/HCl. Due to the harsh conditions of the nitro reduction, ammonia was used to neutralize the reaction. Potassium carbonate was then added to give (1), via a retro-Michael reaction of the thiol. (1) was then purified by an acid/base extraction. The overall yield for the total synthesis of (-)-oseltamivir is 57%. Hayashi et al.'s use of inexpensive, non-hazardous reagents has allowed for an efficient, high-yielding synthetic route that allows a vast number of novel derivatives to be produced in hopes of combatting viruses resistant to (-)-oseltamivir.
References
External links
Oseltamivir Total Syntheses @ SynArchive.com
Total synthesis
Neuraminidase inhibitors | Oseltamivir total synthesis | [
"Chemistry"
] | 2,945 | [
"Total synthesis",
"Glycobiology",
"Neuraminidase inhibitors",
"Chemical synthesis"
] |
7,401,066 | https://en.wikipedia.org/wiki/HiPER | The High Power laser Energy Research facility (HiPER), is a proposed experimental laser-driven inertial confinement fusion (ICF) device undergoing preliminary design for possible construction in the European Union. , the effort appears to be inactive.
HiPER was designed to study the "fast ignition" approach to generating nuclear fusion, which uses much smaller lasers than conventional ICF designs, yet produces fusion power outputs of about the same magnitude. This offers a total "fusion gain" that is much higher than devices like the National Ignition Facility (NIF), and a reduction in construction costs of about ten times. This opened a window for a small machine to be rapidly built that would reach ignition before NIF. HiPER and the Japanese FIREX designs intended to explore this approach.
However, research into the fast ignition approach on smaller machines like the Omega laser in the US demonstrated a number of problems with the concept. Another alternative approach, shock ignition, began to take over future development starting around 2012. HiPER and FIREX both appear to have seen no additional development since that time.
HiPER should not be confused with an earlier ICF device in Japan known as "HIPER", which has not been operational for some time.
Background
Inertial confinement fusion (ICF) devices use "drivers" to rapidly heat the outer layers of a "target" to compress it. The target is a small spherical pellet containing a few milligrams of fusion fuel, typically a mix of deuterium and tritium, or "D-T". The heat of the laser burns the surface of the pellet into a plasma, which explodes off the surface. The remaining portion of the target is driven inward due to Newton's Third Law, collapsing into a small point of very high density. The rapid blowoff also creates a shock wave that travels toward the center of the compressed fuel. When it reaches the center of the fuel and meets the shock from the other side of the target, the energy in the center further heats and compresses the tiny volume around it. If the temperature and density of that small spot can be raised high enough, fusion reactions will occur. This approach is now known as "hot-spot ignition" to distinguish it from new approaches.
The fusion reactions release high-energy particles, some of which (primarily alpha particles) collide with the high density fuel around it and slow down. This heats the surrounding fuel, and can potentially cause that fuel to undergo fusion as well. Given the right overall conditions of the compressed fuel – high enough density and temperature – this heating process can result in a chain reaction, burning outward from the center. This is a condition known as "ignition", which can lead to a significant portion of the fuel in the target undergoing fusion, and the release of significant amounts of energy.
To date most ICF experiments have used lasers to heat the targets. Calculations show that the energy must be delivered quickly to compress the core before it disassembles, as well as creating a suitable shock wave. The energy must also be focused extremely evenly across the target's outer surface to collapse the fuel into a symmetric core. Although other drivers have been suggested, notably heavy ions driven in particle accelerators, lasers are currently the only devices with the right combination of features.
Description
In the case of HiPER, the driver laser system is similar to existing systems like NIF, but considerably smaller and less powerful.
The driver consists of a number of "beamlines" containing Nd:glass laser amplifiers at one end of the building. Just prior to firing, the glass is "pumped" to a high-energy state with a series of xenon flash tubes, causing a population inversion of the neodymium (Nd) atoms in the glass. This readies them for amplification via stimulated emission when a small amount of laser light, generated externally in a fibre optic, is fed into the beamlines. The glass is not particularly effective at transferring power into the beam, so to get as much power as possible back out, the beam is reflected through the glass four times in a mirrored cavity, each time gaining more power. When this process is complete, a Pockels cell switches the light out of the cavity. One problem for the HiPER project is that Nd:glass is no longer being produced commercially, so a number of options need to be studied to ensure supply of the estimated 1,300 disks.
From there, the laser light is fed into a very long spatial filter to clean up the resulting pulse. The filter is essentially a telescope that focuses the beam into a spot some distance away, where a small pinhole located at the focal point cuts off any "stray" light caused by inhomogeneities in the laser beam. The beam then widens out until a second lens returns it to a straight beam again. It is the use of spatial filters that leads to the long beamlines seen in ICF laser devices. In the case of HiPER, the filters take up about 50% of the overall length. The beam width at the exit of the driver system is about 40 cm × 40 cm.
One of the problems encountered in previous experiments, notably the Shiva laser, was that the infrared light provided by the Nd:glass lasers (at ~1054 nm in vacuo) couples strongly with the electrons around the target, losing a considerable amount of energy that would otherwise heat the target itself. This is typically addressed through the use of an optical frequency multiplier, which can double or triple the frequency of the light, into the green or ultraviolet, respectively. These higher frequencies interact less strongly with the electrons, putting more power into the target. HiPER will use frequency tripling on the drivers.
When the amplification process is complete the laser light enters the experimental chamber, lying at one end of the building. Here it is reflected off a series of deformable mirrors that help correct remaining imperfections in the wavefront, and then feeds them into the target chamber from all angles. Since the overall distances from the ends of the beamlines to different points on the target chamber are different, delays are introduced on the individual paths to ensure they all reach the center of the chamber at the same time, within about 10 picoseconds (ps). The target, a fusion fuel pellet about 1 mm in diameter in the case of HiPER, lies at the center of the chamber.
HiPER differs from most ICF devices in that it also includes a second set of lasers for directly heating the compressed fuel. The heating pulse needs to be very short, about 10 to 20 ps long, but this is too short a time for the amplifiers to work well. To solve this problem HiPER uses a technique known as chirped pulse amplification (CPA). CPA starts with a short pulse from a wide-bandwidth (multi-frequency) laser source, as opposed to the driver which uses a monochromatic (single-frequency) source. Light from this initial pulse is split into different colours using a pair of diffraction gratings and optical delays. This "stretches" the pulse into a chain several nanoseconds long. The pulse is then sent into the amplifiers as normal. When it exits the beamlines it is recombined in a similar set of gratings to produce a single very short pulse, but because the pulse now has very high power, the gratings have to be large (approx 1 m) and sit in a vacuum. Additionally the individual beams must be lower in power overall; the compression side of the system uses 40 beamlines of about 5 kJ each to generate a total of 200 kJ, whereas the ignition side requires 24 beamlines of just under 3 kJ to generate a total of 70 kJ. The precise number and power of the beamlines are currently a subject of research. Frequency multiplication will also be used on the heaters, but it has not yet been decided whether to use doubling or tripling; the latter puts more power into the target, but is less efficient converting the light. As of 2007, the baseline design is based on doubling into the green.
Fast Ignition and HiPER
In traditional ICF devices the driver laser is used to compress the target to very high densities. The shock wave created by this process further heats the compressed fuel when it collides in the center of the sphere. If the compression is symmetrical enough the increase in temperature can create conditions close to the Lawson criterion and lead to ignition.
The amount of laser energy needed to effectively compress the targets to ignition conditions has grown rapidly from early estimates. In the "early days" of ICF research in the 1970s it was believed that as little as 1 kilojoules (kJ) would suffice, and a number of experimental lasers were built to reach these power levels. When they did, a series of problems, typically related to the homogeneity of the collapse, turned out to seriously disrupt the implosion symmetry and lead to much cooler core temperatures than originally expected. Through the 1980s the estimated energy required to reach ignition grew into the megajoule range, which appeared to make ICF impractical for fusion energy production. For instance, the National Ignition Facility (NIF) uses about 420 MJ of electrical power to pump the driver lasers, and in the best case is expected to produce about 20 MJ of fusion power output. Without dramatic gains in output, such a device would never be a practical energy source.
The fast ignition approach attempts to avoid these problems. Instead of using the shock wave to create the conditions needed for fusion above the ignition range, this approach directly heats the fuel. This is far more efficient than the shock wave, which becomes less important. In HiPER, the compression provided by the driver is "good", but not nearly that created by larger devices like NIF; HiPER's driver is about 200 kJ and produces densities of about 300 g/cm3. This is about one-third that of NIF, and about the same as generated by the earlier NOVA laser of the 1980s. For comparison, lead is about 11 g/cm3, so this still represents a considerable amount of compression, notably when one considers that the target's interior initially contains light D-T fuel at around 0.1 g/cm3.
Ignition is started by a very-short (~10 picoseconds) ultra-high-power (~70 kJ, 4 PW) laser pulse, aimed through a hole in the plasma at the core. The light from this pulse interacts with the cool surrounding fuel, generating a shower of high-energy (3.5 MeV) relativistic electrons that are driven into the fuel. The electrons heat a spot on one side of the dense core, and if this heating is localised enough it is expected to drive the area well beyond ignition energies.
The overall efficiency of this approach is many times that of the conventional approach. In the case of NIF the laser generates about 4 MJ of infrared energy to create ignition that releases about 20 MJ of energy. This corresponds to a "fusion gain" (the ratio of output fusion energy to input laser energy) of about 5. If one uses the baseline assumptions for the current HiPER design, the two lasers (driver and heater) produce about 270 kJ in total, yet generate 25 to 30 MJ, a gain of about 100. Considering a variety of losses, the actual gain is predicted to be around 72. Not only does this outperform NIF by a wide margin, but the smaller lasers are also much less expensive to build. In terms of power-for-cost, HiPER is expected to be about an order of magnitude less expensive than conventional devices like NIF.
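These gain figures follow directly from the ratio of output to input energy, as the following small calculation (using the numbers quoted in this article; not a design tool) shows:

def fusion_gain(laser_energy_mj, fusion_yield_mj):
    """Fusion gain = fusion energy released / laser energy delivered to the target."""
    return fusion_yield_mj / laser_energy_mj

# NIF-class conventional hot-spot ignition: ~4 MJ of (infrared) laser energy, ~20 MJ yield.
print(fusion_gain(4.0, 20.0))            # about 5

# HiPER fast-ignition baseline: ~0.2 MJ driver + ~0.07 MJ heater, 25-30 MJ yield.
print(fusion_gain(0.2 + 0.07, 27.5))     # about 100 before loss corrections

# Wall-plug comparison for NIF: ~420 MJ of electrical energy pumped into the driver.
print(20.0 / 420.0)                      # about 0.05, i.e. far below breakeven at the plant level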
Compression is already a fairly well-understood problem, and HiPER is primarily interested in exploring the precise physics of the rapid heating process. It is not clear how quickly the electrons stop in the fuel load; while this is known for matter under normal pressures, it's not for the ultra-dense conditions of the compressed fuel. To work efficiently, the electrons should stop in as short a distance as possible, to release their energy into a small spot and thus raise the temperature (energy per unit volume) as high as possible.
How to get the laser light onto that spot is also a matter for further research. One approach uses a short pulse from another laser to heat the plasma outside the dense "core", essentially burning a hole through it and exposing the dense fuel inside. This approach will be tested on the OMEGA-EP system in the US. Another approach, tested successfully on the GEKKO XII laser in Japan, uses a small gold cone that cuts through a small area of the target shell; on heating no plasma is created in this area, leaving a hole that can be aimed into by shining the laser into the inner surface of the cone. HiPER is currently planning on using the gold cone approach, but will likely study the burning solution as well.
Related research
In 2005 HiPER completed a preliminary study outlining possible approaches and arguments for its construction. The report received positive reviews from the EC in July 2007, and moved onto a preparatory design phase in early 2008 with detailed designs for construction beginning in 2011 or 2012.
In parallel, the HiPER project also proposes to build smaller laser systems with higher repetition rates. The high-powered flash lamps used to pump the laser amplifier glass cause it to deform, and it cannot be fired again until it cools off, which takes as long as a day. Additionally, only a very small amount of the flash of white light generated by the tubes is of the right frequency to be absorbed by the Nd:glass and thus lead to amplification; in general only about 1 to 1.5% of the energy fed into the tubes ends up in the laser beam.
Key to avoiding these problems is replacing the flash lamps with more efficient pumps, typically based on laser diodes. These are far more efficient at generating light from electricity, and thus run much cooler. More importantly, the light they do generate is fairly monochromatic and can be tuned to frequencies that can be easily absorbed. This means that much less power needs to be used to produce any particular amount of laser light, further reducing the overall amount of heat being generated. The improvement in efficiency can be dramatic; existing experimental devices operate at about 10% overall efficiency, and it is believed "near term" devices will improve this as high as 20%.
Current status
Further research in the fast ignition approach cast serious doubt on its future. By 2013, the US National Academy of Sciences concluded that it was no longer a worthwhile research direction, stating "At this time, fast ignition appears to be a less promising approach for IFE than other ignition concepts."
See also
Laser Mégajoule
References
Bibliography
Mike Dunne et al., "HiPER Technical Background and Conceptual Design Report 2007", June 2007
Mike Dunne et al., "HiPER: a laser fusion facility for Europe", 2005
Edwin Cartlidge, "Europe plans laser-fusion facility", Physics World, 2 September 2005
External links
HiPER Project – Project home page
Fast track to fusion – includes an image of the gold-cone approach
Hydrodynamic Instability Experiments at the GEKKO XII/HIPER Laser – the Japanese experiment of the same name, for comparison
Laser vision fuels energy future – BBC news report
Professor Mike Dunne, Director of the UK's Central Laser Facility, on European plans for creating fusion energy, Ingenia magazine, December 2007
HiPER Power – Article on physics.org, August 2009
Nuclear research institutes
Inertial confinement fusion research lasers
Energy in the European Union | HiPER | [
"Engineering"
] | 3,214 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
7,401,552 | https://en.wikipedia.org/wiki/Bioastronautics | Bioastronautics is a specialty area of biological and astronautical research which encompasses numerous aspects of biological, behavioral, and medical concern governing humans and other living organisms in outer space; and includes the design of space vehicle payloads, space habitats, and life-support systems. In short, it spans the study and support of life in space.
Bioastronautics shares many similarities with its sister discipline astronautical hygiene; they both study the hazards that humans may encounter during a space flight. However, astronautical hygiene differs in many respects; for example, in this discipline, once a hazard is identified, the exposure risks are then assessed and the most effective measures determined to prevent or control exposure and thereby protect the health of the astronaut. Astronautical hygiene is an applied scientific discipline that requires knowledge and experience of many fields including bioastronautics, space medicine, ergonomics, etc. The skills of astronautical hygiene are already being applied, for example, to characterise Moon dust and design the measures to mitigate exposure during lunar exploration, to develop accurate chemical monitoring techniques and to use the results in setting SMACs.
Of particular interest from a biological perspective are the effects of reduced gravitational force felt by inhabitants of spacecraft. Often referred to as "microgravity", the lack of sedimentation, buoyancy, or convective flows in fluids results in a more quiescent cellular and intercellular environment primarily driven by chemical gradients. Certain functions of organisms are mediated by gravity, such as gravitropism in plant roots and negative gravitropism in plant stems, and without this stimulus growth patterns of organisms onboard spacecraft often diverge from their terrestrial counterparts. Additionally, metabolic energy normally expended in overcoming the force of gravity remains available for other functions. This may take the form of accelerated growth in organisms as diverse as worms like C. elegans to miniature parasitoid wasps such as Spangia endius. It may also be used in the augmented production of secondary metabolites such as the vinca alkaloids Vincristine and Vinblastine in the rosy periwinkle (Catharanthus roseus), whereby space grown specimens often have higher concentrations of these constituents that on earth are present in only trace amounts.
Engineering considerations
From an engineering perspective, facilitating the delivery and exchange of air, food, and water, and the processing of waste products is also challenging. The transition from expendable physicochemical methods to sustainable bioregenerative systems that function as a robust miniature ecosystem is another goal of bioastronautics in facilitating long duration space travel. Such systems are often termed Closed Ecological Life Support Systems (CELSS).
Medical considerations
From a medical perspective, long duration space flight also has physiological impacts on astronauts. Accelerated bone decalcification, similar to osteopenia and osteoporosis on Earth, is just one such condition. Another serious concern is the effects of space travel upon the kidneys. Current estimates of these effects upon the kidneys indicate that unless some kind of effective additional remedial technology against kidney damage is employed, astronauts who have been exposed to micro-gravity, reduced gravity, and Galactic radiation for 3 years or so on a Mars mission may have to return to Earth while attached to dialysis machines. The study of the potential effects of space travel is useful not only for advancing methods of safely inhabiting and travelling through space, but also in uncovering ways to more effectively treat closely related terrestrial ailments.
NASA's Bioastronautics library
NASA's Johnson Space Center in Houston, Texas maintains a Bioastronautics Library. The one-room facility provides a collection of textbooks, reference books, conference proceedings, and academic journals related to bioastronautics topics. Because the library is located within secure government property (not part of Space Center Houston, the official visitors center of JSC), it is not generally accessible to the public.
See also
Effect of spaceflight on the human body
Life support system
Space habitation
Locomotion in space
Reduced muscle mass, strength and performance in space
Space food
Astronautical hygiene
Spaceflight radiation carcinogenesis
Space medicine
Sex in space
Space tourism
Space-based economy
List of spaceflight-related accidents and incidents
Writing in space
Space art#Art in space
Religion in space
Organisms at high altitude
Astrobiology
Astrobotany
Plants in space
References
External links
Harvard-MIT Health Sciences and Technology - Bioastronautics Training Program (HST-Bioastro)
NASA's Bioastronautics Roadmap
University of Colorado at Boulder Bioastronautics Research Group
The American Society for Gravitational and Space Biology (ASGSB)
1965 radio series titled Their Other World, 13 half-hour episodes with typed transcript .
Aviation medicine
Human spaceflight
Biological engineering
Space medicine | Bioastronautics | [
"Engineering",
"Biology"
] | 988 | [
"Biological engineering"
] |
7,407,236 | https://en.wikipedia.org/wiki/TIM/TOM%20complex | The TIM/TOM complex is a protein complex in cellular biochemistry which translocates proteins produced from nuclear DNA through the mitochondrial membrane for use in oxidative phosphorylation. In enzymology, the complex is described as an mitochondrial protein-transporting ATPase (), or more systematically ATP phosphohydrolase (mitochondrial protein-importing), as the TIM part requires ATP hydrolysis to work.
Only 13 proteins necessary for a mitochondrion are actually coded in mitochondrial DNA. The vast majority of proteins destined for the mitochondria are encoded in the nucleus and synthesised in the cytoplasm. These are tagged by an N-terminal or/and a C-terminal signal sequence. Following transport through the cytosol from the nucleus, the signal sequence is recognized by a receptor protein in the translocase of the outer membrane (TOM) complex. The signal sequence and adjacent portions of the polypeptide chain are inserted in the TOM complex, then begin interaction with a translocase of the inner membrane (TIM) complex, which are hypothesized to be transiently linked at sites of close contact between the two membranes. The signal sequence is then translocated into the matrix in a process that requires an electrochemical hydrogen ion gradient across the inner membrane. Mitochondrial Hsp70 binds to regions of the polypeptide chain and maintains it in an unfolded state as it moves into the matrix.
The ATPase domain is essential during the interaction between the protein Hsp70 and the subunit Tim44. Without ATPase activity, the carboxy-terminal segment is not able to bind to the Tim44 protein. As mtHsp70 transmits the nucleotide state of the ATPase domain through alpha-helices A and B, Tim44 interacts with the peptide-binding domain to coordinate protein binding.
TIC/TOC Complex vs. TIM/TOM Complex
This protein complex is functionally analogous to the TIC/TOC complex located on the inner and outer membranes of the chloroplast, in the sense that it transports proteins across the membranes of the mitochondrion. Although they both hydrolyze triphosphates, they are evolutionarily unrelated.
References
External links
TCDB 3.A.8 - description of the entire complex
Overview of the various import ways into mitochondria (group of N. Pfanner)
Transport proteins
Mitochondria
Transmembrane proteins
EC 3.6.3
EC 7.4.2
Enzymes of unknown structure | TIM/TOM complex | [
"Chemistry"
] | 524 | [
"Mitochondria",
"Metabolism"
] |
5,667,589 | https://en.wikipedia.org/wiki/Nanocomposite | Nanocomposite is a multiphase solid material where one of the phases has one, two or three dimensions of less than 100 nanometers (nm) or structures having nano-scale repeat distances between the different phases that make up the material.
In the broadest sense this definition can include porous media, colloids, gels and copolymers, but is more usually taken to mean the solid combination of a bulk matrix and nano-dimensional phase(s) differing in properties due to dissimilarities in structure and chemistry. The mechanical, electrical, thermal, optical, electrochemical, catalytic properties of the nanocomposite will differ markedly from that of the component materials. Size limits for these effects have been proposed:
<5 nm for catalytic activity
<20 nm for making a hard magnetic material soft
<50 nm for refractive index changes
<100 nm for achieving superparamagnetism, mechanical strengthening or restricting matrix dislocation movement
Nanocomposites are found in nature, for example in the structure of the abalone shell and bone. The use of nanoparticle-rich materials long predates the understanding of the physical and chemical nature of these materials. Jose-Yacaman et al. investigated the origin of the depth of colour and the resistance to acids and bio-corrosion of Maya blue paint, attributing it to a nanoparticle mechanism. From the mid-1950s nanoscale organo-clays have been used to control flow of polymer solutions (e.g. as paint viscosifiers) or the constitution of gels (e.g. as a thickening substance in cosmetics, keeping the preparations in homogeneous form). By the 1970s polymer/clay composites were the topic of textbooks, although the term "nanocomposites" was not in common use.
In mechanical terms, nanocomposites differ from conventional composite materials due to the exceptionally high surface to volume ratio of the reinforcing phase and/or its exceptionally high aspect ratio. The reinforcing material can be made up of particles (e.g. minerals), sheets (e.g. exfoliated clay stacks) or fibres (e.g. carbon nanotubes or electrospun fibres). The area of the interface between the matrix and reinforcement phase(s) is typically an order of magnitude greater than for conventional composite materials. The matrix material properties are significantly affected in the vicinity of the reinforcement. Ajayan et al. note that with polymer nanocomposites, properties related to local chemistry, degree of thermoset cure, polymer chain mobility, polymer chain conformation, degree of polymer chain ordering or crystallinity can all vary significantly and continuously from the interface with the reinforcement into the bulk of the matrix.
This large amount of reinforcement surface area means that a relatively small amount of nanoscale reinforcement can have an observable effect on the macroscale properties of the composite. For example, adding carbon nanotubes improves the electrical and thermal conductivity. Other kinds of nanoparticulates may result in enhanced optical properties, dielectric properties, heat resistance or mechanical properties such as stiffness, strength and resistance to wear and damage. In general, the nano reinforcement is dispersed into the matrix during processing. The percentage by weight (called mass fraction) of the nanoparticulates introduced can remain very low (on the order of 0.5% to 5%) due to the low filler percolation threshold, especially for the most commonly used non-spherical, high aspect ratio fillers (e.g. nanometer-thin platelets, such as clays, or nanometer-diameter cylinders, such as carbon nanotubes). The orientation and arrangement of asymmetric nanoparticles, thermal property mismatch at the interface, interface density per unit volume of nanocomposite, and polydispersity of nanoparticles significantly affect the effective thermal conductivity of nanocomposites.
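A rough back-of-the-envelope calculation (illustrative numbers only, not from a specific study) shows why nanoscale fillers provide so much interface area: for monodisperse spherical particles the filler surface area per unit composite volume is 3*phi/r, so reducing the particle radius from 10 um to 10 nm at the same 2% loading increases the interface area a thousandfold.

def interface_area_per_volume(radius_m, volume_fraction):
    """Interface area (m^2) per m^3 of composite for monodisperse spherical fillers."""
    return 3.0 * volume_fraction / radius_m

phi = 0.02                                       # 2 vol% filler
print(interface_area_per_volume(10e-6, phi))     # 10 um particles -> 6.0e3 m^2 per m^3
print(interface_area_per_volume(10e-9, phi))     # 10 nm particles -> 6.0e6 m^2 per m^3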
Ceramic-matrix nanocomposites
Ceramic matrix composites (CMCs) consist of ceramic fibers embedded in a ceramic matrix. The matrix and fibers can consist of any ceramic material, including carbon and carbon fibers. The ceramic occupying most of the volume is often from the group of oxides, such as nitrides, borides, silicides, whereas the second component is often a metal. Ideally both components are finely dispersed in each other in order to elicit particular optical, electrical and magnetic properties as well as tribological, corrosion-resistance and other protective properties.
The binary phase diagram of the mixture should be considered in designing ceramic-metal nanocomposites and measures have to be taken to avoid a chemical reaction between the two components. The last point is mainly of importance for the metallic component, which may easily react with the ceramic and thereby lose its metallic character. This is not an easily obeyed constraint because the preparation of the ceramic component generally requires high process temperatures. The safest measure is thus to carefully choose immiscible metal and ceramic phases. A good example of such a combination is represented by the ceramic-metal composite of TiO2 and Cu, the mixtures of which were found immiscible over large areas in the Gibbs’ triangle of Cu-O-Ti.
The concept of ceramic-matrix nanocomposites was also applied to thin films that are solid layers of a few nm to some tens of μm thickness deposited upon an underlying substrate and that play an important role in the functionalization of technical surfaces. Gas flow sputtering by the hollow cathode technique turned out as a rather effective technique for the preparation of nanocomposite layers. The process operates as a vacuum-based deposition technique and is associated with high deposition rates up to some μm/s and the growth of nanoparticles in the gas phase. Nanocomposite layers in the ceramics range of composition were prepared from TiO2 and Cu by the hollow cathode technique that showed a high mechanical hardness, small coefficients of friction and a high resistance to corrosion.
Metal-matrix nanocomposites
Metal matrix nanocomposites can also be defined as reinforced metal matrix composites. This type of composite can be classified as continuous or non-continuous reinforced material. One of the more important nanocomposites is the carbon nanotube metal matrix composite, an emerging new material that is being developed to take advantage of the high tensile strength and electrical conductivity of carbon nanotube materials. Critical to the realization of CNT-MMCs possessing optimal properties in these areas is the development of synthetic techniques that are (a) economically producible, (b) provide for a homogeneous dispersion of nanotubes in the metallic matrix, and (c) lead to strong interfacial adhesion between the metallic matrix and the carbon nanotubes. In addition to carbon nanotube metal matrix composites, boron nitride reinforced metal matrix composites and carbon nitride metal matrix composites are new research areas in metal matrix nanocomposites.
A recent study, comparing the mechanical properties (Young's modulus, compressive yield strength, flexural modulus and flexural yield strength) of single- and multi-walled reinforced polymeric (polypropylene fumarate—PPF) nanocomposites to tungsten disulfide nanotubes reinforced PPF nanocomposites suggest that tungsten disulfide nanotubes reinforced PPF nanocomposites possess significantly higher mechanical properties and tungsten disulfide nanotubes are better reinforcing agents than carbon nanotubes. Increases in the mechanical properties can be attributed to a uniform dispersion of inorganic nanotubes in the polymer matrix (compared to carbon nanotubes that exist as micron sized aggregates) and increased crosslinking density of the polymer in the presence of tungsten disulfide nanotubes (increase in crosslinking density leads to an increase in the mechanical properties). These results suggest that inorganic nanomaterials, in general, may be better reinforcing agents compared to carbon nanotubes.
Another kind of nanocomposite is the energetic nanocomposite, generally as a hybrid sol–gel with a silica base, which, when combined with metal oxides and nano-scale aluminum powder, can form superthermite materials.
Polymer-matrix nanocomposites
In the simplest case, appropriately adding nanoparticulates to a polymer matrix can enhance its performance, often dramatically, by simply capitalizing on the nature and properties of the nanoscale filler (these materials are better described by the term nanofilled polymer composites). This strategy is particularly effective in yielding high performance composites, when uniform dispersion of the filler is achieved and the properties of the nanoscale filler are substantially different or better than those of the matrix. The uniformity of the dispersion in all nanocomposites is counteracted by thermodynamically driven phase separation. Clustering of nanoscale fillers produces aggregates that serve as structural defects and result in failure. In layer-by-layer (LbL) assembly, nanometer-scale layers of nanoparticulates and polymers are added one by one. LbL composites display performance parameters 10-1000 times better than traditional nanocomposites made by extrusion or batch-mixing.
Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles in the polymer matrix at low concentrations (~0.2 weight %) cause significant improvements in the compressive and flexural mechanical properties of polymeric nanocomposites. Potentially, these nanocomposites may be used as a novel, mechanically strong, light weight composite as bone implants. The results suggest that mechanical reinforcement is dependent on the nanostructure morphology, defects, dispersion of nanomaterials in the polymer matrix, and the cross-linking density of the polymer. In general, two-dimensional nanostructures can reinforce the polymer better than one-dimensional nanostructures, and inorganic nanomaterials are better reinforcing agents than carbon based nanomaterials. In addition to mechanical properties, polymer nanocomposites based on carbon nanotubes or graphene have been used to enhance a wide range of properties, giving rise to functional materials for a wide range of high added value applications in fields such as energy conversion and storage, sensing and biomedical tissue engineering. For example, multi-walled carbon nanotubes based polymer nanocomposites have been used for the enhancement of the electrical conductivity.
An alternative route to synthesis of nanocomposites is sequential infiltration synthesis, in which inorganic nanomaterials are grown within polymeric substrates using vapor-phase precursors that diffuse into the matrix. Furthermore, nanocomposites can be prepared via in situ generation of nanoparticles on and within polymeric materials, an approach that relies on the chemical transformation of suitable precursors to targeted nanoparticles synchronous with the build-up of the nanohybrid systems. The in situ-generated nanoparticles tend to nucleate and grow on the active sites of the macromolecular chains, showing strong adhesion on the polymeric host.
Nanoscale dispersion of filler or controlled nanostructures in the composite can introduce new physical properties and novel behaviors that are absent in the unfilled matrices. This effectively changes the nature of the original matrix (such composite materials can be better described by the term genuine nanocomposites or hybrids). Some examples of such new properties are fire resistance or flame retardancy, and accelerated biodegradability.
A range of polymeric nanocomposites are used for biomedical applications such as tissue engineering, drug delivery and cellular therapies. Due to unique interactions between polymer and nanoparticles, a range of property combinations can be engineered to mimic native tissue structure and properties. A range of natural and synthetic polymers are used to design polymeric nanocomposites for biomedical applications, including starch, cellulose, alginate, chitosan, collagen, gelatin, fibrin, poly(vinyl alcohol) (PVA), poly(ethylene glycol) (PEG), poly(caprolactone) (PCL), poly(lactic-co-glycolic acid) (PLGA), and poly(glycerol sebacate) (PGS). A range of nanoparticles including ceramic, polymeric, metal oxide and carbon-based nanomaterials are incorporated within the polymeric network to obtain desired property combinations.
Magnetic nanocomposites
Nanocomposites that can respond to an external stimulus are of increasing interest because, owing to the large amount of interaction between the phase interfaces, the stimulus response can have a larger effect on the composite as a whole. The external stimulus can take many forms, such as a magnetic, electrical or mechanical field. Magnetic nanocomposites are particularly useful in these applications because magnetic materials can respond to both electrical and magnetic stimuli. The penetration depth of a magnetic field is also high, increasing the volume of the nanocomposite that is affected and therefore its response. In order to respond to a magnetic field, a matrix can easily be loaded with nanoparticles or nanorods. The morphologies available for magnetic nanocomposite materials are varied, including matrix-dispersed nanoparticles, core-shell nanoparticles, colloidal crystals, macroscale spheres and Janus-type nanostructures.
Magnetic nanocomposites can be utilized in a vast number of applications, including catalytic, medical, and technical. For example, palladium is a common transition metal used in catalysis reactions. Magnetic nanoparticle-supported palladium complexes can be used in catalysis to increase the efficiency of the palladium in the reaction.
Magnetic nanocomposites can also be utilized in the medical field, where magnetic nanorods embedded in a polymer matrix can aid more precise drug delivery and release. Finally, magnetic nanocomposites can be used in high-frequency/high-temperature applications. For example, multi-layer structures can be fabricated for use in electronic applications; an electrodeposited Fe/Fe-oxide multi-layered sample is one example of this application of magnetic nanocomposites.
In applications such as power micro-inductors, high magnetic permeability is desired at high operating frequencies. Traditional micro-fabricated magnetic core materials suffer both a decrease in permeability and high losses at high operating frequencies. In this case, magnetic nanocomposites have great potential for improving the efficiency of power electronic devices by providing relatively high permeability and low losses. For example, iron oxide nanoparticles embedded in a Ni matrix make it possible to mitigate those losses at high frequency: the highly resistive iron oxide nanoparticles help to reduce the eddy-current losses, whereas the Ni metal helps in attaining high permeability. DC magnetic properties such as the saturation magnetization lie between those of the constituent parts, indicating that the physical properties of the material can be tuned by creating these nanocomposites.
Heat resistant nanocomposites
In recent years, nanocomposites have been designed to withstand high temperatures through the addition of carbon dots (CDs) to the polymer matrix. Such nanocomposites can be utilized in environments where high-temperature resistance is a prime criterion.
See also
Hybrid materials
Aquamelt
References
Further reading
Nanomaterials
Solid-state chemistry | Nanocomposite | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,336 | [
"Condensed matter physics",
"nan",
"Nanotechnology",
"Nanomaterials",
"Solid-state chemistry"
] |
5,667,758 | https://en.wikipedia.org/wiki/Signal-to-noise%20statistic | In mathematics the signal-to-noise statistic distance between two vectors a and b with mean values $\mu_a$ and $\mu_b$ and standard deviations $\sigma_a$ and $\sigma_b$ respectively is:

$$D_{sn}(a,b) = \frac{\mu_a - \mu_b}{\sigma_a + \sigma_b}$$
In the case of Gaussian-distributed data and unbiased class distributions, this statistic can be related to classification accuracy given an ideal linear discrimination, and a decision boundary can be derived.
This distance is frequently used to identify vectors that show a significant difference. One usage is in bioinformatics, to locate genes that are differentially expressed in microarray experiments.
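To make the definition concrete, here is a minimal Python sketch of the statistic as defined above; the function name, the use of NumPy and the sample-standard-deviation choice (ddof=1) are illustrative assumptions rather than part of the original definition.

```python
import numpy as np

def signal_to_noise_distance(a, b):
    """Signal-to-noise statistic: difference of the means divided by the
    sum of the standard deviations of the two vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a.mean() - b.mean()) / (a.std(ddof=1) + b.std(ddof=1))

# Example: expression values of one gene measured in two sample classes
class_1 = [2.1, 2.4, 1.9, 2.2]
class_2 = [3.5, 3.9, 3.6, 3.8]
print(signal_to_noise_distance(class_2, class_1))  # large magnitude suggests differential expression
```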
See also
Distance
Uniform norm
Manhattan distance
Signal-to-noise ratio
Signal to noise ratio (imaging)
Notes
Statistical distance
Statistical ratios | Signal-to-noise statistic | [
"Physics"
] | 129 | [
"Physical quantities",
"Statistical distance",
"Distance"
] |
5,668,914 | https://en.wikipedia.org/wiki/JERRV | A JERRV (Joint EOD Rapid Response Vehicle) is any vehicle that United States Explosive Ordnance Disposal (EOD) units use in war zones such as Iraq.
EOD application
These vehicles are used to safely transport EOD operators, supplies, and equipment, including remotely controlled robots (TALON and PackBot), bomb suits, and explosives. JERRVs are more resistant to the effects of landmines, improvised explosive devices (IEDs), and small arms than soft armored vehicles like Humvees. The JERRV is designed to deflect blasts. They are in some ways like heavier versions of armored cars.
Development
The JERRV was the natural follow-on to the earlier USMC-directed purchase of some 30 Hardened Engineer Vehicles (HEV). HEV was an urgent UNS program which resulted in an order being placed with Technical Solutions Group (TSG) in Ladson in April 2004. The original HEV requirement document (written on less than a single side of paper) called for some quite specific characteristics which were a major factor in the design of a new vehicle which was called 'Cougar' only to provide a degree of continuity for the user community. The new Cougar was designated Cougar H to differentiate it from the earlier lightweight, non-military vehicle which had been imported from South Africa by TSG.
The designer was a British ex-army officer who had been asked to help out TSG in previous years and who offered to design a new vehicle when the USMC approached the company with their requirement. His own experiences, as well as a desire to distance TSG from its former South African partners, led to a policy from the outset of creating a new vehicle which would address many of the deficiencies of the older designs as well as meet first-world standards of protection, performance and sustainability. The design team was small - as was the USMC purchasing team - and consisted of the designer plus two other engineers and an automotive supply engineer who specified and purchased the running gear from Peterbilt dealer Rush Crane.
At the time, a few of the old South African designers tried to get involved and persuade TSG (led at that time by Mike Watts), to force the designer to abandon many of the new features. Watts' main contribution to the development of the modern mine protected vehicle (and one which should not be overlooked) was perhaps that he resisted all such pressures and kept these people away from the design team. In order to control the public utterances of some of the main critics, consultancies were awarded by TSG which allowed a degree of commercial confidentiality to be imposed.
Major differences from the older designs included a vertical hull side to increase internal volume, a full-length bottom plate to increase strength and to provide blast and ballistic protection for the engine, full US-specification engine, cooling, power-train etc., sufficient payload to provide ballistic-protection upgrades and so on. Ergonomics were based upon first-world standards as were protection levels and automotive specifications (earlier South Africa designs paid scant regard for these, applying their own local standards which caused some problems when operated by NATO countries).
During the design and construction of the first vehicle, considerable use was made of the carpentry team who worked closely with the design team and frequently led the whole process. They constructed many items in plywood and the engineering team measured the mock-ups and drew them in CAD.
At the same time as the main 4x4 version was being developed, the designer pressed the need for a 6x6 version to reduce ground pressures and axle loads; other versions which were designed and mocked up in wood were a flatbed variant (intended to meet the USMC requirement for a lightweight prime mover system capable of pulling a 155mm howitzer and carrying the crew plus ammunition), an ambulance and a command vehicle. The first HEV was delivered to the USMC in Sep/Oct 2004, less than six months after contract award - at which time the design was still a couple of sketches on a pad.
By September 2004, the US Army had shown interest in Cougar and sent its IED/EOD experts to Charleston, to talk to the design team. The designer agreed to modify the vehicle to make better use of in-service equipment and changed the engine to the military version of the CAT C-7 2136 - increasing from 300 hp to 330 hp and making its electrical system 24 volt. Based upon these assurances, the Army decided to combine with the USMC and order the Joint EOD Rapid Response Vehicle.
References
See also
MRAP (armored vehicle)
Military trucks
Bomb disposal
Armored personnel carriers of the United States | JERRV | [
"Chemistry"
] | 955 | [
"Explosion protection",
"Bomb disposal"
] |
5,670,370 | https://en.wikipedia.org/wiki/Specific%20speed | Specific speed Ns, is used to characterize turbomachinery speed. Common commercial and industrial practices use dimensioned versions which are of equal utility. Specific speed is most commonly used in pump applications to define the suction specific speed —a quasi non-dimensional number that categorizes pump impellers as to their type and proportions. In Imperial units it is defined as the speed in revolutions per minute at which a geometrically similar impeller would operate if it were of such a size as to deliver one gallon per minute against one foot of hydraulic head. In metric units flow may be in l/s or m3/s and head in m, and care must be taken to state the units used.
Performance is defined as the ratio of the performance of the pump or turbine to that of a reference pump or turbine: dividing the actual performance figure by that of the reference device provides a unitless figure of merit. The resulting figure would more descriptively be called the "ideal-reference-device-specific performance." This resulting unitless ratio may loosely be expressed as a "speed," only because the performance of the reference ideal pump is linearly dependent on its speed, so that the ratio of [device-performance to reference-device-performance] is also the increased speed at which the reference device would need to operate in order to produce the performance, instead of its reference speed of "1 unit."
Specific speed is an index used to predict desired pump or turbine performance, i.e. it predicts the general shape of a pump's impeller. It is this impeller's "shape" that predicts its flow and head characteristics so that the designer can then select a pump or turbine most appropriate for a particular application. Once the desired specific speed is known, basic dimensions of the unit's components can be easily calculated.
Several mathematical definitions of specific speed (all of them actually ideal-device-specific) have been created for different devices and applications.
Pump specific speed
Low-specific speed radial flow impellers develop hydraulic head principally through centrifugal force. Pumps of higher specific speeds develop head partly by centrifugal force and partly by axial force. An axial flow or propeller pump with a specific speed of 10,000 or greater generates its head exclusively through axial forces. Radial impellers are generally low flow/high head designs whereas axial flow impellers are high flow/low head designs. In theory, the discharge of a "purely" centrifugal machine (pump, turbine, fan, etc.) is tangential to the rotation of the impeller whereas a "purely" axial-flow machine's discharge will be parallel to the axis of rotation. There are also machines that exhibit a combination of both properties and are specifically referred to as "mixed-flow" machines.
Centrifugal pump impellers have specific speed values ranging from 500 to 10,000 (English units), with radial flow pumps at 500 to 4,000, mixed flow at 2,000 to 8,000, and axial flow pumps at 7,000 to 20,000. Values of specific speed less than 500 are associated with positive displacement pumps.
As the specific speed increases, the ratio of the impeller outlet diameter to the inlet or eye diameter decreases. This ratio becomes 1.0 for a true axial flow impeller.
The following equation gives a dimensionless specific speed:

$$N_s = \frac{\omega\,\sqrt{Q}}{(gH)^{3/4}}$$

where:
$N_s$ is specific speed (dimensionless)
$\omega$ is pump rotational speed (rad/sec)
$Q$ is flowrate (m3/s) at the point of best efficiency
$H$ is total head (m) per stage at the point of best efficiency
$g$ is the acceleration due to gravity (m/s2)
Note that the units used affect the specific speed value in the above equation and consistent units should be used for comparisons. Pump specific speed can be calculated using British gallons or using Metric units (m3/s and metres head), changing the values listed above.
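As a worked illustration of the dimensionless form above, the short Python sketch below evaluates it for one duty point; the duty-point figures and the function name are assumptions made for the example.

```python
import math

def pump_specific_speed(omega_rad_s, flow_m3_s, head_m, g=9.81):
    """Dimensionless pump specific speed: Ns = omega * sqrt(Q) / (g*H)**0.75."""
    return omega_rad_s * math.sqrt(flow_m3_s) / (g * head_m) ** 0.75

# Hypothetical duty point: 1450 rpm, 0.05 m3/s, 20 m head
omega = 1450 * 2 * math.pi / 60                  # convert rpm to rad/s
print(round(pump_specific_speed(omega, 0.05, 20.0), 2))
```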
Suction specific speed
The suction specific speed is mainly used to see if there will be problems with cavitation during the pump's operation on the suction side. It is defined by centrifugal and axial pumps' inherent physical characteristics and operating point. The suction specific speed of a pump will define the range of operation in which a pump will experience stable operation. The higher the suction specific speed, then the smaller the range of stable operation, up to the point of cavitation at 8500 (unitless). The envelope of stable operation is defined in terms of the best efficiency point of the pump.
The suction specific speed is defined as:

$$N_{ss} = \frac{N\,\sqrt{Q}}{\mathrm{NPSH_R}^{3/4}}$$

where:
$N_{ss}$ is the suction specific speed
$N$ is the rotational speed of the pump in rpm
$Q$ is the flow of the pump in US gallons per minute
$\mathrm{NPSH_R}$ is the net positive suction head (NPSH) required in feet at the pump's best efficiency point
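A small sketch of the US-units definition above, together with a check against the 8,500 figure cited earlier as the practical upper limit for stable operation; the pump data and helper name are illustrative assumptions.

```python
def suction_specific_speed(rpm, flow_gpm, npshr_ft):
    """Nss = N * sqrt(Q) / NPSHr**0.75, with N in rpm, Q in US gpm, NPSHr in feet."""
    return rpm * flow_gpm ** 0.5 / npshr_ft ** 0.75

nss = suction_specific_speed(rpm=1780, flow_gpm=800, npshr_ft=20)
print(round(nss))
if nss > 8500:
    print("Narrow stable operating range: cavitation risk increases")
else:
    print("Within the commonly accepted stable range")
```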
Turbine specific speed
The specific speed value for a turbine is the speed of a geometrically similar turbine which would produce unit power (one kilowatt) under unit head (one meter). The specific speed of a turbine is given by the manufacturer (along with other ratings) and will always refer to the point of maximum efficiency. This allows accurate calculations to be made of the turbine's performance for a range of heads.
Well-designed efficient machines typically use the following values: Impulse turbines have the lowest ns values, typically ranging from 1 to 10, a Pelton wheel is typically around 4, Francis turbines fall in the range of 10 to 100, while Kaplan turbines are at least 100 or more, all in imperial units.
Deriving the Turbine Specific Speed
To derive the turbine specific speed equation we first start with the power formula for water; since $\eta$, $\rho$ and $g$ are constant they can be removed by using proportionalities:

$$P = \eta \rho g Q H$$

so

$$P \propto Q H$$

let:
$D$ = Diameter of the turbine runner
$B$ = Width of the turbine runner
$N$ = Speed of the turbine (rpm)
$u$ = Tangential velocity of the turbine blade (m/s)
$N_s$ = Specific speed of the turbine
$V$ = Velocity of water at the turbine (m/s)

Now utilising the constant speed ratio at the turbine tip, the tangential velocity of the turbine blade is proportional to the square root of the head:

Speed ratio $= \dfrac{u}{\sqrt{2gH}} = \text{constant}$

so

$$u \propto \sqrt{H}$$

But converting from rotational speed in rpm to linear speed in m/s, $u = \pi D N / 60$, the following proportionality can be made:

$$u \propto D N \quad\Rightarrow\quad D \propto \frac{\sqrt{H}}{N}$$

The flow through a turbine is the product of flow velocity and area, so the flow through a turbine can be quantified:

$$Q = A V$$

with

$$A \propto D B \propto D^2 \quad (\text{since } B \propto D)$$

and as shown previously:

$$V \propto \sqrt{H}$$

So using the above two, the following is obtained:

$$Q \propto D^2 \sqrt{H}$$

By combining the relation for the diameter with that between tangential speed and head, a relationship between flow and head can be reached:

$$Q \propto \frac{H}{N^2}\,\sqrt{H} = \frac{H^{3/2}}{N^2}$$

Substituting this back into the power proportionality gives:

$$P \propto \frac{H^{5/2}}{N^2}$$

To convert this proportionality into an equation a factor of proportionality, say $K$, must be introduced, which gives:

$$P = K\,\frac{H^{5/2}}{N^2}$$

Now, assuming our original proposition of producing 1 kilowatt at 1 m head, our speed $N$ becomes our specific speed $N_s$. Substituting these values into the equation gives:

$$1 = K\,\frac{1^{5/2}}{N_s^2} \quad\Rightarrow\quad K = N_s^2$$

We now have a complete formula for specific speed:

$$P = N_s^2\,\frac{H^{5/2}}{N^2}$$

So rearranging for specific speed gives the final result:

$$N_s = \frac{N\sqrt{P}}{H^{5/4}}$$

where:
$N$ = Wheel speed (rpm)
$P$ = Power (kW)
$H$ = Water head (m)
English units
Expressed in English units, the "specific speed" is defined as $n_s = n\sqrt{P}/h^{5/4}$
where n is the wheel speed in rpm
P is the power in horsepower
h is the water head in feet
Metric units
Expressed in metric units, the "specific speed" is $n_s = 0.2626\, n\sqrt{P}/h^{5/4}$
where n is the wheel speed in rpm
P is the power in kilowatts
h is the water head in meters
The factor 0.2626 is only required when the specific speed is to be adjusted to English units. In countries which use the metric system, the factor is omitted, and quoted specific speeds are correspondingly larger.
Example
Given a flow and head for a specific hydro site, and the RPM requirement of the generator, calculate the specific speed. The result is the main criteria for turbine selection or the starting point for analytical design of a new turbine. Once the desired specific speed is known, basic dimensions of the turbine parts can be easily calculated.
Turbine calculations: once the specific speed is known, empirical relations give the basic dimensions of the turbine parts, such as the runner diameter $D$ (m).
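Following the example described above, the brief Python sketch below computes the metric turbine specific speed from a generator speed, site power and head; the site figures are invented for illustration.

```python
def turbine_specific_speed(rpm, power_kw, head_m):
    """Metric turbine specific speed: ns = N * sqrt(P) / H**1.25 (P in kW, H in m, N in rpm)."""
    return rpm * power_kw ** 0.5 / head_m ** 1.25

# Hypothetical hydro site: 500 kW available at 40 m head, generator requires 750 rpm
ns = turbine_specific_speed(750, 500.0, 40.0)
print(round(ns, 1))  # the value then guides the choice between Pelton, Francis and Kaplan runners
```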
See also
Pump
Net positive suction head
Water turbine
References
Hydraulics
Fluid dynamics
Pumps | Specific speed | [
"Physics",
"Chemistry",
"Engineering"
] | 1,688 | [
"Pumps",
"Turbomachinery",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Piping",
"Fluid dynamics"
] |
5,670,581 | https://en.wikipedia.org/wiki/System%20of%20systems%20engineering | System of systems engineering (SoSE) is a set of developing processes, tools, and methods for designing, re-designing and deploying solutions to system-of-systems challenges.
Overview
System of Systems Engineering (SoSE) methodology is heavily used in U.S. Department of Defense applications, but is increasingly being applied to non-defense problems such as architectural design problems in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration, Industry 4.0 and many other System of Systems application domains. SoSE is more than systems engineering of monolithic, complex systems because design for System-of-Systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations at multiple levels and domains. Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements.
System-of-Systems Engineering and Systems Engineering are related but different fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize the network of various interacting legacy and new systems brought together to satisfy multiple objectives of the program. SoSE should enable the decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time; thus, effective SoSE methodology should prepare decision-makers to design informed architectural solutions for System-of-Systems problems.
Due to varied methodology and domains of applications in existing literature, there does not exist a single unified consensus for processes involved in System-of-Systems Engineering. One of the proposed SoSE frameworks, by Dr. Daniel A. DeLaurentis, recommends a three-phase method where a SoS problem is defined (understood), abstracted, modeled and analyzed for behavioral patterns. More information on this method and other proposed methods can be found in the listed SoSE focused organizations and SoSE literature in the subsequent sections.
See also
Enterprise systems engineering
System of systems
Enterprise architecture
References
Further reading
Kenneth Cureton, F. Stan Settlers, "System-of-Systems Architecting: Educational Findings and Implications," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 2726–2731.
Mo Jamshidi, "System-of-Systems Engineering — A Definition," IEEE SMC 2005, Big Island, Hawaii, URL: http://ieeesmc2005.unm.edu/SoSE_Defn.htm
Saurabh Mittal, Jose L. Risco Martin, "Netcentric System of Systems Engineering with DEVS Unified Process", CRC Press, Boca Raton, Florida, 2013 URL:http://www.crcpress.com/product/isbn/9781439827062
Charles Keating, Ralph Rogers, Resit Unal, David Dryer, et al. "System of Systems Engineering," Engineering Management Journal, Vol. 15, no. 3, pp. 36.
Charles Keating, "Research Foundations for System of Systems Engineering," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 2720–2725.
Jack Ring, Azad Madni, "Key Challenges and Opportunities in 'System of Systems' Engineering," 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, Hawaii, October 10–12, 2005. pp. 973–978.
R.E. Raygan, "Configuration management in a system-of-systems environment delivering IT services," 2007 IEEE International Engineering Management Conference, Austin, Texas, July 29, 2007-Aug, 1 2007. pp. 330 – 335.
D. Luzeaux & JR Ruault, "Systems of Systems", ISTE Ltd and John Wiley & Sons Inc, 2010
D. Luzeaux, JR Ruault & JL Wippler, "Complex System and Systems of Systems Engineering", ISTE Ltd and John Wiley & Sons Inc, 2011
External links
System of Systems Signature Area at Purdue University's College of Engineering (Apr 2015 - content no longer specific to System of Systems)
National Centers for System of Systems Engineering at Old Dominion University (Apr 2015 - content blocked)
Center for Intelligent Networked Systems at Stevens Institute of Technology (Apr 2015 - page timed out, presumed to no longer exist)
System of Systems Engineering Center of Excellence (Apr 2015 - no SOSE content)
Systems engineering
Complex systems theory | System of systems engineering | [
"Engineering"
] | 978 | [
"Systems engineering"
] |
5,670,694 | https://en.wikipedia.org/wiki/Laboratory%20automation | Laboratory automation is a multi-disciplinary strategy to research, develop, optimize and capitalize on technologies in the laboratory that enable new and improved processes. Laboratory automation professionals are academic, commercial and government researchers, scientists and engineers who conduct research and develop new technologies to increase productivity, elevate experimental data quality, reduce lab process cycle times, or enable experimentation that otherwise would be impossible.
The most widely known application of laboratory automation technology is laboratory robotics. More generally, the field of laboratory automation comprises many different automated laboratory instruments, devices (the most common being autosamplers), software algorithms, and methodologies used to enable, expedite and increase the efficiency and effectiveness of scientific research in laboratories.
The application of technology in today's laboratories is required to achieve timely progress and remain competitive. Laboratories devoted to activities such as high-throughput screening, combinatorial chemistry, automated clinical and analytical testing, diagnostics, large-scale biorepositories, and many others, would not exist without advancements in laboratory automation. Some universities offer entire programs that focus on lab technologies. For example, Indiana University-Purdue University at Indianapolis offers a graduate program devoted to Laboratory Informatics. Also, the Keck Graduate Institute in California offers a graduate degree with an emphasis on development of assays, instrumentation and data analysis tools required for clinical diagnostics, high-throughput screening, genotyping, microarray technologies, proteomics, imaging and other applications.
History
At least since 1875 there have been reports of automated devices for scientific investigation. These first devices were mostly built by scientists themselves in order to solve problems in the laboratory. After the second world war, companies started to provide automated equipment with greater and greater complexity.
Automation steadily spread in laboratories through the 20th century, but then a revolution took place: in the early 1980s, the first fully automated laboratory was opened by Dr. Masahide Sasaki. In 1993, Dr. Rod Markin at the University of Nebraska Medical Center created one of the world's first clinical automated laboratory management systems. In the mid-1990s, he chaired a standards group called the Clinical Testing Automation Standards Steering Committee (CTASSC) of the American Association for Clinical Chemistry, which later evolved into an area committee of the Clinical and Laboratory Standards Institute. In 2004, the National Institutes of Health (NIH) and more than 300 nationally recognized leaders in academia, industry, government, and the public completed the NIH Roadmap to accelerate medical discovery to improve health. The NIH Roadmap clearly identifies technology development as a mission critical factor in the Molecular Libraries and Imaging Implementation Group (see the first theme – New Pathways to Discovery – at https://web.archive.org/web/20100611171315/http://nihroadmap.nih.gov/).
Despite the success of Dr. Sasaki's laboratory and others of its kind, the multi-million dollar cost of such laboratories has prevented adoption by smaller groups. This is made all the more difficult because devices made by different manufacturers often cannot communicate with each other. However, recent advances based on the use of scripting languages like AutoIt have made possible the integration of equipment from different manufacturers. Using this approach, many low-cost electronic devices, including open-source devices, become compatible with common laboratory instruments.
Some startups such as Emerald Cloud Lab and Strateos provide on-demand and remote laboratory access on a commercial scale. A 2017 study indicates that these commercial-scale, fully integrated automated laboratories can improve reproducibility and transparency in basic biomedical experiments, and that over nine in ten biomedical papers use methods currently available through these groups.
Low-cost laboratory automation
A large obstacle to the implementation of automation in laboratories has been its high cost. Many laboratory instruments are very expensive. This is justifiable in many cases, as such equipment can perform very specific tasks employing cutting-edge technology. However, there are devices employed in the laboratory that are not highly technological but still are very expensive. This is the case of many automated devices, which perform tasks that could easily be done by simple and low-cost devices like simple robotic arms, universal (open-source) electronic modules, Lego Mindstorms, or 3D printers.
So far, using such low-cost devices together with laboratory equipment has been considered very difficult. However, it has been demonstrated that such low-cost devices can substitute for the standard machines used in the laboratory without problems. It can be anticipated that more laboratories will take advantage of this new reality, as low-cost automation is very attractive for laboratories.
A technology that enables the integration of any machine regardless of their brand is scripting, more specifically, scripting involving the control of mouse clicks and keyboard entries, like AutoIt. By timing clicks and keyboard inputs, different software interfaces controlling different devices can be perfectly synchronized.
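As an illustration of this scripting approach, the sketch below drives a graphical instrument-control program with the Python pyautogui package (an analogue of the AutoIt technique described above). The screen coordinates, text, timing and the software being driven are invented for the example; a real laboratory setup would need its own coordinates and safety checks.

```python
import time
import pyautogui

pyautogui.PAUSE = 0.5                     # short pause after every call to keep devices in step

pyautogui.click(210, 480)                 # press the instrument's "Start run" button
time.sleep(30)                            # wait for the measurement to finish
pyautogui.click(640, 120)                 # switch focus to the data-logging program
pyautogui.typewrite("sample_042.csv")     # type an output file name
pyautogui.hotkey("ctrl", "s")             # save, synchronised with the instrument cycle
```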
References
Further reading
Laboratory techniques
Laboratory equipment
Robotics
Robotics engineering | Laboratory automation | [
"Chemistry",
"Technology",
"Engineering"
] | 1,001 | [
"Computer engineering",
"Robotics engineering",
"Automation",
"Robotics",
"nan",
"Laboratory automation"
] |
5,671,185 | https://en.wikipedia.org/wiki/National%20School%20of%20Glass | The National School of Glass in Orrefors (Swedish: Riksglasskolan) is an educational center focused on glass arts, design and entrepreneurship in the field of glass. It was located next to the Orrefors glassworks in the locality of the same name, at the center of what is known as the Kingdom of Crystal in Småland in southern Sweden. The glassworks in Orrefors closed in 2012. The school then moved to Pukeberg in Nybro, which has become one of the main remaining glassworks centres in Sweden.
While primarily focused on Swedish and Nordic students, the school also welcomes international students to its programs.
History
Around 1960, Orrefors glassworks started a formalized glass education. The glass-related, practical parts were taught in the factory by special personnel while the theoretical subjects were taught one day a week at Nybro Vocational school.
In 1969, the municipality of Nybro took over all responsibility for the school of glass. Until 1979, the school of glass was housed in the buildings of the Orrefors glassworks. In the fall of 1979, the municipality of Nybro inaugurated the new premises of the National School of Glass near the Orrefors Glassworks.
References
External links
The National School of Glass in Orrefors
Orrefors Glass Student's Home
Glassmaking schools
Schools in Sweden
Design schools | National School of Glass | [
"Materials_science",
"Engineering"
] | 278 | [
"Glass engineering and science",
"Glassmaking schools"
] |
5,672,534 | https://en.wikipedia.org/wiki/Turboexpander | A turboexpander, also referred to as a turbo-expander or an expansion turbine, is a centrifugal or axial-flow turbine, through which a high-pressure gas is expanded to produce work that is often used to drive a compressor or generator.
Because work is extracted from the expanding high-pressure gas, the expansion is approximated by an isentropic process (i.e., a constant-entropy process), and the low-pressure exhaust gas from the turbine is at a very low temperature, −150 °C or less, depending upon the operating pressure and gas properties. Partial liquefaction of the expanded gas is not uncommon.
Turboexpanders are widely used as sources of refrigeration in industrial processes such as the extraction of ethane and natural-gas liquids (NGLs) from natural gas, the liquefaction of gases (such as oxygen, nitrogen, helium, argon and krypton) and other low-temperature processes.
Turboexpanders currently in operation range in size from about 750 W to about 7.5 MW (1 hp to about 10,000 hp).
Applications
Although turboexpanders are commonly used in low-temperature processes, they are used in many other applications. This section discusses one of the low-temperature processes, as well as some of the other applications.
Extracting hydrocarbon liquids from natural gas
Raw natural gas consists primarily of methane (CH4), the shortest and lightest hydrocarbon molecule, along with various amounts of heavier hydrocarbon gases such as ethane (C2H6), propane (C3H8), normal butane (n-C4H10), isobutane (i-C4H10), pentanes and even higher-molecular-mass hydrocarbons. The raw gas also contains various amounts of acid gases such as carbon dioxide (CO2), hydrogen sulfide (H2S) and mercaptans such as methanethiol (CH3SH) and ethanethiol (C2H5SH).
When processed into finished by-products (see Natural-gas processing), these heavier hydrocarbons are collectively referred to as NGL (natural-gas liquids). The extraction of the NGL often involves a turboexpander and a low-temperature distillation column (called a demethanizer) as shown in the figure. The inlet gas to the demethanizer is first cooled to about −51 °C in a heat exchanger (referred to as a cold box), which partially condenses the inlet gas. The resultant gas–liquid mixture is then separated into a gas stream and a liquid stream.
The liquid stream from the gas–liquid separator flows through a valve and undergoes a throttling expansion from an absolute pressure of 62 bar to 21 bar (6.2 to 2.1 MPa), which is an isenthalpic process (i.e., a constant-enthalpy process) that results in lowering the temperature of the stream from about −51 °C to about −81 °C as the stream enters the demethanizer.
The gas stream from the gas–liquid separator enters the turboexpander, where it undergoes an isentropic expansion from an absolute pressure of 62 bar to 21 bar (6.2 to 2.1 MPa) that lowers the gas stream temperature from about −51 °C to about −91 °C as it enters the demethanizer to serve as distillation reflux.
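The difference between the two expansions can be reproduced approximately with a fluid-property library. The sketch below uses the third-party CoolProp package and treats the gas as pure methane, which is only an approximation of the plant stream, so the computed temperatures will differ somewhat from the figures quoted above.

```python
from CoolProp.CoolProp import PropsSI  # third-party property library, used here for illustration

P1, T1 = 62e5, 273.15 - 51            # inlet: 62 bar absolute, -51 degC
P2 = 21e5                             # outlet: 21 bar absolute
h1 = PropsSI('H', 'P', P1, 'T', T1, 'Methane')
s1 = PropsSI('S', 'P', P1, 'T', T1, 'Methane')

T_valve = PropsSI('T', 'P', P2, 'H', h1, 'Methane')      # isenthalpic (Joule-Thomson) throttling
T_expander = PropsSI('T', 'P', P2, 'S', s1, 'Methane')   # ideal isentropic turboexpander

print(f"Throttling valve outlet: {T_valve - 273.15:.0f} degC")
print(f"Turboexpander outlet:    {T_expander - 273.15:.0f} degC (colder, and shaft work is recovered)")
```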
Liquid from the top tray of the demethanizer (at about −90 °C) is routed through the cold box, where it is warmed to about 0 °C as it cools the inlet gas, and is then returned to the lower section of the demethanizer. Another liquid stream from the lower section of the demethanizer (at about 2 °C) is routed through the cold box and returned to the demethanizer at about 12 °C. In effect, the inlet gas provides the heat required to "reboil" the bottom of the demethanizer, and the turboexpander removes the heat required to provide reflux in the top of the demethanizer.
The overhead gas product from the demethanizer at about −90 °C is processed natural gas that is of suitable quality for distribution to end-use consumers by pipeline. It is routed through the cold box, where it is warmed as it cools the inlet gas. It is then compressed in the gas compressor driven by the turboexpander and further compressed in a second-stage gas compressor driven by an electric motor before entering the distribution pipeline.
The bottom product from the demethanizer is also warmed in the cold box, as it cools the inlet gas, before it leaves the system as NGL.
The operating conditions of an offshore gas conditioning turbo-expander/recompressor are as follows:
Power generation
The figure depicts an electric power-generation system that uses a heat source, a cooling medium (air, water or other), a circulating working fluid and a turboexpander. The system can accommodate a wide variety of heat sources such as:
geothermal hot water,
exhaust gas from internal combustion engines burning a variety of fuels (natural gas, landfill gas, diesel oil, or fuel oil),
a variety of waste heat sources (in the form of either gas or liquid).
The circulating working fluid (usually an organic compound such as R-134a) is pumped to a high pressure and then vaporized in the evaporator by heat exchange with the available heat source. The resulting high-pressure vapor flows to the turboexpander, where it undergoes an isentropic expansion and exits as a vapor–liquid mixture, which is then condensed into a liquid by heat exchange with the available cooling medium. The condensed liquid is pumped back to the evaporator to complete the cycle.
The system in the figure implements a Rankine cycle as it is used in fossil-fuel power plants, where water is the working fluid and the heat source is derived from the combustion of natural gas, fuel oil or coal used to generate high-pressure steam. The high-pressure steam then undergoes an isentropic expansion in a conventional steam turbine. The steam turbine exhaust steam is next condensed into liquid water, which is then pumped back to steam generator to complete the cycle.
When an organic working fluid such as R-134a is used in the Rankine cycle, the cycle is sometimes referred to as an organic Rankine cycle (ORC).
Refrigeration system
A refrigeration system utilizes a compressor, a turboexpander and an electric motor.
Depending on the operating conditions, the turboexpander reduces the load on the electric motor by 6–15% compared to a conventional vapor-compression refrigeration system that uses a throttling expansion valve rather than a turboexpander. Basically, this can be seen as a form of turbo compounding.
The system employs a high-pressure refrigerant (i.e., one with a low normal boiling point) such as:
chlorodifluoromethane (CHClF2) known as R-22, with a normal boiling point of −47 °C;
1,1,1,2-tetrafluoroethane (C2H2F4) known as R-134a, with a normal boiling point of −26 °C.
As shown in the figure, refrigerant vapor is compressed to a higher pressure, resulting in a higher temperature as well. The hot, compressed vapor is then condensed into a liquid. The condenser is where heat is expelled from the circulating refrigerant and is carried away by whatever cooling medium is used in the condenser (air, water, etc.).
The refrigerant liquid flows through the turboexpander, where it is vaporized, and the vapor undergoes an isentropic expansion, which results in a low-temperature mixture of vapor and liquid. The vapor–liquid mixture is then routed through the evaporator, where it is vaporized by heat absorbed from the space being cooled. The vaporized refrigerant flows to the compressor inlet to complete the cycle.
In the case where the working fluid remains gaseous into the heat exchangers without undergoing phase changes, this cycle is also referred to as reverse Brayton cycle or "refrigerating Brayton cycle".
Power recovery in fluid catalytic cracker
The combustion flue gas from the catalyst regenerator of a fluid catalytic cracker is at a temperature of about 715 °C and at a pressure of about 2.4 barg (240 kPa gauge). Its gaseous components are mostly carbon monoxide (CO), carbon dioxide (CO2) and nitrogen (N2). Although the flue gas has been through two stages of cyclones (located within the regenerator) to remove entrained catalyst fines, it still contains some residual catalyst fines.
The figure depicts how power is recovered and utilized by routing the regenerator flue gas through a turboexpander. After the flue gas exits the regenerator, it is routed through a secondary catalyst separator containing swirl tubes designed to remove 70–90% of the residual catalyst fines. This is required to prevent erosion damage to the turboexpander.
As shown in the figure, expansion of the flue gas through a turboexpander provides sufficient power to drive the regenerator's combustion air compressor. The electrical motor-generator in the power-recovery system can consume or produce electrical power. If the expansion of the flue gas does not provide enough power to drive the air compressor, the electric motor-generator provides the needed additional power. If the flue gas expansion provides more power than needed to drive the air compressor, then the electric motor-generator converts the excess power into electric power and exports it to the refinery's electrical system. The steam turbine is used to drive the regenerator's combustion air compressor during start-ups of the fluid catalytic cracker until there is sufficient combustion flue gas to take over that task.
The expanded flue gas is then routed through a steam-generating boiler (referred to as a CO boiler), where the carbon monoxide in the flue gas is burned as fuel to provide steam for use in the refinery.
The flue gas from the CO boiler is processed through an electrostatic precipitator (ESP) to remove residual particulate matter. The ESP removes particulates in the size range of 2 to 20 micrometers from the flue gas.
History
The possible use of an expansion machine for isentropically creating low temperatures was suggested by Carl Wilhelm Siemens (Siemens cycle), a German engineer in 1857. About three decades later, in 1885, Ernest Solvay of Belgium attempted to use a reciprocating expander machine, but could not attain any temperatures lower than −98 °C because of problems with lubrication of the machine at such temperatures.
In 1902, Georges Claude, a French engineer, successfully used a reciprocating expansion machine to liquefy air. He used a degreased, burnt leather packing as a piston seal without any lubrication. With an air pressure of only 40 bar (4 MPa), Claude achieved an almost isentropic expansion resulting in a lower temperature than had before been possible.
The first turboexpanders seem to have been designed in about 1934 or 1935 by Guido Zerkowitz, an Italian engineer working for the German firm of Linde AG.
In 1939, the Russian physicist Pyotr Kapitsa perfected the design of centrifugal turboexpanders. His first practical prototype was made of Monel metal, had an outside diameter of only 8 cm (3.1 in), operated at 40,000 revolutions per minute and expanded 1,000 cubic metres of air per hour. It used a water pump as a brake and had an efficiency of 79–83%. Most turboexpanders in industrial use since then have been based on Kapitsa's design, and centrifugal turboexpanders have taken over almost 100% of the industrial gas liquefaction and low-temperature process requirements. The availability of liquid oxygen revolutionized the production of steel using the basic oxygen steelmaking process.
In 1978, Pyotr Kapitsa was awarded a Nobel physics prize for his body of work in the area of low-temperature physics.
In 1983, San Diego Gas and Electric was among the first to install a turboexpander in a natural-gas letdown station for energy recovery.
Types
Turboexpanders can be classified by loading device or bearings.
Three main loading devices used in turboexpanders are centrifugal compressors, electrical generators or hydraulic brakes. With centrifugal compressors and electrical generators the shaft power from the turboexpander is recouped either to recompress the process gas or to generate electrical energy, lowering utility bills.
Hydraulic brakes are used when the turboexpander is very small and harvesting the shaft power is not economically justifiable.
Bearings used are either oil bearings or magnetic bearings.
See also
Air separation
Dry gas seal
Flash evaporation
Gas compressor
Joule-Thomson effect
Liquefaction of gases
Rankine cycle
Steam turbine
Vapor-compression refrigeration
Hydrogen turboexpander-generator
References
External links
Use of Expansion Turbines in Natural Gas Pressure Reduction Stations
Turbo Lab’s Turbomachinery & Pump Symposia
Mechanical engineering
Turbines
Industrial gases
Gas technologies
Turbo generators | Turboexpander | [
"Physics",
"Chemistry",
"Engineering"
] | 2,820 | [
"Applied and interdisciplinary physics",
"Turbomachinery",
"Turbines",
"Industrial gases",
"Mechanical engineering",
"Chemical process engineering"
] |
1,612,114 | https://en.wikipedia.org/wiki/Reinhold%20Baer | Reinhold Baer (22 July 1902 – 22 October 1979) was a German mathematician, known for his work in algebra. He introduced injective modules in 1940. He is the eponym of Baer rings, Baer groups, and Baer subplanes.
Biography
Baer studied mechanical engineering for a year at Leibniz University Hannover. He then went to study philosophy at Freiburg in 1921. While he was at Göttingen in 1922 he was influenced by Emmy Noether and Hellmuth Kneser. In 1924 he won a scholarship for specially gifted students. Baer wrote up his doctoral dissertation and it was published in Crelle's Journal in 1927.
Baer accepted a post at Halle in 1928. There, he published Ernst Steinitz's "Algebraische Theorie der Körper" with Helmut Hasse, first published in Crelle's Journal in 1910.
While Baer was with his wife in Austria, Adolf Hitler and the Nazis came into power. Both of Baer's parents were Jewish, and he was for this reason informed that his services at Halle were no longer required. Louis Mordell invited him to go to Manchester and Baer accepted.
Baer stayed at Princeton University and was a visiting scholar at the nearby Institute for Advanced Study from 1935 to 1937. For a short while he lived in North Carolina. From 1938 to 1956 he worked at the University of Illinois at Urbana-Champaign. He returned to Germany in 1956.
According to biographer K. W. Gruenberg,
The rapid development of lattice theory in the mid-thirties suggested that projective geometry should be viewed as a special kind of lattice, the lattice of all subspaces of a vector space... [Linear Algebra and Projective Geometry (1952)] is an account of the representation of vector spaces over division rings, of projectivities by semi-linear transformations and of dualities by semi-bilinear forms.
He died of heart failure on 22 October in 1979.
In 2016 the Reinhold Baer Prize for the best Ph.D. thesis in group theory was set up in his honour.
Bibliography
1934: "Erweiterung von Gruppen und ihren Isomorphismen", Mathematische Zeitschrift 38(1): 375–416 (German)
1940: "Nilpotent groups and their generalizations", Transactions of the American Mathematical Society 47: 393–434
1944: "The higher commutator subgroups of a group", Bulletin of the American Mathematical Society 50: 143–160
1945: "Representations of groups as quotient groups. II. Minimal central chains of a group", Transactions of the American Mathematical Society 58: 348–389
1945: "Representations of groups as quotient groups. III. Invariants of classes of related representations", Transactions of the American Mathematical Society 58: 390–419
See also
Capable group
Dedekind group
Retract (group theory)
Radical of a ring
Semiprime ring
Nielsen-Schreier theorem
References
O. H. Kegel (1979) "Reinhold Baer (1902 — 1979)", Mathematical Intelligencer 2:181,2.
External links
K.W. Gruenberg & Derek Robinson (2003) The Mathematical Legacy of Reinhold Baer, Illinois Journal of Mathematics'' 47(1-2) from Project Euclid.
Author profile in the database zbMATH
Baer Family's Schedule of 1940 US Census.
Reproduction of a talk given by Baer on his last lecture in 1967, before his retirement from the University of Frankfurt - here is a translation.
1902 births
1979 deaths
Scientists from Berlin
Jewish emigrants from Nazi Germany to the United States
20th-century German mathematicians
Algebraists
University of Freiburg alumni
University of Göttingen alumni
Academic staff of the Martin Luther University of Halle-Wittenberg
Princeton University faculty
Institute for Advanced Study visiting scholars
University of Illinois Urbana-Champaign faculty
Academic staff of Goethe University Frankfurt | Reinhold Baer | [
"Mathematics"
] | 810 | [
"Algebra",
"Algebraists"
] |
1,612,567 | https://en.wikipedia.org/wiki/Modal%20testing | Modal testing is the form of vibration testing of an object whereby the natural (modal) frequencies, modal masses, modal damping ratios and mode shapes of the object under test are determined.
A modal test consists of an acquisition phase and an analysis phase. The complete process is often referred to as a Modal Analysis or Experimental Modal Analysis.
There are several ways to do modal testing but impact hammer testing and shaker (vibration tester) testing are commonplace. In both cases energy is supplied to the system with a known frequency content. Where structural resonances occur there will be an amplification of the response, clearly seen in the response spectra. Using the response spectra and force spectra, a transfer function can be obtained. The transfer function (or frequency response function (FRF)) is often curve fitted to estimate the modal parameters; however, there are many methods of modal parameter estimation and it is the topic of much research.
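The sketch below illustrates one common way of forming the FRF, the H1 estimator, on synthetic data: a random force is passed through a simulated single-degree-of-freedom structure and the FRF is estimated from the force auto-spectrum and the force-response cross-spectrum. The sampling rate, natural frequency and damping are invented for the example, and the H1 estimator is only one of the many estimation methods mentioned above.

```python
import numpy as np
from scipy.signal import welch, csd, lti

fs = 2048                                    # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)
force = np.random.default_rng(0).standard_normal(t.size)    # broadband random excitation

# Simulated structure: one mode at 60 Hz with 2 % damping (illustrative values)
wn, zeta = 2 * np.pi * 60, 0.02
structure = lti([1.0], [1.0, 2 * zeta * wn, wn ** 2])       # displacement-per-force transfer function
_, response, _ = structure.output(force, t)

f, S_ff = welch(force, fs=fs, nperseg=1024)                  # force auto-spectrum
_, S_fx = csd(force, response, fs=fs, nperseg=1024)          # force/response cross-spectrum
H1 = S_fx / S_ff                                             # H1 estimate of the FRF
print(f"Estimated natural frequency: {f[np.argmax(np.abs(H1))]:.1f} Hz")
```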
Impact Hammer Modal Testing
An ideal impact to a structure is a perfect impulse, which has an infinitely small duration, causing a constant amplitude in the frequency domain; this would result in all modes of vibration being excited with equal energy. The impact hammer test is designed to replicate this; however, in reality a hammer strike cannot last for an infinitely small duration, but has a known contact time. The duration of the contact time directly influences the frequency content of the force, with a longer contact time producing a narrower excitation bandwidth. A load cell is attached to the end of the hammer to record the force. Impact hammer testing is ideal for small lightweight structures. However, as the size of the structure increases, issues can occur due to a poor signal-to-noise ratio, which is common on large civil engineering structures.
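To illustrate the relationship between contact time and excitation bandwidth, the sketch below compares the spectra of two half-sine force pulses of different durations; the pulse durations and sampling rate are arbitrary illustrative values.

```python
import numpy as np

fs = 51200                                   # sampling rate in Hz (assumed)

def half_sine_spectrum(contact_time_s):
    """Return frequencies and normalised magnitude spectrum of a half-sine force pulse."""
    n = int(fs * contact_time_s)
    pulse = np.sin(np.pi * np.arange(n) / n)          # half-sine hammer pulse
    pulse = np.pad(pulse, (0, 8192 - n))              # zero-pad to a common record length
    spec = np.abs(np.fft.rfft(pulse))
    return np.fft.rfftfreq(pulse.size, 1 / fs), spec / spec[0]

for tc in (0.5e-3, 2e-3):                             # short (hard tip) vs long (soft tip) contact time
    f, mag = half_sine_spectrum(tc)
    cutoff = f[np.argmax(mag < 0.1)]                  # frequency where the force has dropped to 10 %
    print(f"contact time {tc*1e3:.1f} ms -> usable bandwidth roughly up to {cutoff:.0f} Hz")
```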
Shaker Modal Testing
A shaker is a device that excites the object or structure according to its amplified input signal. Several input signals are available for modal testing, but the sine sweep and random frequency vibration profiles are by far the most commonly used signals.
Small objects or structures can be attached directly to the shaker table. With some types of shakers, an armature is often attached to the body to be tested by way of piano wire (pulling force) or stinger (pushing force). When the signal is transmitted through the piano wire or the stinger, the object responds in the same way as in impact testing, attenuating some frequencies and amplifying others. These frequencies are measured as modal frequencies. Usually a load cell is placed between the shaker and the structure to obtain the excitation force.
For large civil engineering structures much larger shakers are used, which can have a mass of 100 kg and above, and are able to apply a force of many hundreds of newtons. Several types of shakers are common: rotating mass shakers, electrodynamic shakers, and electrohydraulic shakers. For rotating mass shakers, the force can be calculated by knowing the mass and the speed of rotation, while for electrodynamic shakers, the force can be obtained through a load cell or an accelerometer placed on the moving mass of the shaker. Shakers have an advantage over the impact hammer as they can supply more energy to a structure over a longer period of time. However, problems can also be introduced; shakers can influence the dynamic properties of the structure and can also increase the complexity of analysis due to windowing errors.
See also
Modal Analysis
Vibration
Cushioning
Shock absorber
Shock (mechanics)
Shock response spectrum
Shaker (testing device)
References
Wave mechanics
Tests | Modal testing | [
"Physics"
] | 745 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
1,612,722 | https://en.wikipedia.org/wiki/Index%20of%20genetics%20articles | Genetics (from Ancient Greek γενετικός (genetikos), “genitive”, and that from γένεσις (genesis), “origin”), a discipline of biology, is the science of heredity and variation in living organisms.
Articles (arranged alphabetically) related to genetics include:
#
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
References
See also
List of genetics research organizations
List of geneticists & biochemists
Articles
Genetics-related topics
Biotechnology | Index of genetics articles | [
"Biology"
] | 98 | [
"nan",
"Biotechnology"
] |
1,614,044 | https://en.wikipedia.org/wiki/Pennington%20clamp | A Pennington clamp, also known as a Duval clamp, is a surgical clamp with a triangular eyelet, used for grasping tissue, particularly during intestinal and rectal operations. It is also used in some OB/GYN procedures, particularly caesarean section. Under the name 'Duval clamp' they are occasionally used, much like a Foerster clamp, to atraumatically grasp lung tissue. The clamp is named after David Geoffrey Pennington, an Australian surgeon who is a pioneer of microsurgery.
Non-medical uses
It is commonly used in body piercing to hold the skin in place, and guide the needle through it.
Variants
In addition to the shape of the gripping head, a distinction is also made between forceps with open and closed jaws. Open forceps can be removed directly after the puncture without having to first shorten the cannula or piercing needle with scissors, but they offer a slightly less secure hold than closed forceps. The clamps usually have small hooks on the handle side that interlock when closed and can therefore hold the clamp shut.
Pennington clamp
A Pennington clamp, also known as a Duval clamp, has a gripping head in the shape of a triangle with a straight end. The clamp can thus be placed flat against the body part to be pierced, making it particularly suitable for surface piercings; it is also frequently used to grip free-standing body parts, for example when performing a lobe piercing, a lip frenulum piercing or various intimate piercings.
See also
Foerster clamp
Instruments used in general surgery
References
Medical clamps
Body piercing
Surgical instruments
Body piercing process
Medical devices
Australian inventions
Medical equipment stubs
Medical equipment | Pennington clamp | [
"Biology"
] | 351 | [
"Medical devices",
"Medical equipment",
"Medical technology"
] |
1,614,121 | https://en.wikipedia.org/wiki/Barotropic%20fluid | In fluid dynamics, a barotropic fluid is a fluid whose density is a function of pressure only. The barotropic fluid is a useful model of fluid behavior in a wide variety of scientific fields, from meteorology to astrophysics.
The density of most liquids is nearly constant (isopycnic), so it can be stated that their densities vary only weakly with pressure and temperature. Water, whose density varies by only a few percent with temperature and salinity, may be approximated as barotropic. In general, air is not barotropic, as its density is a function of both temperature and pressure; but, under certain circumstances, the barotropic assumption can be useful.
In astrophysics, barotropic fluids are important in the study of stellar interiors or of the interstellar medium. One common class of barotropic model used in astrophysics is a polytropic fluid. Typically, the barotropic assumption is not very realistic.
In meteorology, a barotropic atmosphere is one for which the density of the air depends only on pressure; as a result, isobaric surfaces (constant-pressure surfaces) are also constant-density surfaces. Such isobaric surfaces will also be isothermal surfaces, hence (from the thermal wind equation) the geostrophic wind will not vary with height. Hence, the motions of a rotating barotropic air mass are strongly constrained. The tropics are more nearly barotropic than mid-latitudes because temperature is more nearly horizontally uniform in the tropics.
A barotropic flow is a generalization of a barotropic atmosphere. It is a flow in which the pressure is a function of the density only and vice versa. In other words, it is a flow in which isobaric surfaces are isopycnic surfaces and vice versa. One may have a barotropic flow of a non-barotropic fluid, but a barotropic fluid will always follow a barotropic flow. Examples include barotropic layers of the oceans, an isothermal ideal gas or an isentropic ideal gas.
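As a concrete illustration of the last two examples, the barotropic pressure-density relations for an ideal gas can be written out explicitly; these are standard textbook forms given here for illustration rather than taken from the article itself.

```latex
% Barotropic relations p = p(rho) for an ideal gas
\begin{align}
  \text{isothermal:} \quad & p = \rho R T_0, && T_0 = \text{const},\\
  \text{isentropic:} \quad & p = K \rho^{\gamma}, && K = \text{const},\ \gamma = c_p/c_v .
\end{align}
```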
A fluid which is not barotropic is baroclinic, i. e., pressure is not the only factor to determine density. For a barotropic fluid or a barotropic flow (such as a barotropic atmosphere), the baroclinic vector is zero.
See also
Atmospheric dynamics
References
James R Holton, An introduction to dynamic meteorology, , 3rd edition, p77.
Marcel Lesieur, "Turbulence in Fluids: Stochastic and Numerical Modeling", , 2e.
David Tritton, "Physical Fluid Dynamics", .
Fluid dynamics
Atmospheric dynamics | Barotropic fluid | [
"Chemistry",
"Engineering"
] | 574 | [
"Piping",
"Chemical engineering",
"Atmospheric dynamics",
"Fluid dynamics"
] |
1,614,482 | https://en.wikipedia.org/wiki/Metropolitan-Vickers | Metropolitan-Vickers, Metrovick, or Metrovicks, was a British heavy electrical engineering company of the early-to-mid 20th century formerly known as British Westinghouse. Highly diversified, it was particularly well known for its industrial electrical equipment such as generators, steam turbines, switchgear, transformers, electronics and railway traction equipment. Metrovick holds a place in history as the builders of the first commercial transistor computer, the Metrovick 950, and the first British axial-flow jet engine, the Metropolitan-Vickers F.2. Its factory in Trafford Park, Manchester, was for most of the 20th century one of the biggest and most important heavy engineering facilities in Britain and the world.
History
Metrovick started as a way to separate the existing British Westinghouse Electrical and Manufacturing Company factories from United States control, which had proven to be a hindrance to gaining government contracts during the First World War. In 1917 a holding company was formed to try to find financing to buy the company's properties.
In May 1917, control of the holding company was obtained jointly by the Metropolitan Carriage, Wagon and Finance Company, of Birmingham, chaired by Dudley Docker, and Vickers Limited, of Barrow-in-Furness (Gillham 1988, Chapter 2: The Manufacturers). On 15 March 1919, Docker agreed terms with Vickers, for Vickers to purchase all the shares of the Metropolitan Carriage, Wagon and Finance Company for almost £13 million. On 8 September 1919, Vickers changed the name of the British Westinghouse Electrical and Manufacturing Company to Metropolitan Vickers Electrical Company.
The immediate post-war era was marked by low investment and continued labour unrest. Fortunes changed in 1926 with the formation of the Central Electricity Board which standardised electrical supply and led to a massive expansion of electrical distribution, installations, and appliance purchases. Sales shot up, and 1927 marked the company's best year to date.
On 15 November 1922 the BBC was registered and the BBC's Manchester station, 2ZY, was officially opened on 375 metres transmitting from the Metropolitan Vickers Electricity works in Old Trafford.
In 1921, they bought a site at Attercliffe Common in Sheffield, which was used to manufacture traction motors. By 1923, it had its own engineering department, and was making complete locomotives and electric delivery vehicles.
BTH merger and transition to AEI
In 1928 Metrovick merged with the rival British Thomson-Houston (BTH), a company of similar size and product lineup. Combined, they would be one of the few companies able to compete with Marconi or English Electric on an equal footing. In fact the merger was marked by poor communication and intense rivalry, and the two companies generally worked at cross purposes.
The next year the combined company was purchased by the Associated Electrical Industries (AEI) holding group, who also owned Edison Swan (Ediswan); and Ferguson, Pailin & Co, manufacturers of electrical switchgear in Openshaw, Manchester. The rivalry between Metrovick and BTH continued, and AEI was never able to exert effective control over the two competing subsidiary companies.
Problems worsened in 1929 with the start of the Great Depression, but Metrovick's overseas sales were able to pick up some of the slack, notably a major railway electrification project in Brazil. By 1933 world trade was growing again, but growth was nearly upset when six Metrovick engineers were arrested and found guilty of espionage and "wrecking" in Moscow after a number of turbines built by the company in and for the Soviet Union proved to be faulty. The British government intervened; the engineers were released and trade with Russia was resumed after a brief embargo.
During the 1930s Metropolitan Vickers produced two dozen very large diameter (3m/10 ft) three-phase AC traction motors for the Hungarian railway's V40 and V60 electric locomotives. The 1640 kW rated power machinery, designed by Kálmán Kandó, was paid for by British government economic aid.
In 1935 the company built a 105 MW steam turbogenerator, the largest in Europe at that time, for the Battersea Power Station.
In 1936 Metrovick started work with the Air Ministry on automatic pilot systems, eventually branching out to gunlaying systems and building radars the next year. In 1938 they reached an agreement with the Ministry to build a turboprop design developed at the Royal Aircraft Establishment (RAE) under the direction of Hayne Constant. It is somewhat ironic that BTH, its erstwhile partners, were at the same time working with Frank Whittle on his pioneering jet designs.
Wartime aircraft production
In mid-1938, MV was awarded a contract to build Avro Manchester twin-engined heavy bombers under licence from A.V. Roe. As this type of work was very different from its traditional heavy engineering activities, a new factory was built on the western side of Mosley Road and this was completed in stages through 1940. There were significant problems producing this aircraft, not least the unreliability of the Rolls-Royce Vulture engine and the destruction of the first 13 Manchesters in a Luftwaffe bombing raid on Trafford Park on 23 December. Despite this, the firm went on to complete 43 examples. With the design of the much improved four-engined derivative, the Avro Lancaster, MV switched production to that famous type, supplied with Rolls-Royce Merlin engines from the Ford Trafford Park shadow factory. Three hangars were erected on the south side of Manchester's Ringway Airport for assembly and testing of its Lancasters, before a policy switch was made to assembling them in a hangar at Avro's Woodford airfield. By the end of the war, MV had built 1,080 Lancasters. These were followed by 79 Avro Lincoln derivatives before remaining orders were cancelled and MV's aircraft production ceased in December 1945.
In 1940 the turboprop effort was re-engineered as a pure jet engine after the successful run of Whittle's designs. The new design became the Metrovick F.2 and eventually flew in 1943 on a Gloster Meteor. As the F.2 was considered too complex to be worth pursuing, Metrovick then re-engineered the design once again to produce roughly double the power, while at the same time starting work on a much larger design, the Metrovick F.9 Sapphire. Although the F.9 proved to be a winner, the Ministry of Supply nevertheless forced the company to sell the jet division to Armstrong Siddeley in 1947 to reduce the number of companies in the business.
In addition to building aircraft, other wartime work included the manufacture of both Dowty and Messier undercarriages, automatic pilot units, searchlights and radar equipment. They also produced electric vans and lorries.
Metrovick postwar
The post-war era led to massive demand for electrical systems, leading to additional rivalries between Metrovick and BTH as each attempted to one-up the other in delivering ever-larger turbogenerator contracts. Metrovick also expanded its appliance division during this time, becoming a well known supplier of refrigerators and stoves.
The design and manufacture of sophisticated scientific instruments, such as electron microscopes and mass spectrometers, became an important area of scientific research for the company.
In 1947, a Metrovick G.1 Gatric gas turbine was fitted to the Motor Gun Boat MGB 2009, making it the world's first gas turbine powered naval vessel. A subsequent marine gas turbine engine was the G.2 of 4,500 shp fitted to the Royal Navy Bold-class fast patrol boats Bold Pioneer and Bold Pathfinder, which were built in 1953.
The Bluebird K7 jet-propelled 3-point hydroplane in which Donald Campbell broke the 200 mph water speed barrier was powered with a Metropolitan-Vickers Beryl jet engine producing of thrust. The K7 was unveiled in late 1954. Campbell succeeded on Ullswater on 23 July 1955, where he set a record of , beating the previous record by some held by Stanley Sayres.
Another major area of expansion was in the diesel locomotive market, where they combined their own generators and traction motors with third-party diesel engines to develop in 1950 the Western Australian Government Railways X class 2-Do-2 locomotive and in 1958 the type 2 Co-Bo, later re-classified under the TOPS system as the British Rail Class 28. This diesel-electric locomotive was unusual on two counts: its Co-Bo wheel arrangement and its Crossley two-stroke diesel engine (evolved from a World War II marine engine). Intended as part of the British Railways Modernisation Plan, the twenty-strong fleet saw service between Scotland and England before being deemed unsuccessful and withdrawn in the late 1960s. Metrovick also produced the CIE 001 Class (originally 'A' Class) from 1955, the first production mainline diesels in Ireland.
Metropolitan Vickers also produced electrical equipment for the British Rail Class 76 (EM1), and British Rail Class 77 (EM2), 1.5 kV DC locomotives, built at Gorton Works for the electrification of the Woodhead Line in the early 1950s. Larger but broadly similar locomotives were also supplied to the New South Wales Government Railways as its 46 class. The company also designed the British Rail Class 82, 25 kV AC locomotives built by Beyer, Peacock & Company in Manchester using Metrovick electrical equipment. The company also supplied electrical equipment for the British Rail Class 303 electric multiple units.
In the 1950s, the company built a large power transformer works at Wythenshawe, Manchester. The factory opened in 1957, and was closed by GEC in 1971, after which it was sold to the American compressor manufacturer Ingersoll Rand.
In 1961, the Russian cosmonaut Yuri Gagarin was invited to the company's factory at Trafford Park as part of his tour of Manchester.
The rivalry between Metrovick and BTH was finally ended in an unconvincing fashion when the AEI management decided to rid themselves of both brands and be known as AEI universally, a change made on 1 January 1960. This move was almost universally resented within both companies. Worse, the new brand name was utterly unknown to customers, leading to a noticeable fall-off in sales and in AEI's stock price.
General Electric Company (GEC) takeover
When AEI attempted to remove the doubled-up management structures, they found this task to be even more difficult. By the mid-1960s the company was struggling under the weight of two complete management hierarchies, and they appeared to be unable to control the company any more. This allowed AEI to be purchased by General Electric Company in 1967.
See also
:Category:Metropolitan-Vickers locomotives
Bowesfield Works
Metro-Vickers Affair
Metrovick electric vehicles
References
Bibliography
Further reading
External links
"Metropolitan-Vickers Electrical Co. Ltd. 1899-1949" by John Dummelow 250 pages of text and pictures. (This is a mirror of the original, which was accessed from https://web.archive.org/web/20050308065049/http://www.mvbook.org.uk/ )
Turbines
Locomotive manufacturers of the United Kingdom
Engineering companies of the United Kingdom
Electrical engineering companies of the United Kingdom
Defunct manufacturing companies of the United Kingdom
Associated Electrical Industries
Radar manufacturers
Defunct companies based in Manchester
Manufacturing companies based in Manchester
Manufacturing companies established in 1899
Manufacturing companies disestablished in 1960
Defunct aircraft engine manufacturers of the United Kingdom
Former defence companies of the United Kingdom
Science and technology in the United Kingdom
British companies established in 1899
British companies disestablished in 1960
1899 establishments in England
1960 disestablishments in England | Metropolitan-Vickers | [
"Chemistry"
] | 2,387 | [
"Turbines",
"Turbomachinery"
] |
1,614,609 | https://en.wikipedia.org/wiki/Polarization%20in%20astronomy | Polarization of electromagnetic radiation is a useful tool for detecting various astronomical phenomenon. For example, energy can become polarized by passing through interstellar dust or by magnetic fields. Microwave energy from the primordial universe can be used to study the physics of that environment.
Stars
The polarization of starlight was first observed by the astronomers William Hiltner and John S. Hall in 1949. Subsequently, Jesse Greenstein and Leverett Davis, Jr. developed theories allowing the use of polarization data to trace interstellar magnetic fields.
Though the integrated thermal radiation of stars is not usually appreciably polarized at source, scattering by interstellar dust can impose polarization on starlight over long distances. Net polarization at the source can occur if the photosphere itself is asymmetric, due to limb polarization. Plane polarization of starlight generated at the star itself is observed for Ap stars (peculiar A type stars).
Sun
Both circular and linear polarization of sunlight has been measured. Circular polarization is mainly due to transmission and absorption effects in strongly magnetic regions of the Sun's surface. Another mechanism that gives rise to circular polarization is the so-called "alignment-to-orientation mechanism". Continuum light is linearly polarized at different locations across the face of the Sun (limb polarization) though taken as a whole, this polarization cancels. Linear polarization in spectral lines is usually created by anisotropic scattering of photons on atoms and ions which can themselves be polarized by this interaction. The linearly polarized spectrum of the Sun is often called the second solar spectrum. Atomic polarization can be modified in weak magnetic fields by the Hanle effect. As a result, polarization of the scattered photons is also modified providing a diagnostics tool for understanding stellar magnetic fields.
Other sources
Polarization is also present in radiation from coherent astronomical sources due to the Zeeman effect (e.g. hydroxyl or methanol masers).
The large radio lobes in active galaxies and pulsar radio radiation (which may, it is speculated, sometimes be coherent) also show polarization.
Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field in our galaxy as well as in radio galaxies via Faraday rotation. In some cases it can be difficult to determine how much of the Faraday rotation is in the external source and how much is local to our own galaxy, but in many cases it is possible to find another distant source nearby in the sky; thus by comparing the candidate source and the reference source, the results can be untangled.
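As a rough sketch of how a rotation measure is extracted in practice (illustrative Python only; the wavelengths and position angles below are invented for the example, and the λ² dependence is the standard Faraday-rotation relation rather than a result stated in this article):

```python
import numpy as np

# Faraday rotation: the polarization position angle chi varies with wavelength as
# chi(lambda) = chi_0 + RM * lambda**2, so a straight-line fit against lambda**2
# recovers the rotation measure RM (rad m^-2) and the intrinsic angle chi_0.
lam = np.array([0.18, 0.20, 0.21, 0.22])   # observing wavelengths in metres (invented)
chi = np.array([0.35, 0.52, 0.61, 0.71])   # measured position angles in radians (invented)

A = np.vstack([lam**2, np.ones_like(lam)]).T
RM, chi0 = np.linalg.lstsq(A, chi, rcond=None)[0]
print(f"rotation measure RM ~ {RM:.1f} rad/m^2, intrinsic angle ~ {chi0:.2f} rad")
```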
Cosmic microwave background
The polarization of the cosmic microwave background (CMB) is also being used to study the physics of the very early universe. The CMB exhibits two components of polarization: B-modes (divergence-free, like a magnetic field) and E-modes (curl-free and gradient-only, like an electric field). The BICEP2 telescope located at the South Pole initially claimed a detection of B-mode polarization in the CMB, though the claimed result was later retracted. The polarization modes of the CMB may provide more information about the influence of gravitational waves on the development of the early universe.
It has been suggested that astronomical sources of polarised light caused the chirality found in biological molecules on Earth.
See also
Chandrasekhar polarization
References
External links
Discovery by Hiltner and Hall, analysis by Greenstein
Concepts in astrophysics
Polarization (waves)
Articles containing video clips | Polarization in astronomy | [
"Physics"
] | 723 | [
"Polarization (waves)",
"Concepts in astrophysics",
"Astrophysics"
] |
1,615,196 | https://en.wikipedia.org/wiki/Energy%20level%20splitting | In quantum physics, energy level splitting or a split in an energy level of a quantum system occurs when a perturbation changes the system. The perturbation changes the corresponding Hamiltonian and the outcome is change in eigenvalues; several distinct energy levels emerge in place of the former degenerate (multi-state) level. This may occur because of external fields, quantum tunnelling between states, or other effects. The term is most commonly used in reference to the electron configuration in atoms or molecules.
The simplest case of level splitting is a quantum system with two states whose unperturbed Hamiltonian is a diagonal operator: $\hat H_0 = E_0\,\hat I$, where $\hat I$ is the identity matrix. Eigenstates and eigenvalues (energy levels) of a perturbed Hamiltonian
$\hat H = E_0\,\hat I + \varepsilon\,\hat\sigma_z$
will be:
$|{\uparrow}\rangle$: the $E_0 + \varepsilon$ level, and
$|{\downarrow}\rangle$: the $E_0 - \varepsilon$ level,
so this degenerate eigenvalue splits in two whenever $\varepsilon \neq 0$. If, however, a perturbed Hamiltonian is not diagonal in this basis $\{|{\uparrow}\rangle, |{\downarrow}\rangle\}$, then its eigenstates are linear combinations of these two states.
For a physical implementation such as a charged spin-½ particle in an external magnetic field, the z-axis of the coordinate system is required to be collinear with the magnetic field to obtain a Hamiltonian in the form above (the Pauli matrix $\hat\sigma_z$ corresponds to the z-axis). These basis states, referred to as spin-up and spin-down, are hence eigenvectors of the perturbed Hamiltonian, so this level splitting is both easy to demonstrate mathematically and intuitively evident.
But in cases where the choice of state basis is not determined by a coordinate system, and the perturbed Hamiltonian is not diagonal, a level splitting may appear counter-intuitive, as in examples from chemistry below.
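A minimal numerical sketch of the two-state case above (NumPy; $E_0 = 1$ and $\varepsilon = 0.1$ are arbitrary illustrative values, and $\sigma_x$ is used only as one example of a non-diagonal perturbation):

```python
import numpy as np

E0, eps = 1.0, 0.1                      # illustrative values, arbitrary units
I2 = np.eye(2)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

# Diagonal perturbation: the degenerate level E0 splits into E0 - eps and E0 + eps,
# and the basis states themselves remain eigenstates.
H_diag = E0 * I2 + eps * sigma_z
print(np.linalg.eigvalsh(H_diag))       # [0.9, 1.1]

# Non-diagonal perturbation: the splitting is the same, but the eigenstates are
# now linear combinations (|up> -/+ |down>)/sqrt(2) of the original basis states.
H_off = E0 * I2 + eps * sigma_x
vals, vecs = np.linalg.eigh(H_off)
print(vals)                             # [0.9, 1.1]
print(vecs)                             # columns are the mixed eigenstates
```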
Examples
In atomic physics:
The Zeeman effect – the splitting of electronic levels in an atom because of an external magnetic field.
The Stark effect – splitting because of an external electric field.
In physical chemistry:
The Jahn–Teller effect – splitting of electronic levels in a molecule because breaking the symmetry lowers the energy when the degenerate orbitals are partially filled.
Resonance (chemistry) leads to creation of delocalized electron states.
Nitrogen inversion leads to level splitting in ammonia (NH3), which is used in an ammonia maser.
References
Quantum mechanics | Energy level splitting | [
"Physics"
] | 478 | [
"Theoretical physics",
"Quantum mechanics"
] |
16,334,130 | https://en.wikipedia.org/wiki/On-die%20termination | On-die termination (ODT) is the technology where the termination resistor for impedance matching in transmission lines is located inside a semiconductor chip instead of on a printed circuit board (PCB).
Overview of electronic signal termination
In lower frequency (slow edge rate) applications, interconnection lines can be modelled as "lumped" circuits. In this case, there is no need to consider the concept of "termination". Under the low-frequency condition, every point in an interconnect wire can be assumed to have the same voltage as every other point for any instance in time.
However, if the propagation delay in a wire, PCB trace, cable, or connector is significant (for example, if the delay is greater than 1/6 of the rise time of the digital signal), the "lumped" circuit model is no longer valid and the interconnect has to be analyzed as a transmission line. In a transmission line, the signal interconnect path is modeled as a circuit containing distributed inductance, capacitance, and resistance throughout its length.
For a transmission line to minimize distortion of the signal, the impedance of every location on the transmission line should be uniform throughout its length. If there is any place in the line where the impedance is not uniform for some reason (open circuit, impedance discontinuity, different material) the signal gets modified by reflection at the impedance change point which results in distortion, ringing, and so forth.
When the signal path has impedance discontinuity, in other words, an impedance mismatch, then a termination impedance with the equivalent amount of impedance is placed at the point of line discontinuity. This is described as "termination". For example, resistors can be placed on computer motherboards to terminate high-speed busses. There are several ways of termination depending on how the resistors are connected to the transmission line. Parallel termination and series termination are examples of termination methodologies.
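A small sketch of the rise-time rule of thumb mentioned above (Python; the propagation velocity of roughly half the speed of light is an assumed, typical value for an FR-4 PCB trace, not a figure from this article):

```python
def needs_transmission_line_analysis(trace_length_m, rise_time_s,
                                     velocity_m_per_s=1.5e8):
    """Treat the interconnect as a transmission line when its propagation
    delay exceeds roughly 1/6 of the signal rise time (rule of thumb above)."""
    delay_s = trace_length_m / velocity_m_per_s
    return delay_s > rise_time_s / 6.0

# A 10 cm trace driven by a 1 ns edge: delay ~0.67 ns > 0.17 ns, so the
# lumped-circuit model is no longer adequate and termination matters.
print(needs_transmission_line_analysis(0.10, 1e-9))   # True
```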
On-die termination
Instead of having the necessary resistive termination located on the motherboard, the termination is located inside the semiconductor chips, a technique called on-die termination (abbreviated ODT).
Why is on-die termination needed?
Although the termination resistors on the motherboard reduce some reflections on the signal lines, they are unable to prevent reflections resulting from the stub lines that connect to the components on the module card (e.g. DRAM module). A signal propagating from the controller to the components encounters an impedance discontinuity at the stub leading to the components on the module. The signal that propagates along the stub to the component (e.g. DRAM component) will be reflected onto the signal line, thereby introducing unwanted noise into the signal. In addition, on-die termination can reduce the number of resistor elements and complex wiring on the motherboard. Accordingly, the system design can be simpler and cost-effective.
Example of ODT: DRAM
On-die termination is implemented with several combinations of resistors on the DRAM silicon along with other circuit trees. DRAM circuit designers can use a combination of transistors that have different values of turn-on resistance. In the case of DDR2, there are three internal termination values: 150 Ω, 75 Ω, and 50 Ω. These resistors can be combined to present the proper equivalent impedance to the outside of the chip, with the signal line (transmission line) of the motherboard controlled by the on-die termination operation signal. Where an on-die termination value control circuit exists, the DRAM controller manages the on-die termination resistance through a programmable configuration register that resides in the DRAM. The internal on-die termination values in DDR3 are 120 Ω, 60 Ω, 40 Ω, and so forth.
How On-Die Termination (ODT) Works: An Example of DRAM
Utilizing On-Die Termination (ODT) involves two steps. First, the ODT value must be selected within the DRAM. Second, the termination can be dynamically enabled or disabled using the ODT pin driven by the ODT controller. ODT can be configured in different ways; in DRAM, it is done by setting the proper ODT value in the device’s extended mode register.
There are synchronous and asynchronous timing requirements, depending on the state of the DRAM device. Essentially, the On-Die Termination (ODT) is turned on just before the data transfer and then shut off immediately after. If there is more than one DRAM device loaded on the channel, either the active or inactive DRAM can terminate the signal. This flexibility enables optimal termination to occur as precisely as needed.
Let’s try to understand how On-Die Termination (ODT) works in DRAM read and write operations. All data-group signals use point-to-point signaling. The data-group signals are driven by the DRAM controller on writes and driven by the DRAM memories during reads. No external resistors are needed on these routes on the PCB, as both the DRAM controller and the memory are equipped with ODT. The receivers in both cases (the DRAM memory on writes and the DRAM controller on reads) will assert on-die termination (ODT) at the appropriate times. The impedances seen on these nets during write and read cycles are described below.
On-Die Termination (ODT) in Write Cycle
Let’s take an example of the impedances seen on the nets during a write cycle. During writes, the output impedance of the driver in the DRAM controller is approximately 45 Ω. It is recommended that the SDRAM be implemented with a 240 Ω RZQ resistor. Assuming the RZQ resistor is 240 Ω, the termination resistors can be configured to present an on-die termination of RZQ/6, giving an effective termination of 40 Ω.
On-die Termination (ODT) in Read Cycle
During a read cycle, it is recommended that the DRAM be configured for an effective drive impedance of RZQ/7, or 34 Ω (assuming the RZQ resistor is 240 Ω). The on-die termination (ODT) within the DRAM controller will have an effective Thevenin impedance of 45 Ω.
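The resistance values quoted in the DDR2 and DDR3 discussion above can be reproduced with a few lines of Python (a sketch only; RZQ = 240 Ω and the 150 Ω legs follow the figures given in the text):

```python
def parallel(*resistors):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

# DDR2-style terminations built from 150-ohm on-die legs
print(parallel(150.0, 150.0))           # 75.0 ohm
print(parallel(150.0, 150.0, 150.0))    # 50.0 ohm

# DDR3-style values expressed as fractions of the external RZQ resistor
RZQ = 240.0                             # ohms, as assumed in the text
for n in (2, 4, 6, 7, 12):
    print(f"RZQ/{n:<2} = {RZQ / n:5.1f} ohm")   # 120, 60, 40, ~34.3, 20
```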
Fly-By Signals
Now let’s talk about the fly-by signals, which include the address, control, command, and clock routing groups. The fly-by signals consist of the fly-by routing from the DRAM controller, stubs at each SDRAM, and terminations after the last SDRAM. In this example, the address, control, and command groups are terminated through a 39.2 Ω resistor to VTT.
The clock pairs will be terminated through 39.2 Ω resistors to a common node connected to a capacitor that is then connected to VDDQ. The DRAM controller will present a 45 Ω output impedance when driving these signals.
See also
Reflections of signals on conducting lines
References
Semiconductors | On-die termination | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,497 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
16,337,341 | https://en.wikipedia.org/wiki/Injection%20well | An injection well is a device that places fluid deep underground into porous rock formations, such as sandstone or limestone, or into or below the shallow soil layer. The fluid may be water, wastewater, brine (salt water), or water mixed with industrial chemical waste.
Definition
The U.S. Environmental Protection Agency (EPA) defines an injection well as "a bored, drilled, or driven shaft, or a dug hole that is deeper than it is wide, or an improved sinkhole, or a subsurface fluid distribution system". Well construction depends on the injection fluid injected and depth of the injection zone. Deep wells that are designed to inject hazardous wastes or carbon dioxide deep below the Earth's surface have multiple layers of protective casing and cement, whereas shallow wells injecting non-hazardous fluids into or above drinking water sources are more simply constructed.
Applications
Injection wells are used for many purposes.
Waste disposal
Treated wastewater can be injected into the ground between impermeable layers of rocks to avoid polluting surface waters. Injection wells are usually constructed of solid walled pipe to a deep elevation in order to prevent injectate from mixing with the surrounding environment. Injection wells utilize the earth as a filter to treat the wastewater before it reaches the aquifer. This method of wastewater disposal also serves to spread the injectate over a wide area, further decreasing environmental impacts.
In the United States, there are about 800 deep injection waste disposal wells used by industries such as chemical manufacturers, petroleum refineries, food producers and municipal wastewater plants. Most produced water generated by oil and gas extraction wells in the US is also disposed in deep injection wells.
Critics of wastewater injection wells cite concerns about potential groundwater contamination. It is argued that the impacts of some injected wastes in groundwater is not fully understood, and that the science and regulatory agencies have not kept up with the rapid expansion of disposal practices in US, where there are over 680,000 wells as of 2012.
Alternatives to injection wells include direct discharge of treated wastewater to receiving waters, conditioning of oil drilling and fracking produced water for reuse, utilization of treated water for irrigation or livestock watering, or processing of water at industrial wastewater treatment plants. Direct discharge does not disperse the water over a wide area; the environmental impact is focused on a particular segment of a river and its downstream reaches or on a coastal water body. Extensive irrigation is not typical in areas where the produced water tends to be salty, and this practice is often prohibitively expensive and requires ongoing maintenance and large electricity usage.
Since the early 1990s, Maui County, Hawaii has been engaged in a struggle over the 3 to 5 million gallons per day of wastewater that it injects below the Lahaina Wastewater Reclamation Facility, over the claim that the water was emerging in seeps that were causing algae blooms and other environmental damage. After some twenty years, it was sued by environmental groups after multiple studies showed that more than half the injectate was appearing in nearby coastal waters. The judge in the suit rejected the County's arguments, potentially subjecting it to millions of dollars in federal fines. A 2001 consent decree required the county to obtain a water quality certification from the Hawaii Department of Health, which it failed to do until 2010, after the suit was filed. The case proceeded through the United States Court of Appeals for the Ninth Circuit and subsequently to the Supreme Court of the United States. In 2020 the Court ruled in County of Maui v. Hawaii Wildlife Fund that injection wells may be the "functional equivalent of a direct discharge" under the Clean Water Act, and instructed the EPA to work with the courts to establish regulations when these types of wells should require permits.
Oil and gas production
Another use of injection wells is in natural gas and petroleum production. Steam, carbon dioxide, water, and other substances can be injected into an oil-producing unit in order to maintain reservoir pressure, heat the oil or lower its viscosity, allowing it to flow to a producing well nearby.
Waste site remediation
Yet another use for injection wells is in environmental remediation, for cleanup of either soil or groundwater contamination. Injection wells can insert clean water into an aquifer, thereby changing the direction and speed of groundwater flow, perhaps towards extraction wells downgradient, which could then more speedily and efficiently remove the contaminated groundwater. Injection wells can also be used in cleanup of soil contamination, for example by use of an ozonation system. Complex hydrocarbons and other contaminants trapped in soil and otherwise inaccessible can be broken down by ozone, a highly reactive gas, often with greater cost-effectiveness than could be had by digging out the affected area. Such systems are particularly useful in built-up urban environments where digging may be impractical due to overlying buildings.
Aquifer recharge
Recently the option of refilling natural aquifers with injection or percolation has become more important, particularly in the driest region of the world, the MENA region (Middle East and North Africa).
Surface runoff can also be recharged into dry wells, or simply barren wells that have been modified to function as cisterns. These hybrid stormwater management systems, called recharge wells, have the advantage of aquifer recharge and instantaneous supply of potable water at the same time. They can utilize existing infrastructure and require very little effort for the modification and operation. The activation can be as simple as inserting a polymer cover (foil) into the well shaft. Vertical pipes for conduction of the overflow to the bottom can enhance performance. The area around the well acts as a funnel. If this area is maintained well, the water will require little purification before it enters the cistern.
Geothermal energy
Injection wells are used to tap geothermal energy in hot, porous rock formations below the surface by injecting fluids into the ground, which is heated in the ground, then extracted from adjacent wells as fluid, steam, or a combination of both. The heated steam and fluid can then be utilized to generate electricity or directly for geothermal heating.
Regulatory requirements
In the United States, injection well activity is regulated by EPA and state governments under the Safe Drinking Water Act (SDWA). The “State primary enforcement responsibility” section of the SDWA provides for States to submit their proposed UIC program to the EPA to request State assumption of primary enforcement responsibility. Thirty-four states have been granted UIC primacy enforcement authority for Class I, II, III, IV and V wells. For states without an approved UIC program, the EPA administrator prescribes a program to apply. EPA has issued Underground Injection Control (UIC) regulations in order to protect drinking water sources.
EPA regulations define six classes of injection wells. Class I wells are used for the injection of municipal and industrial wastes beneath underground sources of drinking water. Class II wells are used for the injection of fluids associated with oil and gas production, including waste from hydraulic fracturing. Class III wells are used for the injection of fluids used in mineral solution mining beneath underground sources of drinking water. Class IV wells, like Class I wells, were used for the injection of hazardous wastes, but injected the waste into or above underground sources of drinking water instead of below them. EPA banned the use of Class IV wells in 1984. Class V wells are those used for all non-hazardous injections that are not covered by Classes I through IV. Examples of Class V wells include stormwater drainage wells and septic system leach fields. Finally, Class VI wells are used for the injection of carbon dioxide for sequestration, or long-term storage. Since the introduction of Class VI in 2010, only two Class VI wells have been constructed as of 2022, both at the same Illinois facility; four other approved projects did not proceed to construction.
Injection-induced earthquakes
A July 2013 study by US Geological Survey scientist William Ellsworth links earthquakes to wastewater injection sites. In the four years from 2010 to 2013, the number of earthquakes of magnitude 3.0 or greater in the central and eastern United States increased dramatically. After decades of a steady earthquake rate (average of 21 events/year), activity increased starting in 2001 and peaked at 188 earthquakes in 2011, including a record-breaking 5.7-magnitude earthquake near Prague, Oklahoma, which was the strongest earthquake ever recorded in Oklahoma. USGS scientists have found that at some locations the increase in seismicity coincides with the injection of wastewater in deep disposal wells. Injection-induced earthquakes are thought to be caused by pressure changes due to excess fluid injected deep below the surface and are being dubbed “man-made” earthquakes. On September 3, 2016, a 5.8-magnitude earthquake occurred near Pawnee, Oklahoma, followed by nine aftershocks between magnitudes 2.6 and 3.6 within three and one-half hours. The earthquake broke the previous record set five years earlier. Tremors were felt as far away as Memphis, Tennessee, and Gilbert, Arizona. Mary Fallin, the Oklahoma governor, declared a local emergency, and the Oklahoma Corporation Commission issued shutdown orders for local disposal wells. Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS) published in 2015 suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.5 El Reno earthquake, may have been induced by deep injection of waste water by the oil industry.
Notes
References
US Army Environmental Center. Aberdeen Proving Ground, MD (2002). "Deep Well Injection." Remediation Technologies Screening Matrix and Reference Guide. 4th ed. Report no. SFIM-AEC-ET-CR-97053.
External links
EPA - Underground Injection Control Program
Drinking water
Hydrology
Water pollution
Petroleum technology
Natural gas technology | Injection well | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,981 | [
"Hydrology",
"Petroleum technology",
"Petroleum engineering",
"Water pollution",
"Water wells",
"Natural gas technology",
"Oil wells",
"Environmental engineering"
] |
16,342,785 | https://en.wikipedia.org/wiki/Wiener%20sausage | In the mathematical field of probability, the Wiener sausage is a neighborhood of the trace of a Brownian motion up to a time t, given by taking all points within a fixed distance of Brownian motion. It can be visualized as a sausage of fixed radius whose centerline is Brownian motion. The Wiener sausage was named after Norbert Wiener by because of its relation to the Wiener process; the name is also a pun on Vienna sausage, as "Wiener" is German for "Viennese".
The Wiener sausage is one of the simplest non-Markovian functionals of Brownian motion. Its applications include stochastic phenomena including heat conduction. It was first described by , and it was used by to explain results of a Bose–Einstein condensate, with proofs published by .
Definitions
The Wiener sausage $W_\delta(t)$ of radius δ and length t is the set-valued random variable on Brownian paths b (in some Euclidean space) defined by
$W_\delta(t) = \{\, x : |x - b(s)| \le \delta \text{ for some } 0 \le s \le t \,\},$
the set of points within a distance δ of some point b(s) of the path b with 0 ≤ s ≤ t.
Volume
There has been a lot of work on the behavior of the volume (Lebesgue measure) |Wδ(t)| of the Wiener sausage as it becomes thin (δ→0); by rescaling, this is essentially equivalent to studying the volume as the sausage becomes long (t→∞).
Spitzer showed that in 3 dimensions the expected value of the volume of the sausage is
$\mathbb{E}\,|W_\delta(t)| = 2\pi\delta t + 4\delta^2\sqrt{2\pi t} + \tfrac{4}{3}\pi\delta^3.$
In dimension d at least 3 the volume of the Wiener sausage is asymptotic to
$\frac{2\pi^{d/2}}{\Gamma\!\left(\tfrac{d-2}{2}\right)}\,\delta^{d-2}\,t$
as t tends to infinity. In dimensions 1 and 2 this formula gets replaced by $\sqrt{8t/\pi}$ and $2\pi t/\log t$ respectively. , a student of Spitzer, proved similar results for generalizations of Wiener sausages with cross sections given by more general compact sets than balls.
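A crude Monte-Carlo sketch of the 3-dimensional volume (Python with NumPy/SciPy; the path discretisation and sample counts are arbitrary choices, and a single realisation will scatter around the expected value given above):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def wiener_sausage_volume(t=1.0, delta=0.1, n_steps=2000, n_samples=100_000):
    """Estimate |W_delta(t)| for one 3-D Brownian path by hit-or-miss sampling."""
    dt = t / n_steps
    steps = rng.normal(scale=np.sqrt(dt), size=(n_steps, 3))
    path = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])   # b(0) = 0

    lo, hi = path.min(axis=0) - delta, path.max(axis=0) + delta
    box_volume = np.prod(hi - lo)

    samples = rng.uniform(lo, hi, size=(n_samples, 3))
    dist, _ = cKDTree(path).query(samples)      # distance to nearest path point
    return box_volume * np.mean(dist <= delta)  # hit fraction times box volume

t, delta = 1.0, 0.1
expected = 2*np.pi*delta*t + 4*delta**2*np.sqrt(2*np.pi*t) + 4*np.pi*delta**3/3
print(f"one realisation: {wiener_sausage_volume(t, delta):.3f}, "
      f"expected value:  {expected:.3f}")
```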
References
Especially chapter 22.
(Reprint of 1964 edition)
An advanced monograph covering the Wiener sausage.
Mathematical physics
Statistical mechanics
Wiener process | Wiener sausage | [
"Physics",
"Mathematics"
] | 397 | [
"Applied mathematics",
"Statistical mechanics",
"Theoretical physics",
"Mathematical physics"
] |
620,712 | https://en.wikipedia.org/wiki/Landau%E2%80%93Ramanujan%20constant | In mathematics and the field of number theory, the Landau–Ramanujan constant is the positive real number b that occurs in a theorem proved by Edmund Landau in 1908, stating that for large , the number of positive integers below that are the sum of two square numbers behaves asymptotically as
This constant b was rediscovered in 1913 by Srinivasa Ramanujan, in the first letter he wrote to G.H. Hardy.
Sums of two squares
By the sum of two squares theorem, the numbers that can be expressed as a sum of two squares of integers are the ones for which each prime number congruent to 3 mod 4 appears with an even exponent in their prime factorization. For instance, 45 = 9 + 36 is a sum of two squares; in its prime factorization, 3² × 5, the prime 3 appears with an even exponent, and the prime 5 is congruent to 1 mod 4, so its exponent can be odd.
Landau's theorem states that if $N(x)$ is the number of positive integers less than $x$ that are the sum of two squares, then
$\lim_{x\to\infty} \frac{N(x)\,\sqrt{\ln x}}{x} = b \approx 0.76422,$
where $b$ is the Landau–Ramanujan constant.
The Landau–Ramanujan constant can also be written as an infinite product:
$b = \frac{1}{\sqrt{2}} \prod_{p \equiv 3 \pmod 4} \left(1 - \frac{1}{p^{2}}\right)^{-1/2}.$
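Both characterisations can be checked numerically (a Python sketch; the prime cutoff and the value of x are arbitrary choices, and the convergence of N(x)·√(ln x)/x toward b is slow, so the two printed counts will differ noticeably):

```python
import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def landau_ramanujan(prime_cutoff=10 ** 6):
    """Truncated Euler product b = 2**-0.5 * prod_{p = 3 mod 4} (1 - p**-2)**-0.5."""
    prod = 1.0
    for p in primes_up_to(prime_cutoff):
        if p % 4 == 3:
            prod /= math.sqrt(1.0 - p ** -2)
    return prod / math.sqrt(2.0)

def count_sums_of_two_squares(x):
    """N(x): how many integers in 1..x can be written as a^2 + b^2 with a, b >= 0."""
    hit = bytearray(x + 1)
    a = 0
    while a * a <= x:
        b = 0
        while a * a + b * b <= x:
            hit[a * a + b * b] = 1
            b += 1
        a += 1
    return sum(hit[1:])

b = landau_ramanujan()
x = 10 ** 6
print(f"b ~ {b:.5f}")                                  # ~ 0.76422
print(count_sums_of_two_squares(x), b * x / math.sqrt(math.log(x)))
```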
History
This constant was stated by Landau in the limit form above; Ramanujan instead approximated $N(x)$ by an integral, with the same constant of proportionality, and with a slowly growing error term.
References
Additive number theory
Analytic number theory
Mathematical constants
Srinivasa Ramanujan | Landau–Ramanujan constant | [
"Mathematics"
] | 308 | [
"Analytic number theory",
"Mathematical objects",
"nan",
"Mathematical constants",
"Numbers",
"Number theory"
] |
620,991 | https://en.wikipedia.org/wiki/Plummer%20model | The Plummer model or Plummer sphere is a density law that was first used by H. C. Plummer to fit observations of globular clusters. It is now often used as toy model in N-body simulations of stellar systems.
Description of the model
The Plummer 3-dimensional density profile is given by
$\rho(r) = \frac{3M}{4\pi a^3}\left(1 + \frac{r^2}{a^2}\right)^{-5/2},$
where $M$ is the total mass of the cluster, and a is the Plummer radius, a scale parameter that sets the size of the cluster core. The corresponding potential is
$\Phi(r) = -\frac{GM}{\sqrt{r^2 + a^2}},$
where G is Newton's gravitational constant. The velocity dispersion is
$\sigma^2 = \frac{GM}{6\sqrt{r^2 + a^2}}.$
The isotropic distribution function reads
$f(E) = \frac{24\sqrt{2}}{7\pi^3}\,\frac{a^2}{G^5 M^4}\,(-E)^{7/2}$
if $E < 0$, and $f = 0$ otherwise, where $E = \tfrac{1}{2}v^2 + \Phi(r)$ is the specific energy.
Properties
The mass enclosed within radius $r$ is given by
$M(<r) = M\,\frac{r^3}{\left(r^2 + a^2\right)^{3/2}}.$
Many other properties of the Plummer model are described in Herwig Dejonghe's comprehensive article.
The core radius $r_c$, where the surface density drops to half its central value, is at $r_c = a\sqrt{\sqrt{2} - 1} \approx 0.64\,a$.
The half-mass radius is $r_h = \frac{a}{\sqrt{2^{2/3} - 1}} \approx 1.305\,a$.
The virial radius is $r_V = \frac{16}{3\pi}\,a \approx 1.7\,a$.
The 2D surface density is
$\Sigma(R) = \frac{M a^2}{\pi\left(a^2 + R^2\right)^{2}},$
and hence the 2D projected mass profile is
$M(R) = M\,\frac{R^2}{a^2 + R^2}.$
In astronomy, it is convenient to define the 2D half-mass radius, the radius where the 2D projected mass profile equals half of the total mass: $M(R_{1/2}) = \tfrac{1}{2}M$.
For the Plummer profile: $R_{1/2} = a$.
The escape velocity at any point is
$v_{\rm esc} = \sqrt{-2\,\Phi(r)} = \sqrt{\frac{2GM}{\sqrt{r^2 + a^2}}}.$
For bound orbits, the radial turning points of an orbit with specific energy $E$ and specific angular momentum $L$ are given by the positive roots of the cubic equation
where , so that . This equation has three real roots for : two positive and one negative, given that , where is the specific angular momentum for a circular orbit for the same energy. Here can be calculated from single real root of the discriminant of the cubic equation, which is itself another cubic equation
where underlined parameters are dimensionless in Henon units defined as , , and .
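A short numerical sketch of the profile and of the standard inverse-transform recipe for drawing radii from it (Python; the unit choices G = M = a = 1 are assumptions for the example, not part of the model itself):

```python
import numpy as np

def plummer_density(r, M=1.0, a=1.0):
    """3-D density profile rho(r) = (3M / 4 pi a^3) (1 + r^2/a^2)^(-5/2)."""
    return 3.0 * M / (4.0 * np.pi * a**3) * (1.0 + (r / a) ** 2) ** (-2.5)

def enclosed_mass(r, M=1.0, a=1.0):
    """Mass inside radius r: M r^3 / (r^2 + a^2)^(3/2)."""
    return M * r**3 / (r**2 + a**2) ** 1.5

def sample_radii(n, a=1.0, rng=None):
    """Draw radii by inverting M(<r)/M = u for uniform u, a standard recipe
    for realising a Plummer sphere in N-body initial conditions."""
    if rng is None:
        rng = np.random.default_rng(1)
    u = rng.uniform(size=n)
    return a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)

a = 1.0
r_half = a / np.sqrt(2 ** (2.0 / 3.0) - 1.0)    # analytic half-mass radius ~1.305 a
print(enclosed_mass(r_half))                    # ~0.5
print(np.median(sample_radii(200_000)))         # ~1.3, consistent with r_half
```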
Applications
The Plummer model comes closest to representing the observed density profiles of star clusters, although the rapid falloff of the density at large radii ($\rho \propto r^{-5}$) is not a good description of these systems.
The behavior of the density near the center does not match observations of elliptical galaxies, which typically exhibit a diverging central density.
The ease with which the Plummer sphere can be realized as a Monte-Carlo model has made it a favorite choice of N-body experimenters, in spite of the model's lack of realism.
References
Astrophysics
Equations of astronomy | Plummer model | [
"Physics",
"Astronomy"
] | 471 | [
"Concepts in astronomy",
"Astronomical sub-disciplines",
"Astrophysics",
"Equations of astronomy"
] |
621,090 | https://en.wikipedia.org/wiki/Photophore | A photophore is a glandular organ that appears as luminous spots on marine animals, including fish and cephalopods. The organ can be simple, or as complex as the human eye, equipped with lenses, shutters, color filters, and reflectors; unlike an eye, however, it is optimized to produce light, not absorb it.
Mechanism
The bioluminescence can be produced from compounds during the digestion of prey, from specialized mitochondrial cells in the organism called photocytes ("light producing" cells), or, similarly, from symbiotic bacteria that the organism cultures.
The character of photophores is important in the identification of deep sea fishes. Photophores on fish are used for attracting food or for camouflage from predators by counter-illumination.
Photophores are found on some cephalopods including the firefly squid, which can create impressive light displays, as well as numerous other deep sea organisms, such as the pocket shark Mollisquama mississippiensis and the strawberry squid.
See also
Bioluminescence
Chromatophore
Chromophore, part of a molecule
References
External links
Bioluminescence
Cephalopod zootomy
Fish anatomy | Photophore | [
"Chemistry",
"Biology"
] | 251 | [
"Biochemistry",
"Luminescence",
"Bioluminescence"
] |
621,176 | https://en.wikipedia.org/wiki/Interpretability | In mathematical logic, interpretability is a relation between formal theories that expresses the possibility of interpreting or translating one into the other.
Informal definition
Assume T and S are formal theories. Slightly simplified, T is said to be interpretable in S if and only if the language of T can be translated into the language of S in such a way that S proves the translation of every theorem of T. Of course, there are some natural conditions on admissible translations here, such as the necessity for a translation to preserve the logical structure of formulas.
This concept, together with weak interpretability, was introduced by Alfred Tarski in 1953. Three other related concepts are cointerpretability, logical tolerance, and cotolerance, introduced by Giorgi Japaridze in 1992–93.
See also
Conservative extension
Interpretation (logic)
Interpretation (model theory)
Interpretability logic
References
Japaridze, G., and De Jongh, D. (1998) "The logic of provability" in Buss, S., ed., Handbook of Proof Theory. North-Holland: 476–546.
Alfred Tarski, Andrzej Mostowski, and Raphael Robinson (1953) Undecidable Theories. North-Holland.
Proof theory | Interpretability | [
"Mathematics"
] | 256 | [
"Mathematical logic",
"Proof theory"
] |