id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
10,516,723 | https://en.wikipedia.org/wiki/Detrended%20fluctuation%20analysis | In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analysing time series that appear to be long-memory processes (diverging correlation time, e.g. power-law decaying autocorrelation function) or 1/f noise.
The obtained exponent is similar to the Hurst exponent, except that DFA may also be applied to signals whose underlying statistics (such as mean and variance) or dynamics are non-stationary (changing with time). It is related to measures based upon spectral techniques such as autocorrelation and Fourier transform.
Peng et al. introduced DFA in 1994 in a paper that has been cited over 3,000 times as of 2022 and represents an extension of the (ordinary) fluctuation analysis (FA), which is affected by non-stationarities.
Definition
Algorithm
Given: a time series $x_1, x_2, \dots, x_N$.
Compute its average value $\langle x\rangle = \frac{1}{N}\sum_{t=1}^{N} x_t$.
Sum it into a process $X_t = \sum_{i=1}^{t}(x_i - \langle x\rangle)$. This is the cumulative sum, or profile, of the original time series. For example, the profile of an i.i.d. white noise is a standard random walk.
Select a set $T = \{n_1, \dots, n_k\}$ of integers, such that $n_1 < n_2 < \dots < n_k$, the smallest $n_1 \approx 4$, the largest $n_k \approx N$, and the sequence is roughly distributed evenly in log-scale: $\log n_2 - \log n_1 \approx \log n_3 - \log n_2 \approx \dots$. In other words, it is approximately a geometric progression.
For each $n \in T$, divide the sequence $X_t$ into $\lfloor N/n\rfloor$ consecutive segments of length $n$. Within each segment, compute the least squares straight-line fit (the local trend). Let $Y_1, Y_2, \dots, Y_N$ be the resulting piecewise-linear fit.
Compute the root-mean-square deviation from the local trend (local fluctuation): $F(n, i) = \sqrt{\frac{1}{n}\sum_{t = in+1}^{in+n}\left(X_t - Y_t\right)^2}$. Their root-mean-square is the total fluctuation: $F(n) = \sqrt{\frac{1}{\lfloor N/n\rfloor}\sum_{i=0}^{\lfloor N/n\rfloor - 1} F(n, i)^2}$.
(If $N$ is not divisible by $n$, then one can either discard the remainder of the sequence, or repeat the procedure on the reversed sequence, then take their root-mean-square.)
Make the log-log plot of $\log n$ versus $\log F(n)$; an illustrative implementation is sketched below.
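A minimal Python sketch of the procedure above (an illustration rather than a reference implementation; it assumes NumPy and uses plain DFA1, i.e. a linear local fit) is:

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis (DFA1) of a 1-D series x.

    Returns the total fluctuation F(n) for each window size n in `scales`.
    The scaling exponent alpha is the slope of log F(n) versus log n.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # cumulative sum (the "profile")
    fluctuations = []
    for n in scales:
        n_segments = len(profile) // n           # discard any remainder
        rms = []
        for i in range(n_segments):
            segment = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, segment, 1)   # local linear (DFA1) trend
            trend = np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluctuations.append(np.sqrt(np.mean(np.square(rms))))
    return np.array(fluctuations)

# Example: uncorrelated white noise should give alpha close to 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
scales = np.unique(np.logspace(np.log10(8), np.log10(1000), 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated alpha = {alpha:.2f}")
```

For an uncorrelated input the estimated exponent should come out near 0.5, consistent with the interpretation given below.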
Interpretation
A straight line of slope $\alpha$ on the log-log plot indicates a statistical self-affinity of the form $F(n) \propto n^{\alpha}$. Since $F(n)$ monotonically increases with $n$, we always have $\alpha > 0$.
The scaling exponent $\alpha$ is a generalization of the Hurst exponent, with the precise value giving information about the series self-correlations:
$\alpha < 1/2$: anti-correlated
$\alpha \simeq 1/2$: uncorrelated, white noise
$\alpha > 1/2$: correlated
$\alpha \simeq 1$: 1/f-noise, pink noise
$\alpha > 1$: non-stationary, unbounded
$\alpha \simeq 3/2$: Brownian noise
Because the expected displacement in an uncorrelated random walk of length N grows like $\sqrt{N}$, an exponent of $\tfrac{1}{2}$ would correspond to uncorrelated white noise. When the exponent is between 0 and 1, the result is fractional Gaussian noise.
Pitfalls in interpretation
Though the DFA algorithm always produces a positive number for any time series, it does not necessarily imply that the time series is self-similar. Self-similarity requires the log-log graph to be sufficiently linear over a wide range of $n$. Furthermore, a combination of techniques including maximum likelihood estimation (MLE), rather than least-squares, has been shown to better approximate the scaling, or power-law, exponent.
Also, there are many scaling exponent-like quantities that can be measured for a self-similar time series, including the divider dimension and Hurst exponent. Therefore, the DFA scaling exponent is not a fractal dimension, and does not have certain desirable properties that the Hausdorff dimension has, though in certain special cases it is related to the box-counting dimension for the graph of a time series.
Generalizations
Generalization to polynomial trends (higher order DFA)
The standard DFA algorithm given above removes a linear trend in each segment. If we remove a degree-n polynomial trend in each segment, it is called DFAn, or higher order DFA.
Since $X_t$ is a cumulative sum of $x_t - \langle x\rangle$, a linear trend in $X_t$ is a constant trend in $x_t - \langle x\rangle$, which is a constant trend in $x_t$ (visible as short sections of "flat plateaus"). In this regard, DFA1 removes the mean from segments of the time series $x_t$ before quantifying the fluctuation.
Similarly, a degree $n$ trend in $X_t$ is a degree $(n-1)$ trend in $x_t$. For example, DFA2 removes linear trends from segments of the time series $x_t$ before quantifying the fluctuation, DFA3 removes parabolic trends from $x_t$, and so on.
The Hurst R/S analysis removes constant trends in the original sequence and thus, in its detrending it is equivalent to DFA1.
Generalization to different moments (multifractal DFA)
DFA can be generalized by computing $F_q(n) = \left(\frac{1}{\lfloor N/n\rfloor}\sum_{i=0}^{\lfloor N/n\rfloor - 1} F(n, i)^q\right)^{1/q}$, then making the log-log plot of $\log F_q(n)$ versus $\log n$. If there is a strong linearity in the plot of $\log F_q(n)$, then its slope is $\alpha(q)$. DFA is the special case where $q = 2$.
Multifractal systems scale as a function $F_q(n) \propto n^{\alpha(q)}$. Essentially, the scaling exponents need not be independent of the scale of the system. In particular, DFA measures the scaling-behavior of the second moment-fluctuations.
Kantelhardt et al. intended this scaling exponent as a generalization of the classical Hurst exponent. The classical Hurst exponent corresponds to $H = \alpha(2)$ for stationary cases, and $H = \alpha(2) - 1$ for nonstationary cases.
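As an illustration of this generalization, the short sketch below (a companion to the DFA sketch above, not taken from any particular MFDFA package) computes the $q$-th order fluctuation from the per-segment values $F(n, i)$; $q = 2$ recovers ordinary DFA, and $q = 0$ is handled by the conventional logarithmic-average limit.

```python
import numpy as np

def mfdfa_fluctuation(segment_rms, q):
    """q-th order fluctuation F_q(n) built from the local fluctuations F(n, i).

    segment_rms: array of per-segment RMS deviations for one window size n.
    q = 2 gives ordinary DFA; negative q emphasizes segments with small fluctuations.
    """
    segment_rms = np.asarray(segment_rms, dtype=float)
    if q == 0:
        # q = 0 is defined via a logarithmic average (limit of the general formula)
        return np.exp(0.5 * np.mean(np.log(segment_rms ** 2)))
    return np.mean(segment_rms ** q) ** (1.0 / q)
```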
Applications
The DFA method has been applied to many systems, e.g. DNA sequences, neuronal oscillations, speech pathology detection, heartbeat fluctuation in different sleep stages, and animal behavior pattern analysis.
The effect of trends on DFA has been studied.
Relations to other methods, for specific types of signal
For signals with power-law-decaying autocorrelation
In the case of power-law decaying auto-correlations, the correlation function decays with an exponent $\gamma$:
$C(L) \sim L^{-\gamma}$.
In addition the power spectrum decays as $S(f) \sim f^{-\beta}$.
The three exponents are related by:
$\gamma = 2 - 2\alpha$ and
$\beta = 2\alpha - 1$.
The relations can be derived using the Wiener–Khinchin theorem. The relation of DFA to the power spectrum method has been well studied.
Thus, $\alpha$ is tied to the slope $\beta$ of the power spectrum and is used to describe the color of noise by this relationship: $\alpha = (\beta + 1)/2$.
For fractional Gaussian noise
For fractional Gaussian noise (FGN), we have $\beta \in [-1, 1]$, and thus $\alpha \in [0, 1]$, and $\alpha = H$, where $H$ is the Hurst exponent. $\alpha$ for FGN is therefore equal to $H$.
For fractional Brownian motion
For fractional Brownian motion (FBM), we have $\beta \in [1, 3]$, and thus $\alpha \in [1, 2]$, and $\alpha = H + 1$, where $H$ is the Hurst exponent. $\alpha$ for FBM is therefore equal to $H + 1$. In this context, FBM is the cumulative sum or the integral of FGN, thus, the exponents of their power spectra differ by 2.
See also
Multifractal system
Self-organized criticality
Self-affinity
Time series analysis
Hurst exponent
References
External links
Tutorial on how to calculate detrended fluctuation analysis in Matlab using the Neurophysiological Biomarker Toolbox.
FastDFA MATLAB code for rapidly calculating the DFA scaling exponent on very large datasets.
Physionet A good overview of DFA and C code to calculate it.
MFDFA Python implementation of (Multifractal) Detrended Fluctuation Analysis.
Autocorrelation
Fractals | Detrended fluctuation analysis | [
"Mathematics"
] | 1,483 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
10,518,546 | https://en.wikipedia.org/wiki/Technology%20life%20cycle | The technology life cycle (TLC) describes the commercial gain of a product over its life: the expense of its research and development phase and the financial return during its "vital life". Some technologies, such as steel, paper or cement manufacturing, have a long lifespan (with minor variations in technology incorporated with time) while in other cases, such as electronic or pharmaceutical products, the lifespan may be quite short.
The TLC associated with a product or technological service is different from product life-cycle (PLC) dealt with in product life-cycle management. The latter is concerned with the life of a product in the marketplace with respect to timing of introduction, marketing measures, and business costs. The technology underlying the product (for example, that of a uniquely flavoured tea) may be quite marginal but the process of creating and managing its life as a branded product will be very different.
The technology life cycle is concerned with the time and cost of developing the technology, the timeline of recovering cost, and modes of making the technology yield a profit proportionate to the costs and risks involved. The TLC may, further, be protected during its cycle with patents and trademarks seeking to lengthen the cycle and to maximize the profit from it.
The product of the technology may be a commodity such as polyethylene plastic or a sophisticated product like the integrated circuits used in a smartphone.
The development of a competitive product or process can have a major effect on the lifespan of the technology, making it longer. Equally, the loss of intellectual property rights through litigation or loss of its secret elements (if any) through leakages also work to reduce a technology's lifespan. Thus, it is apparent that the management of the TLC is an important aspect of technology development.
Most new technologies follow a similar technology maturity life cycle describing the technological maturity of a product. This is not similar to a product life cycle, but applies to an entire technology, or a generation of a technology.
Technology adoption is the most common phenomenon driving the evolution of industries along the industry life cycle. Industries begin by expanding new uses of resources and end by exhausting the efficiency of those processes, producing gains that are at first easy and large but become progressively more difficult to achieve as the technology matures.
Four phases
The Soviet economist Nikolai Kondratiev was the first to observe technology life cycle in his book The Major Economic Cycles (1925). Today, these cycles are called Kondratiev wave, the predecessor of TLC. TLC is composed of four phases:
The research and development (R&D) phase (sometimes called the "bleeding edge") when incomes from inputs are negative and where the prospects of failure are high
The ascent phase when out-of-pocket costs have been recovered and the technology begins to gather strength by going beyond some Point A on the TLC (sometimes called the "leading edge")
The maturity phase when gain is high and stable, the region, going into saturation, marked by M, and
The decline (or decay phase), after a Point D, of reducing fortunes and utility of the technology.
S-curve
The shape of the technology life cycle is often referred to as S-curve.
Technology perception dynamics
There is usually technology hype at the introduction of any new technology, but only after some time has passed can it be judged as mere hype or justified true acclaim.
Because of the logistic curve nature of technology adoption, it is difficult to see in the early stages whether the hype is excessive.
Similarly, in the later stages, the opposite mistakes can be made relating to the possibilities of technology maturity and market saturation.
The technology adoption life cycle typically occurs in an S curve, as modelled in diffusion of innovations theory. This is because customers respond to new products in different ways. Diffusion of innovations theory, pioneered by Everett Rogers, posits that people have different levels of readiness for adopting new innovations and that the characteristics of a product affect overall adoption. Rogers classified individuals into five groups: innovators, early adopters, early majority, late majority, and laggards. In terms of the S curve, innovators occupy 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, and laggards 16%.
The four stages of technology life cycle are as follows:
Innovation stage: This stage represents the birth of a new product, material or process resulting from R&D activities. In R&D laboratories, new ideas are generated in response to perceived needs and knowledge factors. Depending on the resource allocation and also the change element, the time taken in the innovation stage as well as in the subsequent stages varies widely.
Syndication stage: This stage represents the demonstration and commercialisation of a new technology, such as a product, material or process with potential for immediate utilisation. Many innovations are put on hold in R&D laboratories. Only a very small percentage of these are commercialised. Commercialisation of research outcomes depends on technical as well as non-technical, mostly economic, factors.
Diffusion stage: This represents the market penetration of a new technology through acceptance of the innovation, by potential users of the technology. But supply and demand side factors jointly influence the rate of diffusion.
Substitution stage: This last stage represents the decline in the use and eventual extinction of a technology, due to replacement by another technology. Many technical and non-technical factors influence the rate of substitution. The time taken in the substitution stage depends on the market dynamics.
Licensing options
Large corporations develop technology for their own benefit and not with the objective of licensing. The tendency to license out technology only appears when there is a threat to the life of the TLC (business gain) as discussed later.
In the R&D phase
There are always smaller firms (SMEs) who are inadequately situated to finance the development of innovative R&D in the post-research and early technology phases. By sharing incipient technology under certain conditions, substantial risk financing can come from third parties. This is a form of quasi-licensing which takes different formats. Even large corporates may not wish to bear all costs of development in areas of significant and high risk (e.g. aircraft development) and may seek means of spreading it to the stage that proof-of-concept is obtained.
In the case of small and medium firms, entities such as venture capitalists or business angels, can enter the scene and help to materialize technologies. Venture capitalists accept both the costs and uncertainties of R&D, and that of market acceptance, in reward for high returns when the technology proves itself. Apart from finance, they may provide networking, management and marketing support. Venture capital connotes financial as well as human capital.
Larger firms may opt for Joint R&D or work in a consortium for the early phase of development. Such vehicles are called strategic alliances – strategic partnerships.
With both venture capital funding and strategic (research) alliances, when business gains begin to neutralize development costs (the TLC crosses the X-axis), the ownership of the technology starts to undergo change.
In the case of smaller firms, venture capitalists help clients enter the stock market for obtaining substantially larger funds for development, maturation of technology, product promotion and to meet marketing costs. A major route is through initial public offering (IPO) which invites risk funding by the public for potential high gain. At the same time, the IPOs enable venture capitalists to attempt to recover expenditures already incurred by them through part sale of the stock pre-allotted to them (subsequent to the listing of the stock on the stock exchange). When the IPO is fully subscribed, the assisted enterprise becomes a corporation and can more easily obtain bank loans, etc. if needed.
Strategic alliance partners, allied on research, pursue separate paths of development with the incipient technology of common origin but pool their accomplishments through instruments such as 'cross-licensing'. Generally, contractual provisions among the members of the consortium allow a member to exercise the option of independent pursuit after joint consultation; in which case the optee owns all subsequent development.
In the ascent phase
The ascent stage of the technology usually refers to some point above Point A in the TLC diagram but actually it commences when the R&D portion of the TLC curve inflects (though the cash flow remains negative and unremunerative up to Point A). The ascent is the strongest phase of the TLC because it is here that the technology is superior to alternatives and can command premium profit or gain. The slope and duration of the ascent depend on competing technologies entering the domain, although they may not be as successful in that period. Strongly patented technology extends the duration period.
The TLC begins to flatten out (the region shown as M) when equivalent or challenging technologies come into the competitive space and begin to eat away market share.
Till this stage is reached, the technology-owning firm would tend to exclusively enjoy its profitability, preferring not to license it. If an overseas opportunity does present itself, the firm would prefer to set up a controlled subsidiary rather than license a third party.
In the maturity phase
The maturity phase of the technology is a period of stable and remunerative income but its competitive viability can persist over the larger timeframe marked by its 'vital life'. However, there may be a tendency to license out the technology to third parties during this stage to lower risk of decline in profitability (or competitivity) and to expand financial opportunity.
The exercise of this option is, generally, inferior to seeking participatory exploitation; in other words, engagement in joint venture, typically in regions where the technology would be in the ascent phase, as say, a developing country. In addition to providing financial opportunity it allows the technology-owner a degree of control over its use. Gain flows from the two streams of investment-based and royalty incomes. Further, the vital life of the technology is enhanced in such strategy.
In the decline phase
After reaching a point such as D in the above diagram, the earnings from the technology begin to decline rather rapidly. To prolong the life cycle, owners of technology might try to license it out at some point L when it can still be attractive to firms in other markets. This, then, traces the lengthening path, LL'. Further, since the decline is the result of competing rising technologies in this space, licensees may be attracted to the generally lower cost of the older technology (than what prevailed during its vital life).
Licenses obtained in this phase are 'straight licenses'. They are free of direct control from the owner of the technology (as would otherwise apply, say, in the case of a joint-venture). Further, there may be fewer restrictions placed on the licensee in the employment of the technology.
The utility, viability, and thus the cost of straight-licenses depends on the estimated 'balance life' of the technology. For instance, should the key patent on the technology have expired, or would expire in a short while, the residual viability of the technology may be limited, although balance life may be governed by other criteria such as knowhow which could have a longer life if properly protected.
The licensee has no way of knowing the stage at which the prime and competing technologies are on their TLCs. It would be evident to competing licensor firms, and to the originator, from the growth, saturation or decline of the profitability of their operations.
The licensee may, however, be able to approximate the stage by vigorously negotiating with the licensor and competitors to determine costs and licensing terms. A lower cost, or easier terms, may imply a declining technology.
In any case, access to technology in the decline phase is a large risk that the licensee accepts. (In a joint-venture this risk is substantially reduced by licensor sharing it). Sometimes, financial guarantees from the licensor may work to reduce such risk and can be negotiated.
There are instances when, even though the technology declines to becoming a technique, it may still contain important knowledge or experience which the licensee firm cannot learn of without help from the originator. This is often the form that technical service and technical assistance contracts take (encountered often in developing country contracts). Alternatively, consulting agencies may fill this role.
Technology development cycle
According to the Encyclopedia of Earth, "In the simplest formulation, innovation can be thought of as being composed of research, development, demonstration, and deployment."
Technology development cycle describes the process of a new technology through the stages of technological maturity:
Research and development
Scientific demonstration
System deployment
Diffusion
See also
Background, foreground, sideground and postground intellectual property
Business cycle
Disruptive technology
Mass customization
Network effects
New product development
Technological revolution
Technological transitions
Technology acceptance model
Technology adoption life cycle
Technology readiness level (TRL)
Technology roadmap
Toolkits for user innovation
Open innovation
Frugal innovation
References
Diffusion
Innovation economics
Licensing
Product development
Product lifecycle management
Product management
Research and development
Science and technology studies
Sociology of culture
Technological change
Technology in society | Technology life cycle | [
"Physics",
"Chemistry",
"Technology"
] | 2,655 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Science and technology studies"
] |
10,523,311 | https://en.wikipedia.org/wiki/Edward%20Bromhead | Sir Edward Thomas ffrench Bromhead, 2nd Baronet FRS FRSE (26 March 1789 – 14 March 1855) was a British landowner and mathematician, best remembered as patron of the mathematician and physicist George Green and mentor of George Boole.
Life
He was born in Dublin, the son of Gonville Bromhead, 1st Baronet Bromhead (grandfather of the British second in command of the same name at Rorke's Drift) and Lady Jane ffrench, Baroness ffrench. Bromhead was educated at the University of Glasgow and later at Caius College, Cambridge (B.A. 1812, M.A. 1815) before taking up the study of law at the Inner Temple in London. He was elected a Fellow of the Royal Society in 1817. Returning to Lincolnshire, he became High Steward of Lincoln. He became the 2nd Bromhead baronet, of Thurlby Hall, in 1822.
While at Cambridge, Bromhead was a founder of the Analytical Society, a precursor of the Cambridge Philosophical Society, together with John Herschel, George Peacock and Charles Babbage, with whom he maintained a close and lifelong friendship. While he was, by all accounts, a gifted mathematician in his own right (although ill-health prevented him from pursuing his studies further), his greatest contribution to the subject is at second hand: having subscribed to the first publication of self-taught mathematician and physicist George Green, he encouraged Green to continue his research and to write further papers (which Bromhead sent on to be published in the Transactions of the Cambridge Philosophical Society and those of the Royal Society of Edinburgh).
Bromhead repeated his success by encouraging the young George Boole from Lincoln. Bromhead was President of the Lincoln Mechanics Institute in the Lincoln Greyfriars, where George Boole's father was the curator. Boole first came to public notice when he gave a lecture on the work of Sir Isaac Newton on 5 February 1835. The young Boole's development was fed by books that Bromhead supplied.
Bromhead lost his sight when he was old and he died unmarried at his home of Thurlby Hall in Thurlby, North Kesteven on 14 March 1855.
Arms
Selected publications
X. Remarks on the present state of botanical classification Philosophical Magazine Series 3 Volume 11, Issue 64-65, 1837
XXVIII. Memoranda on the origin of the botanical alliances Philosophical Magazine Series 3 Volume 11, Issue 67, 1837
References
Bibliography
Mentions Bromhead's role in the career of George Green.
19th-century English mathematicians
Mathematical analysts
Alumni of Gonville and Caius College, Cambridge
Scientists from Dublin (city)
Bromhead, Sir Edward, 2nd Baronet
1789 births
1855 deaths
Fellows of the Royal Society
Fellows of the Royal Society of Edinburgh | Edward Bromhead | [
"Mathematics"
] | 570 | [
"Mathematical analysis",
"Mathematical analysts"
] |
10,523,316 | https://en.wikipedia.org/wiki/Degranulation | Degranulation is a cellular process that releases antimicrobial, cytotoxic, or other molecules from secretory vesicles called granules found inside some cells. It is used by several different cells involved in the immune system, including granulocytes (neutrophils, basophils, eosinophils, and mast cells). It is also used by certain lymphocytes such as natural killer (NK) cells and cytotoxic T cells, whose main purpose is to destroy invading microorganisms.
Mast cells
Degranulation in mast cells is part of an inflammatory response, and substances such as histamine are released. Granules from mast cells mediate processes such as "vasodilation, vascular homeostasis, innate and adaptive immune responses, angiogenesis, and venom detoxification."
Antigens interact with IgE molecules already bound to high affinity Fc receptors on the surface of mast cells to induce degranulation, via the activation of tyrosine kinases within the cell. The mast cell releases a mixture of compounds, including histamine, proteoglycans, serotonin, and serine proteases from its cytoplasmic granules.
Eosinophils
In a similar mechanism, activated eosinophils release preformed mediators such as major basic protein, and enzymes such as peroxidase, following interaction between their Fc receptors and IgE molecules that are bound to large parasites like helminths.
Neutrophils
Degranulation in neutrophils can occur in response to infection, and the resulting granules are released in order to protect against tissue damage. Excessive degranulation of neutrophils, sometimes triggered by bacteria, is associated with certain inflammatory disorders, such as asthma and septic shock.
Four kinds of granules exist in neutrophils that display differences in content and regulation. Secretory vesicles are the most likely to release their contents by degranulation, followed by gelatinase granules, specific granules, and azurophil granules.
Cytotoxic T cells and NK cells
Cytotoxic T cells and NK cells release molecules like perforin and granzymes by a process of directed exocytosis to kill infected target cells.
See also
Basophil activation
References
External links
Immunology
Cell biology
Biological processes | Degranulation | [
"Biology"
] | 508 | [
"Immunology",
"Cell biology",
"nan"
] |
8,135,659 | https://en.wikipedia.org/wiki/Particle%20decay | In particle physics, particle decay is the spontaneous process of one unstable subatomic particle transforming into multiple other particles. The particles created in this process (the final state) must each be less massive than the original, although the total mass of the system must be conserved. A particle is unstable if there is at least one allowed final state that it can decay into. Unstable particles will often have multiple ways of decaying, each with its own associated probability. Decays are mediated by one or several fundamental forces. The particles in the final state may themselves be unstable and subject to further decay.
The term is typically distinct from radioactive decay, in which an unstable atomic nucleus is transformed into a lighter nucleus accompanied by the emission of particles or radiation, although the two are conceptually similar and are often described using the same terminology.
Probability of survival and particle lifetime
Particle decay is a Poisson process, and hence the probability that a particle survives for time $t$ before decaying (the survival function) is given by an exponential distribution whose time constant depends on the particle's velocity:
$P(t) = e^{-t/(\gamma\tau)}$
where
$\tau$ is the mean lifetime of the particle (when at rest), and
$\gamma$ is the Lorentz factor of the particle (a small numerical example is sketched below).
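As a small worked example (a sketch, not taken from the table below; it uses the standard muon mean lifetime of about 2.197 μs), the survival probability can be evaluated directly:

```python
import math

def survival_probability(t, tau, gamma=1.0):
    """Probability of surviving to lab-frame time t, for rest-frame mean lifetime tau
    and Lorentz factor gamma."""
    return math.exp(-t / (gamma * tau))

tau_muon = 2.197e-6       # seconds
t = 1.0e-5                # 10 microseconds in the lab frame
print(survival_probability(t, tau_muon))              # at rest: ~0.011
print(survival_probability(t, tau_muon, gamma=10.0))  # time-dilated: ~0.63
```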
Table of some elementary and composite particle lifetimes
All data are from the Particle Data Group.
Decay rate
This section uses natural units, where $c = \hbar = 1$.
The lifetime of a particle is given by the inverse of its decay rate, $\Gamma$, the probability per unit time that the particle will decay. For a particle of mass $M$ and four-momentum $P$ decaying into particles with momenta $p_i$, the differential decay rate is given by the general formula (expressing Fermi's golden rule)
$d\Gamma_n = \frac{S\,\left|\mathcal{M}\right|^2}{2M}\, d\Phi_n(P;\, p_1, p_2, \dots, p_n)$
where
$n$ is the number of particles created by the decay of the original,
$S$ is a combinatorial factor to account for indistinguishable final states (see below),
$\mathcal{M}$ is the invariant matrix element or amplitude connecting the initial state to the final state (usually calculated using Feynman diagrams),
$d\Phi_n$ is an element of the phase space, and
$p_i$ is the four-momentum of particle $i$.
The factor $S$ is given by
$S = \prod_{j=1}^{k}\frac{1}{N_j!}$
where
$k$ is the number of sets of indistinguishable particles in the final state, and
$N_j$ is the number of particles of type $j$, so that $N_1 + N_2 + \dots + N_k = n$.
The phase space can be determined from
$d\Phi_n(P;\, p_1, p_2, \dots, p_n) = (2\pi)^4\,\delta^4\!\Bigl(P - \sum_{i=1}^{n} p_i\Bigr)\prod_{i=1}^{n}\frac{d^3\vec{p}_i}{(2\pi)^3\, 2E_i}$
where
$\delta^4$ is a four-dimensional Dirac delta function,
$\vec{p}_i$ is the (three-)momentum of particle $i$, and
$E_i$ is the energy of particle $i$.
One may integrate over the phase space to obtain the total decay rate for the specified final state.
If a particle has multiple decay branches or modes with different final states, its full decay rate is obtained by summing the decay rates for all branches. The branching ratio for each mode is given by its decay rate divided by the full decay rate.
Two-body decay
This section uses natural units, where $c = \hbar = 1$.
Decay rate
Say a parent particle of mass $M$ decays into two particles, labeled 1 and 2. In the rest frame of the parent particle,
$|\vec{p}_1| = |\vec{p}_2| = \frac{1}{2M}\sqrt{\left[M^2 - (m_1 + m_2)^2\right]\left[M^2 - (m_1 - m_2)^2\right]},$
which is obtained by requiring that four-momentum be conserved in the decay, i.e.
$P = p_1 + p_2.$
Also, in spherical coordinates,
$d^3\vec{p} = |\vec{p}\,|^2\, d|\vec{p}\,|\, d\phi\, d(\cos\theta).$
Using the delta function to perform the $d^3\vec{p}_2$ and $d|\vec{p}_1|$ integrals in the phase-space for a two-body final state, one finds that the decay rate in the rest frame of the parent particle is
$\frac{d\Gamma}{d\Omega} = \frac{S\,\left|\mathcal{M}\right|^2}{32\pi^2 M^2}\,|\vec{p}_1|.$
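A short numerical check of the rest-frame momentum expression above (a sketch; the masses used are the usual quoted values in GeV):

```python
import math

def two_body_momentum(M, m1, m2):
    """Magnitude of each daughter's momentum in the rest frame of a parent of
    mass M decaying into daughters of masses m1 and m2 (natural units, e.g. GeV)."""
    term = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(term) / (2.0 * M)

# Example: K+ -> mu+ nu; the familiar result is about 0.236 GeV
print(two_body_momentum(0.493677, 0.105658, 0.0))
```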
From two different frames
The angle of an emitted particle in the lab frame is related to the angle it has emitted in the center of momentum frame by the equation
Complex mass and decay rate
This section uses natural units, where $c = \hbar = 1$.
The mass of an unstable particle is formally a complex number, with the real part being its mass in the usual sense, and the imaginary part being its decay rate in natural units. When the imaginary part is large compared to the real part, the particle is usually thought of as a resonance more than a particle. This is because in quantum field theory a particle of mass $M$ (a real number) is often exchanged between two other particles when there is not enough energy to create it, if the time to travel between these other particles is short enough, of order $1/M$, according to the uncertainty principle. For a particle of mass $M + i\Gamma$, the particle can travel for time $1/M$, but decays after a time of order $1/\Gamma$. If $\Gamma > M$ then the particle usually decays before it completes its travel.
See also
Relativistic Breit-Wigner distribution
Particle physics
Particle radiation
List of particles
Weak interaction
Notes
External links
Particle Data Group.
"The Particle Adventure" Particle Data Group, Lawrence Berkeley National Laboratory.
Particle physics | Particle decay | [
"Physics"
] | 890 | [
"Particle physics"
] |
8,136,831 | https://en.wikipedia.org/wiki/Hilbert%20basis%20%28linear%20programming%29 | The Hilbert basis of a convex cone C is a minimal set of integer vectors in C such that every integer vector in C is a conical combination of the vectors in the Hilbert basis with integer coefficients.
Definition
Given a lattice $L \subset \mathbb{Z}^d$ and a convex polyhedral cone $C \subset \mathbb{R}^d$ with generators $a_1, \dots, a_n \in \mathbb{Z}^d$,
$C = \{\lambda_1 a_1 + \dots + \lambda_n a_n \mid \lambda_1, \dots, \lambda_n \geq 0\},$
we consider the monoid $C \cap L$. By Gordan's lemma, this monoid is finitely generated, i.e., there exists a finite set of lattice points $\{x_1, \dots, x_m\} \subset C \cap L$ such that every lattice point $x \in C \cap L$ is an integer conical combination of these points:
$x = n_1 x_1 + \dots + n_m x_m, \qquad n_1, \dots, n_m \in \mathbb{Z}_{\geq 0}.$
The cone $C$ is called pointed if $x, -x \in C$ implies $x = 0$. In this case there exists a unique minimal generating set of the monoid $C \cap L$—the Hilbert basis of $C$. It is given by the set of irreducible lattice points: An element $x \in C \cap L$ is called irreducible if it can not be written as the sum of two non-zero elements, i.e., $x = y + z$ implies $y = 0$ or $z = 0$.
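For illustration, consider the pointed cone in the plane generated by (1, 0) and (1, 2). The lattice point (1, 1) lies in the cone but is not an integer conical combination of the two generators, so the Hilbert basis is {(1, 0), (1, 1), (1, 2)}. The brute-force Python sketch below (illustrative only, not a practical algorithm; it enumerates lattice points in a small bounding box) recovers this basis by keeping the irreducible points.

```python
from itertools import product

# Cone generated by (1, 0) and (1, 2): points (x, y) with x >= 0 and 0 <= y <= 2x.
def in_cone(p):
    x, y = p
    return x >= 0 and 0 <= y <= 2 * x

# Non-zero lattice points of the monoid inside a small bounding box.
bound = 6
points = [p for p in product(range(bound + 1), repeat=2) if in_cone(p) and p != (0, 0)]
point_set = set(points)

# A point is irreducible if it is not the sum of two non-zero monoid elements.
def irreducible(p):
    x, y = p
    for a, b in points:
        q = (x - a, y - b)
        if q != (0, 0) and q in point_set:
            return False
    return True

hilbert_basis = [p for p in points if irreducible(p)]
print(hilbert_basis)   # expected: [(1, 0), (1, 1), (1, 2)]
```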
References
Linear programming
Discrete geometry
Eponyms in geometry | Hilbert basis (linear programming) | [
"Mathematics"
] | 186 | [
"Eponyms in geometry",
"Discrete mathematics",
"Applied mathematics",
"Discrete geometry",
"Applied mathematics stubs",
"Geometry"
] |
8,138,006 | https://en.wikipedia.org/wiki/Hessian%20crucible | A Hessian crucible is a type of ceramic crucible that was manufactured in the Hesse region of Germany from the late Middle Ages through the Renaissance period. They were renowned for their ability to withstand very high temperatures, rapid changes in temperature, and strong reagents. These crucibles were widely used for alchemy and early metallurgy. Millions of the vessels were exported throughout Europe, Scandinavia, and the colonies in the Americas. The crucibles were made by firing kaolinitic clay at temperatures greater than 1100°C, forming mullite. Mullite is an aluminum silicate only described in the 20th century and is responsible for the excellent properties of the Hessian crucible.
Main production centre of the Hessian crucibles was the village of Großalmerode.
References
Alchemical tools
Laboratory equipment
Laboratory porcelainware
Analytical chemistry | Hessian crucible | [
"Chemistry"
] | 177 | [
"nan"
] |
8,139,076 | https://en.wikipedia.org/wiki/Germline%20development | In developmental biology, the cells that give rise to the gametes are often set aside during embryonic cleavage. During development, these cells will differentiate into primordial germ cells, migrate to the location of the gonad, and form the germline of the animal.
Creation of germ plasm and primordial germ cells
Cleavage in most animals segregates cells containing germ plasm from other cells. The germ plasm effectively turns off gene expression to render the genome of the cell inert. Cells expressing germ plasm become primordial germ cells (PGCs) which will then give rise to the gametes. The germ line development in mammals, on the other hand, occurs by induction and not by an endogenous germ plasm (see reference 6.).
Germ plasm in fruit fly
Germ plasm has been studied in detail in Drosophila. The posterior pole of the embryo contains necessary materials for the fertility of the fly. This cytoplasm, pole plasm, contains specialized materials called polar granules and the pole cells are the precursors to primordial germ cells.
Pole plasm is organized by and contains the proteins and mRNA of the posterior group genes (such as oskar, nanos, Tudor, vasa, and Valois). These genes play a role in germ line development to localize nanos mRNA to the posterior and localize germ cell determinants. Drosophila progeny with mutations in these genes fail to produce pole cells and are thus sterile, giving these mutations the name 'grandchildless'. The genes oskar, nanos and germ cell-less (gcl) have important roles. Oskar is sufficient to recruit the other genes to form functional germ plasm. Nanos is required to prevent mitosis and somatic differentiation and for the pole cells to migrate to function as PGCs (see next section). Gcl is necessary (but not sufficient) for pole cell formation. In addition to these genes, polar granule component (Pgc) blocks phosphorylation, and consequently activation, of RNA polymerase II and shuts down transcription.
Germ plasm in amphibians
Similar germ plasm has been identified in Amphibians in the polar cytoplasm at the vegetal pole. This cytoplasm moves to the bottom of the blastocoel and eventually ends up as its own subset of endodermal cells. While specified to produce germ cells, the germ plasm does not irreversibly commit these cells to produce gametes and no other cell type.
Migration of primordial germ cells
Fruit flies
The first phase of migration in Drosophila occurs when the pole cells move passively and infold into the midgut invagination. Active migration occurs through repellents and attractants. The expression of wunen in the endoderm repels the PGCs out. The expression of columbus and hedgehog attracts the PGCs to the mesodermal precursors of the gonad. Nanos is required during migration. Regardless of PGC injection site, PGCs are able to correctly migrate to their target sites.
Zebrafish
In zebrafish, the PGCs express two CXCR4 transmembrane receptor proteins. The signaling system involving this protein and its ligand, Sdf1, is necessary and sufficient to direct PGC migration in fish.
Frogs
In frogs, the PGCs migrate along the mesentery to the gonadal mesoderm facilitated by orientated extracellular matrix with fibronectin. There is also evidence for the CXCR4/Sdf1 system in frogs.
Birds
In birds, the PGCs arise from the epiblast and migrate to anteriorly of the primitive streak to the germinal crest. From there, they use blood vessels to find their way to the gonad. The CXCR4/Sdf1 system is also used, though may not be the only method necessary.
Mammals
In the mouse, primordial germ cells (PGCs) arise in the posterior primitive streak of the embryo and start to migrate around 6.25 days after conception. PGCs start to migrate to the embryonic endoderm and then to the hindgut and finally towards the future genital ridges where the somatic gonadal precursors reside. This migration requires a series of attractant and repellent cues as well as a number of adhesion molecules such as E-cadherin and β1-Integrin to guide the migration of PGCs. Around 10 days post conception; the PGCs occupy the genital ridge where they begin to lose their motility and polarized shape.
Germline development in mammals
Mammalian PGCs are specified by signalling between cells (induction), rather than by the segregation of germ plasm as the embryo divides. In mice, PGCs originate from the proximal epiblast, close to the extra-embryonic ectoderm (ExE), of the post-implantation embryo as early as embryonic day 6.5. By E7.5 a founding population of approximately 40 PGCs are generated in this region of the epiblast in the developing mouse embryo. The epiblast, however, also give rise to somatic cell lineages that make up the embryo proper; including the endoderm, ectoderm and mesoderm. The specification of primordial germ cells in mammals is mainly attributed to the downstream functions of two signaling pathways; the BMP signaling pathway and the canonical WNT/β-catenin pathway.
Bone morphogenetic protein 4 (BMP4) is released by the extra-embryonic ectoderm (ExE) at embryonic day 5.5 to 5.75 directly adjacent to the epiblast and causes the region of the epiblast nearest to the ExE to express Blimp1 and Prdm14 in a dose-dependent manner. This is evident as the number of PGCs forming in the epiblast decreases in proportion to the loss of BMP4 alleles. BMP4 acts through its downstream intracellular transcription factors SMAD1 and SMAD5. During approximately the same time, WNT3 starts to be expressed in the posterior visceral endoderm of the epiblast. WNT3 signalling has been shown to be essential in order for the epiblast to acquire responsiveness to the BMP4 signal from the ExE. WNT3 mutants fail to establish a primordial germ cell population, but this can be restored with exogenous WNT activity. The WNT3/β-catenin signalling pathway is essential for the expression of the transcription factor T (Brachyury), a transcription factor that was previously characterized as a regulator of somatic and mesoderm-specific genes. T was recently found to be both necessary and sufficient to induce the expression of the known PGC specification genes Blimp1 and Prdm14. The induction of transcription factor T was seen 12 hours after BMP/WNT signaling, as opposed to the 24 to 36 hours it took for Blimp1 and Prdm14 genes to be expressed. Transcription factor T acts upstream of BLIMP1 and Prdm14 in PGC specification by binding to the genes' respective enhancer elements. It is important to note that while T can activate the expression of Blimp1 and Prdm14 in the absence of both BMP4 and WNT3, pre-exposure of PGC progenitors to WNTs (without BMP4) prevents T from activating these genes. Details of how BMP4 restricts T to activating only the PGC specification genes, rather than mesodermal genes, remain unclear.
Expression of Blimp1 is the earliest known marker of PGC specification. A mutation in the Blimp1 gene results in the formation of PGC-like cells at embryonic day 8.5 that closely resemble their neighbouring somatic cells. A central role of Blimp 1 is the induction of Tcfap2c, a helix-span helix transcription factor. Tcfap2c mutants exhibited an early loss of primordial germ cells. Tcfap2c is thought to repress somatic gene expression, including the mesodermal marker Hoxb1. So, Blimp1, Tcfap2c and Prdm14 together are able to activate and repress the transcription of all the necessary genes to regulate PGC specification. Mutation of Prdm14 results in the formation of PGCs that are lost by embryonic day 11.5. The loss of PGCs in the Prdm14 mutant is due to failure in global erasure of histone 3 methylation patterns. Blimp1 and Prdm14 also elicit another epigenetic event that causes global DNA demethylation.
Other notable genes positively regulated by Blimp1 and Prdm14 are: Sox2, Nanos3, Nanog, Stella and Fragilis. At the same time, Blimp1 and Prdm14 also repress the transcription of programs that drive somatic differentiation by inhibiting transcription of the Hox family genes. In this way, Blimp1 and Prdm14 drive PGC specification by promoting germ line development and potential pluripotency transcriptional programs while also keeping the cells from taking on a somatic fate.
Generation of mammalian PGCs in vitro
With the vast knowledge about in-vivo PGC specification collected over the last few decades, several attempts to generate in-vitro PGCs from post-implantation epiblast were made. Various groups were able to successfully generate PGC-like cells, cultured in the presence of BMP4 and various cytokines. The efficiency of this process was later enhanced by the addition of stem cell factor (SCF), epidermal growth factor (EGF), leukaemia inhibitory factor (LIF) and BMP8B. PGC-like cells generated using this method can be transplanted into a gonad, where they differentiate and are able to give rise to viable gametes and offspring in vivo. PGC-like cells can also be generated from naïve embryonic stem cells (ESCs) that are cultured for two days in the presence of FGF and Activin-A to adopt an epiblast-like state. These cells are then cultured with BMP4, BMP8B, EGF, LIF and SCF and various cytokines for four more days. These in-vitro generated PGCs can also develop into viable gametes and offspring.
Differentiation of primordial germ cells
Prior to their arrival at the gonads, PGCs express pluripotency factors, generate pluripotent cell lines in cell culture (known as EG cells), and can produce multi-lineage tumors, known as teratomas. Similar findings in other vertebrates indicate that PGCs are not yet irreversibly committed to produce gametes and no other cell type. On arrival at the gonads, human and mouse PGCs activate widely conserved germ cell-specific factors, and subsequently down-regulate the expression of pluripotency factors. This transition results in the determination of germ cells, a form of cell commitment that is no longer reversible.
Prior to their occupation of the genital ridge, there is no known difference between XX and XY PGCs. However, once migration is complete and germ cell determination has occurred, these germline cells begin to differentiate according to the gonadal niche.
Early male differentiation
Male PGCs become known as gonocytes once they cease migration and undergo mitosis. The term gonocyte is generally used to describe all stages post PGC until the gonocytes differentiate into spermatogonia. Anatomically, gonocytes can be identified as large, euchromatic cells that often have two nucleoli in the nucleus.
In the male genital ridge, transient Sry expression causes supporting cells to differentiate into Sertoli cells which then act as the organizing center for testis differentiation. Point mutations or deletions in the human or mouse Sry coding region can lead to female development in XY individuals. Sertoli cells also act to prevent gonocytes from differentiating prematurely. They produce the enzyme CYP26B1 to counteract surrounding retinoic acid. Retinoic acid acts as a signal to the gonocytes to enter meiosis. Gonocytes and Sertoli cells have been shown to form gap and desmosome-like junctions as well as adherens junctions composed of cadherins and connexins. To differentiate into spermatogonia, the gonocytes must lose their junctions to Sertoli cells and become migratory once again. They migrate to the basement membrane of the seminiferous cord and differentiate.
Late differentiation
In the gonads, the germ cells undergo either spermatogenesis or oogenesis depending on whether the sex is male or female respectively.
Spermatogenesis
Mitotic germ stem cells, spermatogonia, divide by mitosis to produce spermatocytes committed to meiosis. The spermatocytes divide by meiosis to form spermatids. The post-meiotic spermatids differentiate through spermiogenesis to become mature and functional spermatozoa. Spermatogenic cells at different stages of development in the mouse have a frequency of mutation that is 5 to 10-fold lower than the mutation frequency in somatic cells.
In Drosophila, the ability of premeiotic male germ line cells to repair double-strand breaks declines dramatically with age. In mouse, spermatogenesis declines with advancing paternal age likely due to an increased frequency of meiotic errors.
Oogenesis
Mitotic germ stem cells, oogonia, divide by mitosis to produce primary oocytes committed to meiosis. Unlike sperm production, oocyte production is not continuous. These primary oocytes begin meiosis but pause in diplotene of meiosis I while in the embryo. All of the oogonia and many primary oocytes die before birth. After puberty in primates, small groups of oocytes and follicles prepare for ovulation by advancing to metaphase II. Only after fertilization is meiosis completed. Meiosis is asymmetric producing polar bodies and oocytes with large amounts of material for embryonic development. The mutation frequency of female mouse germ line cells, like male germ line cells, is also lower than that of somatic cells. Low germ line mutation frequency appears to be due, in part, to elevated levels of DNA repair enzymes that remove potentially mutagenic DNA damages. Enhanced genetic integrity may be a fundamental characteristic of germ line development.
See also
Germ cell
Germ cell tumor
References
Developmental biology
Germ line cells | Germline development | [
"Biology"
] | 3,111 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
8,140,063 | https://en.wikipedia.org/wiki/Thinkfree%20Office | Thinkfree Office is a web-based commercial office productivity suite developed by Thinkfree Inc. It includes Word (a word processor), Spreadsheet (a spreadsheet) and Presentation (a presentation program).
They are compatible with Microsoft Office's Word, PowerPoint, and Excel. It also features collaborative editing. The product is hosted on the client’s server.
Supported file formats
Thinkfree Office supports ISO/IEC international standard ISO/IEC 26300 Open Document Format for Office Applications (odf, odt, odp, ods, odg). It also supports Microsoft's XML formats (docx, pptx, xlsx) and Microsoft's legacy binary formats (doc, ppt, xls).
Naming
The software was previously marketed under different names, such as Thinkfree Server, Thinkfree Online, Hancom Office Online, and Hancom Office Web. Eventually, the brand was consolidated under the name Thinkfree Office.
History
In June 2000, Thinkfree Inc., based in Silicon Valley, California, released Thinkfree Office. It is recognized as the world's first online office editor (predating Google Docs and Microsoft 365) and attracted significant media coverage, including reports on CNN.
In 2001, Microsoft CEO Steve Ballmer highlighted Thinkfree as a significant competitor in a magazine interview, considering it a potential threat to his company, second only to Linux.
In November 2003, Hancom, a South Korean office software company, signed a memorandum of understanding and subsequently acquired Thinkfree.
In January 2004, Thinkfree expanded into other foreign markets. Subsidiary Haansoft USA, Inc. was created in San Jose, California to begin formal commercial operations in the US market. At the same time, a partnership was established with Riverdeep with the purpose of improving marketshare.
In February 2004, expansion into the Japanese market began. A commercial agency agreement was signed with PSI in Shinjuku, Japan, which allowed for localized distribution. In addition, a global agreement was entered into with Yamada Denki, one of the three main computer distributors in Japan, for a total of 180,000 units.
In May 2006, Thinkfree Office received the "Product of the Year" award at the Well-Connected Awards, USA.
In January 2009, Thinkfree Mobile was launched at CES 2009 in Las Vegas.
In April 2009, Thinkfree Live, Korea's first web office service, was launched.
In June 2018, a partnership was formed with Amazon Web Services to integrate Thinkfree Office into WorkDocs, an in-house office suite.
In October 2023, Hancom spun off its online office business unit as "Thinkfree Inc.".
References
2000 software
Android (operating system) software
Cloud computing
Collaborative real-time editors
Collaborative software
Desktop publishing software
Desktop publishing software for macOS
Desktop publishing software for Windows
Document management systems
Free desktop publishing software
Free groupware
Free PDF software
Free presentation software
Free software for cloud computing
Free software programmed in C++
Free software programmed in JavaScript
Free spreadsheet software
Free vector graphics editors
IOS software
IPadOS software
MacOS software
MacOS word processors
Office suites
Office suites for macOS
Office suites for Windows
Online office suites
Online spreadsheets
Online word processors
Presentation software
Presentation software for macOS
Presentation software for Windows
Spreadsheet software
Spreadsheet software for macOS
Spreadsheet software for Windows
Web applications
Windows word processors
Word processors | Thinkfree Office | [
"Mathematics",
"Technology"
] | 693 | [
"Collaborative real-time editors",
"Spreadsheet software",
"Mathematical software"
] |
8,140,616 | https://en.wikipedia.org/wiki/Dvoretzky%E2%80%93Kiefer%E2%80%93Wolfowitz%20inequality | In the theory of probability and statistics, the Dvoretzky–Kiefer–Wolfowitz–Massart inequality (DKW inequality) provides a bound on the worst case distance of an empirically determined distribution function from its associated population distribution function. It is named after Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz, who in 1956 proved the inequality
$\Pr\!\left(\sup_{x\in\mathbb{R}} \bigl|F_n(x) - F(x)\bigr| > \varepsilon\right) \le C e^{-2n\varepsilon^2} \qquad \text{for every } \varepsilon > 0,$
with an unspecified multiplicative constant C in front of the exponent on the right-hand side.
In 1990, Pascal Massart proved the inequality with the sharp constant C = 2, confirming a conjecture due to Birnbaum and McCarty. In 2021, Michael Naaman proved the multivariate version of the DKW inequality and generalized Massart's tightness result to the multivariate case, which results in a sharp constant of twice the dimension k of the space in which the observations are found: C = 2k.
The DKW inequality
Given a natural number n, let X1, X2, …, Xn be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let Fn denote the associated empirical distribution function defined by
$F_n(x) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\{X_i \le x\}}, \qquad x \in \mathbb{R},$
so $F(x)$ is the probability that a single random variable $X$ is smaller than $x$, and $F_n(x)$ is the fraction of random variables that are smaller than $x$.
The Dvoretzky–Kiefer–Wolfowitz inequality bounds the probability that the random function Fn differs from F by more than a given constant ε > 0 anywhere on the real line. More precisely, there is the one-sided estimate
$\Pr\!\left(\sup_{x\in\mathbb{R}} \bigl(F_n(x) - F(x)\bigr) > \varepsilon\right) \le e^{-2n\varepsilon^2} \qquad \text{for every } \varepsilon \ge \sqrt{\tfrac{1}{2n}\ln 2},$
which also implies a two-sided estimate
$\Pr\!\left(\sup_{x\in\mathbb{R}} \bigl|F_n(x) - F(x)\bigr| > \varepsilon\right) \le 2 e^{-2n\varepsilon^2} \qquad \text{for every } \varepsilon > 0.$
This strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence as n tends to infinity. It also estimates the tail probability of the Kolmogorov–Smirnov statistic. The inequalities above follow from the case where F is the uniform distribution on [0,1], as Fn has the same distribution as Gn(F), where Gn is the empirical distribution of U1, U2, …, Un, where these are independent and Uniform(0,1), and noting that
$\sup_{x\in\mathbb{R}} \bigl|F_n(x) - F(x)\bigr| \;\stackrel{d}{=}\; \sup_{x\in\mathbb{R}} \bigl|G_n(F(x)) - F(x)\bigr| \;\le\; \sup_{0\le t\le 1} \bigl|G_n(t) - t\bigr|,$
with equality if and only if F is continuous.
Multivariate case
In the multivariate case, X1, X2, …, Xn is an i.i.d. sequence of k-dimensional vectors. If Fn is the multivariate empirical cdf, then
$\Pr\!\left(\sup_{t\in\mathbb{R}^k} \bigl|F_n(t) - F(t)\bigr| > \varepsilon\right) \le k(n+1)\, e^{-2n\varepsilon^2}$
for every ε, n, k > 0. The (n + 1) term can be replaced with a 2 for any sufficiently large n.
Kaplan–Meier estimator
The Dvoretzky–Kiefer–Wolfowitz inequality is obtained for the Kaplan–Meier estimator, which is a right-censored data analog of the empirical distribution function: an exponential bound of the same type holds
for every $\varepsilon > 0$ and for some constant $C$, where $\hat{F}_n$ is the Kaplan–Meier estimator, and $G$ is the censoring distribution function.
Building CDF bands
The Dvoretzky–Kiefer–Wolfowitz inequality is one method for generating CDF-based confidence bounds and producing a confidence band, which is sometimes called the Kolmogorov–Smirnov confidence band. The purpose of this confidence interval is to contain the entire CDF at the specified confidence level, while alternative approaches attempt to only achieve the confidence level on each individual point, which can allow for a tighter bound. The DKW bound runs parallel to, and is equally above and below, the empirical CDF. The equally spaced confidence interval around the empirical CDF allows for different rates of violations across the support of the distribution. In particular, it is more common for a CDF to be outside of the CDF bound estimated using the DKW inequality near the median of the distribution than near the endpoints of the distribution.
The interval that contains the true CDF, $F(x)$, with probability $1-\alpha$ is often specified as
$F_n(x) - \varepsilon \le F(x) \le F_n(x) + \varepsilon, \qquad \varepsilon = \sqrt{\frac{\ln\frac{2}{\alpha}}{2n}},$
which is also a special case of the asymptotic procedure for the multivariate case, whereby one uses the following critical value
$\varepsilon = \sqrt{\frac{\ln\frac{2k}{\alpha}}{2n}}$
for the multivariate test; one may replace 2k with k(n + 1) for a test that holds for all n; moreover, the multivariate test described by Naaman can be generalized to account for heterogeneity and dependence.
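A minimal Python sketch of the univariate band construction described above (assuming NumPy; function and variable names are illustrative, not from any particular library):

```python
import numpy as np

def dkw_band(sample, alpha=0.05):
    """Empirical CDF with a DKW confidence band at level 1 - alpha.

    Returns the sorted sample, the empirical CDF evaluated at those points,
    and the lower/upper band edges (clipped to [0, 1]).
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    ecdf = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW half-width
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return x, ecdf, lower, upper

# Example: 95% band for 500 standard normal draws (half-width ~0.061)
rng = np.random.default_rng(1)
x, ecdf, lo, hi = dkw_band(rng.standard_normal(500))
print(np.sqrt(np.log(2 / 0.05) / (2 * 500)))
```

The band is clipped to [0, 1] because the half-width ε is constant across the support while the CDF itself is bounded.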
See also
Concentration inequality – a summary of bounds on sets of random variables.
References
Asymptotic theory (statistics)
Statistical inequalities
Empirical process | Dvoretzky–Kiefer–Wolfowitz inequality | [
"Mathematics"
] | 911 | [
"Theorems in statistics",
"Statistical inequalities",
"Inequalities (mathematics)"
] |
8,140,829 | https://en.wikipedia.org/wiki/FASEB%20Excellence%20in%20Science%20Award | The Excellence in Science Award was established by the Federation of American Societies for Experimental Biology (FASEB) in 1989 to recognize outstanding achievement by women in biological science. All women who are members of one or more of the societies of FASEB are eligible for nomination. Nominations recognize a woman whose career achievements have contributed significantly to further our understanding of a particular discipline by excellence in research.
The award includes a $10,000 unrestricted research grant, funded by Eli Lilly and Company.
Award recipients
Source: FASEB
1989 Marian Koshland
1990 Elizabeth Hay
1991 Ellen Vitetta
1992 Bettie Sue Masters
1993 Susan Leeman
1994 Lucille Shapiro
1995 Philippa Marrack
1996 Zena Werb
1997 Claude Klee
1998 Eva Neer
1999 Helen Blau
2000 Peng Loh
2001 Laurie Glimcher
2002 Phyllis Wise
2003 Joan A. Steitz
2004 Janet Rossant
2005 Anita Roberts
2006 Marilyn Farquhar and Elaine Fuchs
2007 Frances Arnold
2008 Mina J. Bissell
2009 Susan L. Lindquist
2010 Susan S. Taylor
2011 Gail R. Martin
2012 Susan R. Wessler
2013 Terry Orr-Weaver
2014 Kathryn V. Anderson
2015 Diane Griffin
2016 Bonnie Bassler
2017 Diane Mathis
2018 Lynne E. Maquat
2019 Barbara B. Kahn
2020 :
Lifetime Achievement : Brigid Hogan
Mid-Career Investigator : Aviv Regev
Early-Career Investigator : Karen Schindler
2021:
Lifetime Achievement : M. Celeste Simon
Mid-Career Investigator : Valentina Greco
Early-Career Investigator : Cigall Kadoch
2022:
Lifetime Achievement : Arlene H. Sharpe
Mid-Career Investigator : Sallie R. Permar
Early-Career Investigator : Smita Krishnaswamy
2023:
Lifetime Achievement : Elaine S. Jaffe
Mid-Career Investigator : Paola Arlotta
Early-Career Investigator : Diana Libuda
See also
List of biology awards
References
Biology awards
Science awards honoring women
American science and technology awards
Awards established in 1989
1989 establishments in the United States | FASEB Excellence in Science Award | [
"Technology"
] | 395 | [
"Science and technology awards",
"Biology awards",
"Science awards honoring women"
] |
8,142,356 | https://en.wikipedia.org/wiki/Indoleamine%202%2C3-dioxygenase | Indoleamine-pyrrole 2,3-dioxygenase (IDO or INDO) is a heme-containing enzyme physiologically expressed in a number of tissues and cells, such as the small intestine, lungs, female genital tract or placenta. In humans, it is encoded by the IDO1 gene. IDO is involved in tryptophan metabolism. It is one of three enzymes that catalyze the first and rate-limiting step in the kynurenine pathway, the O2-dependent oxidation of L-tryptophan to N-formylkynurenine, the others being indoleamine 2,3-dioxygenase 2 (IDO2) and tryptophan 2,3-dioxygenase (TDO). IDO is an important part of the immune system and plays a part in natural defense against various pathogens. It is produced by the cells in response to inflammation and has an immunosuppressive function because of its ability to limit T-cell function and engage mechanisms of immune tolerance. Emerging evidence suggests that IDO becomes activated during tumor development, helping malignant cells escape eradication by the immune system. Expression of IDO has been described in a number of types of cancer, such as acute myeloid leukemia, ovarian cancer or colorectal cancer. IDO is part of the malignant transformation process and plays a key role in suppressing the anti-tumor immune response in the body, so inhibiting it could increase the effect of chemotherapy as well as other immunotherapeutic protocols. Furthermore, there is data implicating a role for IDO1 in the modulation of vascular tone in conditions of inflammation via a novel pathway involving singlet oxygen.
Physiological function
Indoleamine 2,3-dioxygenase is the first and rate-limiting enzyme of tryptophan catabolism through the kynurenine pathway.
IDO is an important molecule in the mechanisms of tolerance, and its physiological functions include the suppression of potentially dangerous inflammatory processes in the body. IDO also plays a role in natural defense against microorganisms. Expression of IDO is induced by interferon-gamma, which explains why the expression increases during inflammatory diseases or even during tumorigenesis. Since tryptophan is essential for the survival of pathogens, the activity of the enzyme IDO destroys them by depleting tryptophan. Microorganisms susceptible to tryptophan deficiency include bacteria of the genus Streptococcus and viruses such as herpes simplex or measles.
One of the organs with high IDO expression is the placenta. In the 1990s, the immunosuppressive function of this enzyme was first described in mice through the study of placental tryptophan metabolism. Thus, the mammalian placenta, owing to intensive tryptophan catabolism, has the ability to suppress T cell activity, contributing to its status as an immunologically privileged tissue.
Clinical significance
IDO is an immune checkpoint molecule in the sense that it is an immunomodulatory enzyme produced by alternatively activated macrophages and other immunoregulatory cells. IDO is known to suppress T and NK cells, generate Tregs and myeloid-derived suppressor cells, and also supports angiogenesis.
These mechanisms are crucial in the process of carcinogenesis. IDO allows tumor cells to escape the immune system by two main mechanisms. The first mechanism is based on tryptophan depletion from the tumor microenvironment. The second mechanism is based on the production of catabolic products called kynurenines, which are cytotoxic to T lymphocytes and NK cells. Overexpression of human IDO (hIDO) is described in a variety of human tumor cell lineages and is often associated with poor prognosis. Tumors with increased production of IDO include prostate, ovarian, lung and pancreatic cancer, as well as acute myeloid leukemia. Expression of IDO is, under physiological conditions, regulated by the Bin1 gene, which can be damaged by tumor transformation.
Emerging clinical studies suggest that combining IDO inhibitors with classical chemotherapy and radiotherapy could restore immune control and provide a therapeutic response in generally resistant tumors. The enzyme, used by tumors to escape immune surveillance, is currently a focus of research and drug discovery efforts, as well as of efforts to understand whether it could be used as a prognostic biomarker.
Inhibitors
COX-2 inhibitors down-regulate indoleamine 2,3-dioxygenase, leading to a reduction in kynurenine levels as well as reducing proinflammatory cytokine activity.
1-Methyltryptophan is a racemic compound that weakly inhibits indoleamine dioxygenase, but is also a very slow substrate. The specific stereoisomer 1-methyl-D-tryptophan (known as indoximod) is in clinical trials for various cancers.
Epacadostat (INCB24360), navoximod (GDC-0919), and linrodostat (BMS-986205) are potent inhibitors of the indoleamine 2,3-dioxygenase enzyme and are in clinical trials for various cancers.
See also
1-Methyltryptophan
Tryptophan 2,3-dioxygenase
References
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human Indoleamine 2,3-dioxygenase 1
EC 1.13.11
Immune system | Indoleamine 2,3-dioxygenase | [
"Biology"
] | 1,161 | [
"Immune system",
"Organ systems"
] |
8,142,499 | https://en.wikipedia.org/wiki/Situational%20application | In computing, a situational application is "good enough" software created for a narrow group of users with a unique set of needs. The application typically (but not always) has a short life span, and is often created within the group where it is used, sometimes by the users themselves. As the requirements of a small team using the application change, the situational application often also continues to evolve to accommodate these changes. Although situational applications are specifically designed to embrace change, significant changes in requirements may lead to an abandonment of the situational application altogether – in some cases it is just easier to develop a new one than to evolve the one in use.
Characteristics
Situational applications are developed quickly, are easy to use and uncomplicated, and serve a unique set of requirements. They have a narrow focus on a specific business problem, and they are written in such a way that if the business problem changes rapidly, so can the situational application.
This contrasts with more common enterprise applications, which are designed to address a large set of business problems, require meticulous planning, and impose a sometimes-slow and often-meticulous change process.
Origination
Clay Shirky in his essay entitled "Situated Software" described a type of software that "...is designed for use by a specific social group, rather than for a generic set of "users"." IBM later morphed the term into "situational applications".
Evolution
The successful large-scale implementation of a situational application environment in an organization requires a strategy, mindset, methodology and support structure quite different from traditional application development. This is now evolving as more companies learn how to best leverage the ideas behind situational applications. In addition, the advent of cloud-based application development and deployment platforms makes the implementation of a comprehensive situational application environment much more feasible.
Examples
A structured wiki that can host wiki applications lends itself to creation of situational applications. Some mashups can also be considered situational applications. A forms application such as a Microsoft Access Database (MDB file) can be considered a situational application.
The latest implementations of situational application environments include Longjump, Force.com and WorkXpress.
See also
End user development
Mashup (web application hybrid)
Wiki application
References
External links
Luba Cherbakov, Andy Bravery, Aroop Pandya. SOA meets situational applications, 3 part series
Situational Applications: When the situation demands faster turnaround than IT can provide
Luba Cherbakov, Andy Bravery, Aroop Pandya. Changing the corporate IT development model: Tapping the power of grassroots computing, IBM Systems Journal
Software architecture
Web development | Situational application | [
"Engineering"
] | 542 | [
"Software engineering",
"Web development"
] |
8,143,783 | https://en.wikipedia.org/wiki/New%20Technologies%20Demonstrator%20Programme | The New Technologies Demonstrator Programme is a scheme, part of Defra's Waste Implementation Programme, New Technologies Workstream, to demonstrate advanced solid waste processing technologies in England. A pot of £30 million was allocated to fund 10 demonstrator projects, with the programme headed by Dave Brooks at Defra. The scheme is not on schedule against the ambitious targets initially set out by Defra; however, 9 of the initial 10 projects are now projected to be operational by April 2009, over 2 years behind schedule.
The scheme
The scheme was initially allocated £32 million, of which £2 million was to help fund research and development into waste technology. The scheme for the distribution of the main £30 million pot commenced in 2004 and was originally split into two rounds:
ROUND 1: 5 demonstrator projects in operation by 31 December 2005
ROUND 2: 5 demonstrator projects in operation by 31 December 2006
The project had a huge response in the first round, with 71 pre-qualification questionnaire submissions being filed by interested parties. The quality of some of the initial bids was criticised by Martin Brockelhurst, Head of Waste Strategy at the Environment Agency, who remarked that some of the applications were poor and came from a "young industry".
Controversy
There have been concerns that the project is taking too long, and some participants threatened to walk out. On 11 April 2006, Defra declared that its initial timescales were ambitious and that projects were not on target. Of the 10 projects originally planned, a total of 9 have now been signed, covering gasification, in-vessel composting, anaerobic digestion and mechanical heat treatment. Against the original target dates for operational demonstrator plants outlined in the initial assessment criteria, only 2 projects are operational (as of 27 November 2006). On 24 November 2006, Dave Brooks announced that the new target for all plants to be operational is April 2009.
The projects
Operational
Greenfinch anaerobic digesters, Ludlow, Shropshire
Bioganix in-vessel composting plant, Leominster
Fairport Engineering, mechanical heat treatment, Merseyside
Energos gasification plant, Isle of Wight (currently under reconstruction until 2018)
Contracts signed
ADAS/Envar in-vessel composting plant, St Ives, Cambridgeshire
Premier Waste aerobic digestion plant, Durham
Abandoned or Cancelled
Novera gasification plant, Dagenham (Novera withdrew from the Defra scheme in 2007)
Compact Power gasification plant, Avonmouth
Yorwaste gasification plant, Seamer Carr, Scarborough
See also
Isle of Wight gasification facility
References
Bioenergy in the United Kingdom
Waste treatment technology
Waste management in the United Kingdom | New Technologies Demonstrator Programme | [
"Chemistry",
"Engineering"
] | 536 | [
"Water treatment",
"Waste treatment technology",
"Environmental engineering"
] |
8,145,410 | https://en.wikipedia.org/wiki/Charles%20Darwin | Charles Robert Darwin (12 February 1809 – 19 April 1882) was an English naturalist, geologist, and biologist, widely known for his contributions to evolutionary biology. His proposition that all species of life have descended from a common ancestor is now generally accepted and considered a fundamental scientific concept. In a joint presentation with Alfred Russel Wallace, he introduced his scientific theory that this branching pattern of evolution resulted from a process he called natural selection, in which the struggle for existence has a similar effect to the artificial selection involved in selective breeding. Darwin has been described as one of the most influential figures in human history and was honoured by burial in Westminster Abbey.
Darwin's early interest in nature led him to neglect his medical education at the University of Edinburgh; instead, he helped to investigate marine invertebrates. His studies at the University of Cambridge's Christ's College from 1828 to 1831 encouraged his passion for natural science. However, it was his five-year voyage on HMS Beagle from 1831 to 1836 that truly established Darwin as an eminent geologist. The observations and theories he developed during his voyage supported Charles Lyell's concept of gradual geological change. Publication of his journal of the voyage made Darwin famous as a popular author.
Puzzled by the geographical distribution of wildlife and fossils he collected on the voyage, Darwin began detailed investigations and, in 1838, devised his theory of natural selection. Although he discussed his ideas with several naturalists, he needed time for extensive research, and his geological work had priority. He was writing up his theory in 1858 when Alfred Russel Wallace sent him an essay that described the same idea, prompting the immediate joint submission of both their theories to the Linnean Society of London. Darwin's work established evolutionary descent with modification as the dominant scientific explanation of natural diversification. In 1871, he examined human evolution and sexual selection in The Descent of Man, and Selection in Relation to Sex, followed by The Expression of the Emotions in Man and Animals (1872). His research on plants was published in a series of books, and in his final book, The Formation of Vegetable Mould, through the Action of Worms (1881), he examined earthworms and their effect on soil.
Darwin published his theory of evolution with compelling evidence in his 1859 book On the Origin of Species. By the 1870s, the scientific community and a majority of the educated public had accepted evolution as a fact. However, many initially favoured competing explanations that gave only a minor role to natural selection, and it was not until the emergence of the modern evolutionary synthesis from the 1930s to the 1950s that a broad consensus developed in which natural selection was the basic mechanism of evolution. Darwin's scientific discovery is the unifying theory of the life sciences, explaining the diversity of life.
Biography
Early life and education
Darwin was born in Shrewsbury, Shropshire, on 12 February 1809, at his family's home, The Mount. He was the fifth of six children of wealthy society doctor and financier Robert Darwin and Susannah Darwin (née Wedgwood). His grandfathers Erasmus Darwin and Josiah Wedgwood were both prominent abolitionists. Erasmus Darwin had praised general concepts of evolution and common descent in his Zoonomia (1794), a poetic fantasy of gradual creation including undeveloped ideas anticipating concepts his grandson expanded.
Both families were largely Unitarian, though the Wedgwoods were adopting Anglicanism. Robert Darwin, a freethinker, had baby Charles baptised in November 1809 in the Anglican St Chad's Church, Shrewsbury, but Charles and his siblings attended the local Unitarian Church with their mother. The eight-year-old Charles already had a taste for natural history and collecting when he joined the day school run by its preacher in 1817. That July, his mother died. From September 1818, he joined his older brother Erasmus in attending the nearby Anglican Shrewsbury School as a boarder.
Darwin spent the summer of 1825 as an apprentice doctor, helping his father treat the poor of Shropshire, before going to the well-regarded University of Edinburgh Medical School with his brother Erasmus in October 1825. Darwin found lectures dull and surgery distressing, so he neglected his studies. He learned taxidermy in around 40 daily hour-long sessions from John Edmonstone, a freed black slave who had accompanied Charles Waterton in the South American rainforest.
In Darwin's second year at the university, he joined the Plinian Society, a student natural-history group featuring lively debates in which radical democratic students with materialistic views challenged orthodox religious concepts of science. He assisted Robert Edmond Grant's investigations of the anatomy and life cycle of marine invertebrates in the Firth of Forth, and on 27 March 1827 presented at the Plinian his own discovery that black spores found in oyster shells were the eggs of a skate leech. One day, Grant praised Lamarck's evolutionary ideas. Darwin was astonished by Grant's audacity, but had recently read similar ideas in his grandfather Erasmus' journals. Darwin was rather bored by Robert Jameson's natural-history course, which covered geology, including the debate between neptunism and plutonism. He learned the classification of plants and assisted with work on the collections of the University Museum, one of the largest museums in Europe at the time.
Darwin's neglect of medical studies annoyed his father, who sent him to Christ's College, Cambridge, in January 1828, to study for a Bachelor of Arts degree as the first step towards becoming an Anglican country parson. Darwin was unqualified for Cambridge's Tripos exams and was required instead to join the ordinary degree course. He preferred riding and shooting to studying.
During the first few months of Darwin's enrolment at Christ's College, his second cousin William Darwin Fox was still studying there. Fox impressed him with his butterfly collection, introducing Darwin to entomology and influencing him to pursue beetle collecting. He did this zealously and had some of his finds published in James Francis Stephens' Illustrations of British entomology (1829–1832).
Through Fox, Darwin became a close friend and follower of botany professor John Stevens Henslow. He met other leading parson-naturalists who saw scientific work as religious natural theology, becoming known to these dons as "the man who walks with Henslow". When his own exams drew near, Darwin applied himself to his studies and was delighted by the language and logic of William Paley's Evidences of Christianity (1795). In his final examination in January 1831, Darwin did well, coming tenth out of 178 candidates for the ordinary degree.
Darwin had to stay at Cambridge until June 1831. He studied Paley's Natural Theology or Evidences of the Existence and Attributes of the Deity (first published in 1802), which made an argument for divine design in nature, explaining adaptation as God acting through laws of nature. He read John Herschel's new book, Preliminary Discourse on the Study of Natural Philosophy (1831), which described the highest aim of natural philosophy as understanding such laws through inductive reasoning based on observation, and Alexander von Humboldt's Personal Narrative of scientific travels in 1799–1804. Inspired with "a burning zeal" to contribute, Darwin planned to visit Tenerife with some classmates after graduation to study natural history in the tropics. In preparation, he joined Adam Sedgwick's geology course, then on 4 August travelled with him to spend a fortnight mapping strata in Wales.
Survey voyage on HMS Beagle
After leaving Sedgwick in Wales, Darwin spent a few days with student friends at Barmouth. He returned home on 29 August to find a letter from Henslow proposing him as a suitable (if unfinished) naturalist for a self-funded supernumerary place on HMS Beagle with captain Robert FitzRoy, a position for a gentleman rather than "a mere collector". The ship was to leave in four weeks on an expedition to chart the coastline of South America. Robert Darwin objected to his son's planned two-year voyage, regarding it as a waste of time, but was persuaded by his brother-in-law, Josiah Wedgwood II, to agree to (and fund) his son's participation. Darwin took care to remain in a private capacity to retain control over his collection, intending it for a major scientific institution.
After delays, the voyage began on 27 December 1831; it lasted almost five years. As FitzRoy had intended, Darwin spent most of that time on land investigating geology and making natural history collections, while HMS Beagle surveyed and charted coasts. He kept careful notes of his observations and theoretical speculations. At intervals during the voyage, his specimens were sent to Cambridge together with letters including a copy of his journal for his family. He had some expertise in geology, beetle collecting and dissecting marine invertebrates, but in all other areas, was a novice and ably collected specimens for expert appraisal. Despite suffering badly from seasickness, Darwin wrote copious notes while on board the ship. Most of his zoology notes are about marine invertebrates, starting with plankton collected during a calm spell.
On their first stop ashore at St Jago in Cape Verde, Darwin found that a white band high in the volcanic rock cliffs included seashells. FitzRoy had given him the first volume of Charles Lyell's Principles of Geology, which set out uniformitarian concepts of land slowly rising or falling over immense periods, and Darwin saw things Lyell's way, theorising and thinking of writing a book on geology. When they reached Brazil, Darwin was delighted by the tropical forest, but detested the sight of slavery there, and disputed this issue with FitzRoy.
The survey continued to the south in Patagonia. They stopped at Bahía Blanca, and in cliffs near Punta Alta Darwin made a major find of fossil bones of huge extinct mammals beside modern seashells, indicating recent extinction with no signs of change in climate or catastrophe. He found bony plates like a giant version of the armour on local armadillos. From a jaw and tooth he identified the gigantic Megatherium, then from Cuvier's description thought the armour was from this animal. The finds were shipped to England, and scientists found the fossils of great interest. In Patagonia, Darwin came to wrongly believe the territory was devoid of reptiles.
On rides with gauchos into the interior to explore geology and collect more fossils, Darwin gained social, political and anthropological insights into both native and colonial people at a time of revolution, and learnt that two types of rhea had separate but overlapping territories. Further south, he saw stepped plains of shingle and seashells as raised beaches at a series of elevations. He read Lyell's second volume and accepted its view of "centres of creation" of species, but his discoveries and theorising challenged Lyell's ideas of smooth continuity and of extinction of species.
Three Fuegians on board, who had been seized during the first Beagle voyage then given Christian education in England, were returning with a missionary. Darwin found them friendly and civilised, yet at Tierra del Fuego he met "miserable, degraded savages", as different as wild from domesticated animals. He remained convinced that, despite this diversity, all humans were interrelated with a shared origin and potential for improvement towards civilisation. Unlike his scientist friends, he now thought there was no unbridgeable gap between humans and animals. A year on, the mission had been abandoned. The Fuegian they had named Jemmy Button lived like the other natives, had a wife, and had no wish to return to England.
Darwin experienced an earthquake in Chile in 1835 and saw signs that the land had just been raised, including mussel-beds stranded above high tide. High in the Andes he saw seashells and several fossil trees that had grown on a sand beach. He theorised that as the land rose, oceanic islands sank, and coral reefs round them grew to form atolls.
On the geologically new Galápagos Islands, Darwin looked for evidence attaching wildlife to an older "centre of creation", and found mockingbirds allied to those in Chile but differing from island to island. He heard that slight variations in the shape of tortoise shells showed which island they came from, but failed to collect them, even after eating tortoises taken on board as food. In Australia, the marsupial rat-kangaroo and the platypus seemed so unusual that Darwin thought it was almost as though two distinct Creators had been at work. He found the Aborigines "good-humoured & pleasant", their numbers depleted by European settlement.
FitzRoy investigated how the atolls of the Cocos (Keeling) Islands had formed, and the survey supported Darwin's theorising. FitzRoy began writing the official Narrative of the Beagle voyages, and after reading Darwin's diary, he proposed incorporating it into the account. Darwin's Journal was eventually rewritten as a separate third volume, on geology and natural history.
In Cape Town, South Africa, Darwin and FitzRoy met John Herschel, who had recently written to Lyell praising his uniformitarianism as opening bold speculation on "that mystery of mysteries, the replacement of extinct species by others" as "a natural in contradistinction to a miraculous process".
When organising his notes as the ship sailed home, Darwin wrote that, if his growing suspicions about the mockingbirds, the tortoises and the Falkland Islands fox were correct, "such facts undermine the stability of Species", then cautiously added "would" before "undermine". He later wrote that such facts "seemed to me to throw some light on the origin of species".
Without telling Darwin, extracts from his letters to Henslow had been read to scientific societies, printed as a pamphlet for private distribution among members of the Cambridge Philosophical Society, and reported in magazines, including The Athenaeum. Darwin first heard of this at Cape Town, and at Ascension Island read of Sedgwick's prediction that Darwin "will have a great name among the Naturalists of Europe".
Inception of Darwin's evolutionary theory
On 2 October 1836, Beagle anchored at Falmouth, Cornwall. Darwin promptly made the long coach journey to Shrewsbury to visit his home and see relatives. He then hurried to Cambridge to see Henslow, who advised him on finding available naturalists to catalogue Darwin's animal collections and to take on the botanical specimens. Darwin's father organised investments, enabling his son to be a self-funded gentleman scientist, and an excited Darwin went around the London institutions being fêted and seeking experts to describe the collections. British zoologists at the time had a huge backlog of work, due to natural history collecting being encouraged throughout the British Empire, and there was a danger of specimens just being left in storage.
Charles Lyell eagerly met Darwin for the first time on 29 October and soon introduced him to the up-and-coming anatomist Richard Owen, who had the facilities of the Royal College of Surgeons to work on the fossil bones collected by Darwin. Owen's surprising results included other gigantic extinct ground sloths as well as the Megatherium Darwin had identified, a near complete skeleton of the unknown Scelidotherium and a hippopotamus-sized rodent-like skull named Toxodon resembling a giant capybara. The armour fragments were actually from Glyptodon, a huge armadillo-like creature, as Darwin had initially thought. These extinct creatures were related to living species in South America.
In mid-December, Darwin took lodgings in Cambridge to arrange expert classification of his collections, and prepare his own research for publication. Questions of how to combine his diary into the Narrative were resolved at the end of the month when FitzRoy accepted Broderip's advice to make it a separate volume, and Darwin began work on his Journal and Remarks.
Darwin's first paper showed that the South American landmass was slowly rising. With Lyell's enthusiastic backing, he read it to the Geological Society of London on 4 January 1837. On the same day, he presented his mammal and bird specimens to the Zoological Society. The ornithologist John Gould soon announced that the Galápagos birds that Darwin had thought a mixture of blackbirds, "gros-beaks" and finches, were, in fact, twelve separate species of finches. On 17 February, Darwin was elected to the Council of the Geological Society, and Lyell's presidential address presented Owen's findings on Darwin's fossils, stressing geographical continuity of species as supporting his uniformitarian ideas.
Early in March, Darwin moved to London to be near this work, joining Lyell's social circle of scientists and experts such as Charles Babbage, who described God as a programmer of laws. Darwin stayed with his freethinking brother Erasmus, part of this Whig circle and a close friend of the writer Harriet Martineau, who promoted the Malthusianism that underpinned the controversial Whig Poor Law reforms to stop welfare from causing overpopulation and more poverty. As a Unitarian, she welcomed the radical implications of transmutation of species, promoted by Grant and younger surgeons influenced by Geoffroy. Transmutation was anathema to Anglicans defending social order, but reputable scientists openly discussed the subject, and there was wide interest in John Herschel's letter praising Lyell's approach as a way to find a natural cause of the origin of new species.
Gould met Darwin and told him that the Galápagos mockingbirds from different islands were separate species, not just varieties, and what Darwin had thought was a "wren" was in the finch group. Darwin had not labelled the finches by island, but from the notes of others on the ship, including FitzRoy, he allocated species to islands. The two rheas were distinct species, and on 14 March Darwin announced how their distribution changed going southwards.
By mid-March 1837, barely six months after his return to England, Darwin was speculating in his Red Notebook on the possibility that "one species does change into another" to explain the geographical distribution of living species such as the rheas, and extinct ones such as the strange extinct mammal Macrauchenia, which resembled a giant guanaco, a llama relative. Around mid-July, he recorded in his "B" notebook his thoughts on lifespan and variation across generations, explaining the variations he had observed in Galápagos tortoises, mockingbirds, and rheas. He sketched branching descent, and then a genealogical branching of a single evolutionary tree, in which "It is absurd to talk of one animal being higher than another", thereby discarding Lamarck's idea of independent lineages progressing to higher forms.
Overwork, illness, and marriage
While developing this intensive study of transmutation, Darwin became mired in more work. Still rewriting his Journal, he took on editing and publishing the expert reports on his collections, and with Henslow's help obtained a Treasury grant of £1,000 to sponsor this multi-volume Zoology of the Voyage of H.M.S. Beagle, a sum equivalent to about £115,000 in 2021. He stretched the funding to include his planned books on geology, and agreed to unrealistic dates with the publisher. As the Victorian era began, Darwin pressed on with writing his Journal, and in August 1837 began correcting printer's proofs.
As Darwin worked under pressure, his health suffered. On 20 September, he had "an uncomfortable palpitation of the heart", so his doctors urged him to "knock off all work" and live in the country for a few weeks. After visiting Shrewsbury, he joined his Wedgwood relatives at Maer Hall, Staffordshire, but found them too eager for tales of his travels to give him much rest. His charming, intelligent, and cultured cousin Emma Wedgwood, nine months older than Darwin, was nursing his invalid aunt. His uncle Josiah pointed out an area of ground where cinders had disappeared under loam and suggested that this might have been the work of earthworms, inspiring "a new & important theory" on their role in soil formation, which Darwin presented at the Geological Society on 1 November 1837. His Journal was printed and ready for publication by the end of February 1838, as was the first volume of the Narrative, but FitzRoy was still working hard to finish his own volume.
William Whewell pushed Darwin to take on the duties of Secretary of the Geological Society. After initially declining the work, he accepted the post in March 1838. Despite the grind of writing and editing the Beagle reports, Darwin made remarkable progress on transmutation, taking every opportunity to question expert naturalists and, unconventionally, people with practical experience in selective breeding such as farmers and pigeon fanciers. Over time, his research drew on information from his relatives and children, the family butler, neighbours, colonists and former shipmates. He included mankind in his speculations from the outset, and on seeing an orangutan in the zoo on 28 March 1838 noted its childlike behaviour.
The strain took a toll, and by June he was being laid up for days on end with stomach problems, headaches and heart symptoms. For the rest of his life, he was repeatedly incapacitated with episodes of stomach pains, vomiting, severe boils, palpitations, trembling and other symptoms, particularly during times of stress, such as attending meetings or making social visits. The cause of Darwin's illness remained unknown, and attempts at treatment had only ephemeral success.
On 23 June, he took a break and went "geologising" in Scotland. He visited Glen Roy in glorious weather to see the parallel "roads" cut into the hillsides at three heights. He later published his view that these were marine-raised beaches, but then had to accept that they were shorelines of a proglacial lake.
Fully recuperated, he returned to Shrewsbury in July 1838. Used to jotting down daily notes on animal breeding, he scrawled rambling thoughts about marriage, career and prospects on two scraps of paper, one with columns headed "Marry" and "Not Marry". Advantages under "Marry" included "constant companion and a friend in old age ... better than a dog anyhow", against points such as "less money for books" and "terrible loss of time". Having decided in favour of marriage, he discussed it with his father, then went to visit his cousin Emma on 29 July. At this time he did not get around to proposing, but against his father's advice, he mentioned his ideas on transmutation.
He married Emma on 29 January 1839 and they were the parents of ten children, seven of whom survived to adulthood.
Malthus and natural selection
Continuing his research in London, Darwin's wide reading now included the sixth edition of Malthus's An Essay on the Principle of Population. On 28 September 1838, he noted its assertion that human "population, when unchecked, goes on doubling itself every twenty-five years, or increases in a geometrical ratio", a geometric progression so that population soon exceeds food supply in what is known as a Malthusian catastrophe. Darwin was well-prepared to compare this to Augustin de Candolle's "warring of the species" of plants and the struggle for existence among wildlife, explaining how numbers of a species kept roughly stable. As species always breed beyond available resources, favourable variations would make organisms better at surviving and passing the variations on to their offspring, while unfavourable variations would be lost. He wrote that the "final cause of all this wedging, must be to sort out proper structure, & adapt it to changes", so that "One may say there is a force like a hundred thousand wedges trying force into every kind of adapted structure into the gaps of in the economy of nature, or rather forming gaps by thrusting out weaker ones." This would result in the formation of new species. As he later wrote in his Autobiography:
By mid-December, Darwin saw a similarity between farmers picking the best stock in selective breeding, and a Malthusian Nature selecting from chance variants so that "every part of newly acquired structure is fully practical and perfected", thinking this comparison "a beautiful part of my theory". He later called his theory natural selection, an analogy with what he termed the "artificial selection" of selective breeding.
On 11 November, he returned to Maer and proposed to Emma, once more telling her his ideas. She accepted, then in exchanges of loving letters showed how she valued his openness in sharing their differences, while expressing her strong Unitarian beliefs and concerns that his honest doubts might separate them in the afterlife. While he was house-hunting in London, bouts of illness continued and Emma wrote urging him to get some rest, almost prophetically remarking "So don't be ill any more my dear Charley till I can be with you to nurse you." He found what they called "Macaw Cottage" (because of its gaudy interiors) in Gower Street, then moved his "museum" in over Christmas. On 24 January 1839, Darwin was elected a Fellow of the Royal Society (FRS).
On 29 January, Darwin and Emma Wedgwood were married at Maer in an Anglican ceremony arranged to suit the Unitarians, then immediately caught the train to London and their new home.
Geology books, barnacles, evolutionary research
Darwin now had the framework of his theory of natural selection "by which to work", as his "prime hobby". His research included extensive experimental selective breeding of plants and animals, finding evidence that species were not fixed and investigating many detailed ideas to refine and substantiate his theory. For fifteen years this work was in the background to his main occupation of writing on geology and publishing expert reports on the Beagle collections, in particular, the barnacles.
The impetus of Darwin's barnacle research came from a collection of a barnacle colony from Chile in 1835, which he dubbed Mr. Arthrobalanus. His confusion over the relationship of this species (Cryptophialus minutus) to other barnacles caused him to fixate on the systematics of the taxa. He wrote his first examination of the species in 1846 but did not formally describe it until 1854.
FitzRoy's long-delayed Narrative was published in May 1839. Darwin's Journal and Remarks got good reviews as the third volume, and on 15 August it was published on its own. Early in 1842, Darwin wrote about his ideas to Charles Lyell, who noted that his ally "denies seeing a beginning to each crop of species".
Darwin's book The Structure and Distribution of Coral Reefs on his theory of atoll formation was published in May 1842 after more than three years of work, and he then wrote his first "pencil sketch" of his theory of natural selection. To escape the pressures of London, the family moved to rural Down House in Kent in September. On 11 January 1844, Darwin mentioned his theorising to the botanist Joseph Dalton Hooker, writing with melodramatic humour "it is like confessing a murder". Hooker replied, "There may, in my opinion, have been a series of productions on different spots, & also a gradual change of species. I shall be delighted to hear how you think that this change may have taken place, as no presently conceived opinions satisfy me on the subject."
By July, Darwin had expanded his "sketch" into a 230-page "Essay", to be expanded with his research results if he died prematurely. In November, the anonymously published sensational best-seller Vestiges of the Natural History of Creation brought wide interest in transmutation. Darwin scorned its amateurish geology and zoology, but carefully reviewed his own arguments. Controversy erupted, and it continued to sell well despite contemptuous dismissal by scientists.
Darwin completed his third geological book in 1846. He now renewed a fascination and expertise in marine invertebrates, dating back to his student days with Grant, by dissecting and classifying the barnacles he had collected on the voyage, enjoying observing beautiful structures and thinking about comparisons with allied structures. In 1847, Hooker read the "Essay" and sent notes that provided Darwin with the calm critical feedback that he needed, but would not commit himself and questioned Darwin's opposition to continuing acts of creation.
In an attempt to improve his chronic ill health, Darwin went in 1849 to Dr. James Gully's Malvern spa and was surprised to find some benefit from hydrotherapy. Then, in 1851, his treasured daughter Annie fell ill, reawakening his fears that his illness might be hereditary. She died the same year after a long series of crises.
In eight years of work on barnacles, Darwin's theory helped him to find "homologies" showing that slightly changed body parts served different functions to meet new conditions, and in some genera he found minute males parasitic on hermaphrodites, showing an intermediate stage in evolution of distinct sexes. In 1853, it earned him the Royal Society's Royal Medal, and it made his reputation as a biologist. Upon the conclusion of his research, Darwin declared "I hate a barnacle as no man ever did before." In 1854, he became a Fellow of the Linnean Society of London, gaining postal access to its library. He began a major reassessment of his theory of species, and in November realised that divergence in the character of descendants could be explained by them becoming adapted to "diversified places in the economy of nature".
Publication of the theory of natural selection
By the start of 1856, Darwin was investigating whether eggs and seeds could survive travel across seawater to spread species across oceans. Hooker increasingly doubted the traditional view that species were fixed, but their young friend Thomas Henry Huxley was still firmly against the transmutation of species. Lyell was intrigued by Darwin's speculations without realising their extent. When he read a paper by Alfred Russel Wallace, "On the Law which has Regulated the Introduction of New Species", he saw similarities with Darwin's thoughts and urged him to publish to establish precedence.
Though Darwin saw no threat, on 14 May 1856 he began writing a short paper. Finding answers to difficult questions held him up repeatedly, and he expanded his plans to a "big book on species" titled Natural Selection, which was to include his "note on Man". He continued his research, obtaining information and specimens from naturalists worldwide, including Wallace who was working in Borneo.
In mid-1857, he added a section heading, "Theory applied to Races of Man", but did not add text on this topic. On 5 September 1857, Darwin sent the American botanist Asa Gray a detailed outline of his ideas, including an abstract of Natural Selection, which omitted human origins and sexual selection. In December, Darwin received a letter from Wallace asking if the book would examine human origins. He responded that he would avoid that subject, "so surrounded with prejudices", while encouraging Wallace's theorising and adding that "I go much further than you."
Darwin's book was only partly written when, on 18 June 1858, he received a paper from Wallace describing natural selection. Shocked that he had been "forestalled", Darwin sent it on that day to Lyell, as requested by Wallace, and although Wallace had not asked for publication, Darwin suggested he would send it to any journal that Wallace chose. His family was in crisis, with children in the village dying of scarlet fever, and he put matters in the hands of his friends. After some discussion, with no reliable way of involving Wallace, Lyell and Hooker decided on a joint presentation at the Linnean Society on 1 July of On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection. On the evening of 28 June, Darwin's baby son died of scarlet fever after almost a week of severe illness, and he was too distraught to attend.
There was little immediate attention to this announcement of the theory; the president of the Linnean Society remarked in May 1859 that the year had not been marked by any revolutionary discoveries. Only one review rankled enough for Darwin to recall it later; Professor Samuel Haughton of Dublin claimed that "all that was new in them was false, and what was true was old". Darwin struggled for thirteen months to produce an abstract of his "big book", suffering from ill health but getting constant encouragement from his scientific friends. Lyell arranged to have it published by John Murray.
On the Origin of Species proved unexpectedly popular, with the entire stock of 1,250 copies oversubscribed when it went on sale to booksellers on 22 November 1859. In the book, Darwin set out "one long argument" of detailed observations, inferences and consideration of anticipated objections. In making the case for common descent, he included evidence of homologies between humans and other mammals. Having outlined sexual selection, he hinted that it could explain differences between human races. He avoided explicit discussion of human origins, but implied the significance of his work with the sentence; "Light will be thrown on the origin of man and his history." His theory is simply stated in the introduction:
At the end of the book, he concluded that:
The last word was the only variant of "evolved" in the first five editions of the book. "Evolutionism" at that time was associated with other concepts, most commonly with embryological development. Darwin first used the word evolution in The Descent of Man in 1871, before adding it in 1872 to the 6th edition of The Origin of Species.
Responses to publication
The book aroused international interest, with less controversy than had greeted the popular and less scientific Vestiges of the Natural History of Creation. Though Darwin's illness kept him away from the public debates, he eagerly scrutinised the scientific response, commenting on press cuttings, reviews, articles, satires and caricatures, and corresponded on it with colleagues worldwide. The book did not explicitly discuss human origins, but included a number of hints about the animal ancestry of humans from which the inference could be made.
The first review asked, "If a monkey has become a man – what may not a man become?" It said this should be left to theologians as being too dangerous for ordinary readers. Among early favourable responses, Huxley's reviews swiped at Richard Owen, leader of the scientific establishment which Huxley was trying to overthrow.
In April, Owen's review attacked Darwin's friends and condescendingly dismissed his ideas, angering Darwin, but Owen and others began to promote ideas of supernaturally guided evolution. Patrick Matthew drew attention to his 1831 book which had a brief appendix suggesting a concept of natural selection leading to new species, but he had not developed the idea.
The Church of England's response was mixed. Darwin's old Cambridge tutors Sedgwick and Henslow dismissed the ideas, but liberal clergymen interpreted natural selection as an instrument of God's design, with the cleric Charles Kingsley seeing it as "just as noble a conception of Deity". In 1860, the publication of Essays and Reviews by seven liberal Anglican theologians diverted clerical attention from Darwin. Its ideas, including higher criticism, were attacked by church authorities as heresy. In it, Baden Powell argued that miracles broke God's laws, so belief in them was atheistic, and praised "Mr Darwin's masterly volume [supporting] the grand principle of the self-evolving powers of nature".
Asa Gray discussed teleology with Darwin, who imported and distributed Gray's pamphlet on theistic evolution, Natural Selection is not inconsistent with natural theology. The most famous confrontation was at the public 1860 Oxford evolution debate during a meeting of the British Association for the Advancement of Science, where the Bishop of Oxford Samuel Wilberforce, though not opposed to transmutation of species, argued against Darwin's explanation and human descent from apes. Joseph Hooker argued strongly for Darwin, and Thomas Huxley's legendary retort, that he would rather be descended from an ape than a man who misused his gifts, came to symbolise a triumph of science over religion.
Even Darwin's close friends Gray, Hooker, Huxley and Lyell still expressed various reservations but gave strong support, as did many others, particularly younger naturalists. Gray and Lyell sought reconciliation with faith, while Huxley portrayed a polarisation between religion and science. He campaigned pugnaciously against the authority of the clergy in education, aiming to overturn the dominance of clergymen and aristocratic amateurs under Owen in favour of a new generation of professional scientists. Owen's claim that brain anatomy proved humans to be a separate biological order from apes was shown to be false by Huxley in a long-running dispute parodied by Kingsley as the "Great Hippocampus Question", and discredited Owen.
In response to objections that the origin of life was unexplained, Darwin pointed to acceptance of Newton's law even though the cause of gravity was unknown. Despite criticisms and reservations related to this topic, he nevertheless proposed a prescient idea in an 1871 letter to Hooker in which he suggested the origin of life may have occurred in a "warm little pond".
Darwinism became a movement covering a wide range of evolutionary ideas. In 1863, Lyell's Geological Evidences of the Antiquity of Man popularised prehistory, though his caution on evolution disappointed Darwin. Weeks later Huxley's Evidence as to Man's Place in Nature showed that anatomically, humans are apes, then The Naturalist on the River Amazons by Henry Walter Bates provided empirical evidence of natural selection. Lobbying brought Darwin Britain's highest scientific honour, the Royal Society's Copley Medal, awarded on 3 November 1864. That day, Huxley held the first meeting of what became the influential "X Club" devoted to "science, pure and free, untrammelled by religious dogmas". By the end of the decade, most scientists agreed that evolution occurred, but only a minority supported Darwin's view that the chief mechanism was natural selection.
The Origin of Species was translated into many languages, becoming a staple scientific text attracting thoughtful attention from all walks of life, including the "working men" who flocked to Huxley's lectures. Darwin's theory resonated with various movements at the time and became a key fixture of popular culture. Cartoonists parodied animal ancestry in an old tradition of showing humans with animal traits, and in Britain, these droll images served to popularise Darwin's theory in an unthreatening way. While ill in 1862, Darwin began growing a beard, and when he reappeared in public in 1866, caricatures of him as an ape helped to identify all forms of evolutionism with Darwinism.
Othniel C. Marsh, America's first palaeontologist, was the first to provide solid fossil evidence to support Darwin's theory of evolution by unearthing the ancestors of the modern horse. In 1877, Marsh delivered a very influential speech before the annual meeting of the American Association for the Advancement of Science, providing a demonstrative argument for evolution. For the first time, Marsh traced the evolution of vertebrates from fish all the way through humans. Sparing no detail, he listed a wealth of fossil examples of past life forms. The significance of this speech was immediately recognized by the scientific community, and it was printed in its entirety in several scientific journals.
Descent of Man, sexual selection, and botany
Despite repeated bouts of illness during the last twenty-two years of his life, Darwin's work continued. Having published On the Origin of Species as an abstract of his theory, he pressed on with experiments, research, and writing of his "big book". He covered human descent from earlier animals, including the evolution of society and of mental abilities, as well as explaining decorative beauty in wildlife and diversifying into innovative plant studies.
Enquiries about insect pollination led in 1861 to novel studies of wild orchids, showing adaptation of their flowers to attract specific moths to each species and ensure cross fertilisation. In 1862 Fertilisation of Orchids gave his first detailed demonstration of the power of natural selection to explain complex ecological relationships, making testable predictions. As his health declined, he lay on his sickbed in a room filled with inventive experiments to trace the movements of climbing plants. Admiring visitors included Ernst Haeckel, a zealous proponent of Darwinism incorporating Lamarckism and Goethe's idealism. Wallace remained supportive, though he increasingly turned to Spiritualism.
Darwin's book The Variation of Animals and Plants Under Domestication (1868) was the first part of his planned "big book", and included his unsuccessful hypothesis of pangenesis attempting to explain heredity. It sold briskly at first, despite its size, and was translated into many languages. He wrote most of a second part, on natural selection, but it remained unpublished in his lifetime.
Lyell had already popularised human prehistory, and Huxley had shown that anatomically humans are apes. With The Descent of Man, and Selection in Relation to Sex published in 1871, Darwin set out evidence from numerous sources that humans are animals, showing continuity of physical and mental attributes, and presented sexual selection to explain impractical animal features such as the peacock's plumage as well as human evolution of culture, differences between sexes, and physical and cultural racial classification, while emphasising that humans are all one species. According to an editorial in Nature journal: "Although Charles Darwin opposed slavery and proposed that humans have a common ancestor, he also advocated a hierarchy of races, with white people higher than others."
His research using images was expanded in his 1872 book The Expression of the Emotions in Man and Animals, one of the first books to feature printed photographs, which discussed the evolution of human psychology and its continuity with the behaviour of animals. Both books proved very popular, and Darwin was impressed by the general assent with which his views had been received, remarking that "everybody is talking about it without being shocked." His conclusion was "that man with all his noble qualities, with sympathy which feels for the most debased, with benevolence which extends not only to other men but to the humblest living creature, with his god-like intellect which has penetrated into the movements and constitution of the solar system – with all these exalted powers – Man still bears in his bodily frame the indelible stamp of his lowly origin."
His evolution-related experiments and investigations led to books on Insectivorous Plants, The Effects of Cross and Self Fertilisation in the Vegetable Kingdom, different forms of flowers on plants of the same species, and The Power of Movement in Plants. He continued to collect information and exchange views from scientific correspondents all over the world, including Mary Treat, whom he encouraged to persevere in her scientific work. He was the first person to recognise the significance of carnivory in plants. His botanical work was interpreted and popularised by various writers including Grant Allen and H. G. Wells, and helped transform plant science in the late 19th century and early 20th century.
Death and funeral
In 1882, he was diagnosed with what was called "angina pectoris" which then meant coronary thrombosis and disease of the heart. At the time of his death, the physicians diagnosed "anginal attacks", and "heart-failure"; there has since been scholarly speculation about his life-long health issues.
He died at Down House on 19 April 1882. His last words were to his family, telling Emma, "I am not the least afraid of death – Remember what a good wife you have been to me – Tell all my children to remember how good they have been to me". While she rested, he repeatedly told Henrietta and Francis, "It's almost worthwhile to be sick to be nursed by you".
He had expected to be buried in St Mary's churchyard at Downe, but at the request of Darwin's colleagues, after public and parliamentary petitioning, William Spottiswoode (President of the Royal Society) arranged for Darwin to be honoured by burial in Westminster Abbey, close to John Herschel and Isaac Newton. The funeral, held on Wednesday 26 April, was attended by thousands of people, including family, friends, scientists, philosophers and dignitaries.
Children
The Darwins had ten children: two died in infancy, and Annie's death at the age of ten had a devastating effect on her parents. Charles was a devoted father and uncommonly attentive to his children. Whenever they fell ill, he feared that they might have inherited weaknesses from inbreeding due to the close family ties he shared with his wife and cousin, Emma Wedgwood. He examined inbreeding in his writings, contrasting it with the advantages of outcrossing in many species.
Charles Waring Darwin, born in December 1856, was the tenth and last of the children. Emma Darwin was aged 48 at the time of the birth, and the child was mentally subnormal and never learnt to walk or talk. He probably had Down syndrome, which had not then been medically described. The evidence is a photograph by William Erasmus Darwin of the infant and his mother, showing a characteristic head shape, and the family's observations of the child. Charles Waring died of scarlet fever on 28 June 1858, when Darwin wrote in his journal: "Poor dear Baby died."
Of his surviving children, George, Francis and Horace became Fellows of the Royal Society, distinguished as an astronomer, botanist and civil engineer, respectively. All three were knighted. Another son, Leonard, went on to be a soldier, politician, economist, eugenicist, and mentor of the statistician and evolutionary biologist Ronald Fisher.
Views and opinions
Religious views
Darwin's family tradition was nonconformist Unitarianism, while his father and grandfather were freethinkers, and his baptism and boarding school were Church of England. When going to Cambridge to become an Anglican clergyman, he did not "in the least doubt the strict and literal truth of every word in the Bible". He learned John Herschel's science which, like William Paley's natural theology, sought explanations in laws of nature rather than miracles and saw adaptation of species as evidence of design. On board HMS Beagle, Darwin was quite orthodox and would quote the Bible as an authority on morality. He looked for "centres of creation" to explain distribution, and suggested that the very similar antlions found in Australia and England were evidence of a divine hand.
Upon his return, he expressed a critical view of the Bible's historical accuracy and questioned the basis for considering one religion more valid than another. In the next few years, while intensively speculating on geology and the transmutation of species, he gave much thought to religion and openly discussed this with his wife Emma, whose beliefs similarly came from intensive study and questioning.
The theodicy of Paley and Thomas Malthus vindicated evils such as starvation as a result of a benevolent creator's laws, which had an overall good effect. To Darwin, natural selection produced the good of adaptation but removed the need for design, and he could not see the work of an omnipotent deity in all the pain and suffering, such as the ichneumon wasp paralysing caterpillars as live food for its eggs. Though he thought of religion as a tribal survival strategy, Darwin was reluctant to give up the idea of God as an ultimate lawgiver. He was increasingly troubled by the problem of evil.
Darwin remained close friends with the vicar of Downe, John Brodie Innes, and continued to play a leading part in the parish work of the church, but in later years would go for a walk on Sundays while his family attended church. He considered it "absurd to doubt that a man might be an ardent theist and an evolutionist" and, though reticent about his religious views, in 1879 he wrote that "I have never been an atheist in the sense of denying the existence of a God. – I think that generally ... an agnostic would be the most correct description of my state of mind".
The "Lady Hope Story", published in 1915, claimed that Darwin had reverted to Christianity on his sickbed. The claims were repudiated by Darwin's children and have been dismissed as false by historians.
Human society
Darwin's views on social and political issues reflected his time and social position. He grew up in a family of Whig reformers who, like his uncle Josiah Wedgwood, supported electoral reform and the emancipation of slaves. Darwin was passionately opposed to slavery, while seeing no problem with the working conditions of English factory workers or servants.
Taking taxidermy lessons in 1826 from the freed slave John Edmonstone, whom Darwin long recalled as "a very pleasant and intelligent man", reinforced his belief that black people shared the same feelings, and could be as intelligent as people of other races. He took the same attitude to native people he met on the Beagle voyage. Such views were commonplace in Britain at the time, and Silliman and Bachman noticed the contrast with slave-owning America. Around twenty years later, racism became a feature of British society, but Darwin remained strongly against slavery, against "ranking the so-called races of man as distinct species", and against ill-treatment of native people.
Darwin's interaction with Yaghans (Fuegians) such as Jemmy Button during the second voyage of HMS Beagle had a profound impact on his view of indigenous peoples. At his arrival in Tierra del Fuego he made a colourful description of "Fuegian savages". This view changed as he came to know Yaghan people more in detail. By studying the Yaghans, Darwin concluded that a number of basic emotions by different human groups were the same and that mental capabilities were roughly the same as for Europeans. While interested in Yaghan culture, Darwin failed to appreciate their deep ecological knowledge and elaborate cosmology until the 1850s when he inspected a dictionary of Yaghan detailing 32,000 words. He saw that European colonisation would often lead to the extinction of native civilisations, and "tr[ied] to integrate colonialism into an evolutionary history of civilization analogous to natural history".
Darwin's view of women was that men's eminence over them was the outcome of sexual selection, a view disputed by Antoinette Brown Blackwell in her 1875 book The Sexes Throughout Nature.
Darwin was intrigued by his half-cousin Francis Galton's argument, introduced in 1865, that statistical analysis of heredity showed that moral and mental human traits could be inherited, and principles of animal breeding could apply to humans. In The Descent of Man, Darwin noted that aiding the weak to survive and have families could lose the benefits of natural selection, but cautioned that withholding such aid would endanger the instinct of sympathy, "the noblest part of our nature", and factors such as education could be more important. When Galton suggested that publishing research could encourage intermarriage within a "caste" of "those who are naturally gifted", Darwin foresaw practical difficulties and thought it "the sole feasible, yet I fear utopian, plan of procedure in improving the human race", preferring to simply publicise the importance of inheritance and leave decisions to individuals. Francis Galton named this field of study "eugenics" in 1883, after Darwin's death, and his theories were cited to promote eugenic policies.
Evolutionary social movements
Darwin's fame and popularity led to his name being associated with ideas and movements that, at times, had only an indirect relation to his writings, and sometimes went directly against his express comments.
Thomas Malthus had argued that population growth beyond resources was ordained by God to get humans to work productively and show restraint in having families; this was used in the 1830s to justify workhouses and laissez-faire economics. Evolution was by then seen as having social implications, and Herbert Spencer's 1851 book Social Statics based ideas of human freedom and individual liberties on his Lamarckian evolutionary theory.
Soon after the Origin was published in 1859, critics derided his description of a struggle for existence as a Malthusian justification for the English industrial capitalism of the time. The term Darwinism was used for the evolutionary ideas of others, including Spencer's "survival of the fittest" as free-market progress, and Ernst Haeckel's polygenistic ideas of human development. Writers used natural selection to argue for various, often contradictory, ideologies such as laissez-faire dog-eat-dog capitalism, colonialism and imperialism. However, Darwin's holistic view of nature included "dependence of one being on another"; thus pacifists, socialists, liberal social reformers and anarchists such as Peter Kropotkin stressed the value of cooperation over struggle within a species. Darwin himself insisted that social policy should not simply be guided by concepts of struggle and selection in nature.
After the 1880s, a eugenics movement developed on ideas of biological inheritance, and for scientific justification of its ideas appealed to some concepts of Darwinism. In Britain, most shared Darwin's cautious views on voluntary improvement and sought to encourage those with good traits in "positive eugenics". During the "Eclipse of Darwinism", a scientific foundation for eugenics was provided by Mendelian genetics. Negative eugenics measures to remove the "feebleminded" were popular in America, Canada and Australia, and eugenics in the United States introduced compulsory sterilisation laws, followed by several other countries. Subsequently, Nazi eugenics brought the field into disrepute.
The term "Social Darwinism" was used infrequently from around the 1890s, but became popular as a derogatory term in the 1940s when used by Richard Hofstadter to attack the laissez-faire conservatism of those like William Graham Sumner who opposed reform and socialism. Since then, it has been used as a term of abuse by those opposed to what they think are the moral consequences of evolution.
Works
Darwin was a prolific writer. Even without the publication of his works on evolution, he would have had a considerable reputation as the author of The Voyage of the Beagle, as a geologist who had published extensively on South America and had solved the puzzle of the formation of coral atolls, and as a biologist who had published the definitive work on barnacles. While On the Origin of Species dominates perceptions of his work, The Descent of Man and The Expression of the Emotions in Man and Animals had considerable impact, and his books on plants including The Power of Movement in Plants were innovative studies of great importance, as was his final work on The Formation of Vegetable Mould through the Action of Worms.
Legacy and commemoration
As Alfred Russel Wallace put it, Darwin had "wrought a greater revolution in human thought within a quarter of a century than any man of our time – or perhaps any time", having "given us a new conception of the world of life, and a theory which is itself a powerful instrument of research; has shown us how to combine into one consistent whole the facts accumulated by all the separate classes of workers, and has thereby revolutionised the whole study of nature". The paleoanthropologist Trenton Holliday states that "Darwin is rightly considered to be the preeminent evolutionary scientist of all time".
By around 1880, most scientists were convinced of evolution as descent with modification, though few agreed with Darwin that natural selection "has been the main but not the exclusive means of modification". During "the eclipse of Darwinism" scientists explored alternative mechanisms. Then Ronald Fisher incorporated Mendelian genetics in The Genetical Theory of Natural Selection, leading to population genetics and the modern evolutionary synthesis, which continues to develop. Scientific discoveries have confirmed and validated Darwin's key insights.
Geographical features given his name include Darwin Sound and Mount Darwin, both named while he was on the Beagle voyage, and Darwin Harbour, named by his former shipmates on its next voyage, which eventually became the location of Darwin, the capital city of Australia's Northern Territory. Darwin's name was given, formally or informally, to numerous plants and animals, including many he had collected on the voyage. The Linnean Society of London began awards of the Darwin–Wallace Medal in 1908, to mark fifty years from the joint reading on 1 July 1858 of papers by Darwin and Wallace publishing their theory. Further awards were made in 1958 and 2008; since 2010, the awards have been annual. Darwin College, a postgraduate college at Cambridge University founded in 1964, is named after the Darwin family. From 2000 to 2017, UK £10 banknotes issued by the Bank of England featured Darwin's portrait printed on the reverse, along with a hummingbird and HMS Beagle.
See also
1991 Darwin
Creation (biographical drama film)
Creation–evolution controversy
European and American voyages of scientific exploration
History of biology
History of evolutionary thought
List of coupled cousins
List of multiple discoveries
Multiple discovery
Portraits of Charles Darwin
Tinamou egg
Universal Darwinism
Notes
. Robert FitzRoy was to become known after the voyage for biblical literalism, but at this time he had considerable interest in Lyell's ideas, and they met before the voyage when Lyell asked for observations to be made in South America. FitzRoy's diary during the ascent of the River Santa Cruz in Patagonia recorded his opinion that the plains were raised beaches, but on return, newly married to a very religious lady, he recanted these ideas.
. In the section "Morphology" of Chapter XIII of On the Origin of Species, Darwin commented on homologous bone patterns between humans and other mammals, writing: "What can be more curious than that the hand of a man, formed for grasping, that of a mole for digging, the leg of the horse, the paddle of the porpoise, and the wing of the bat, should all be constructed on the same pattern, and should include the same bones, in the same relative positions?" and in the concluding chapter: "The framework of bones being the same in the hand of a man, wing of a bat, fin of the porpoise, and leg of the horse … at once explain themselves on the theory of descent with slow and slight successive modifications."
. In On the Origin of Species Darwin mentioned human origins in his concluding remark that "In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history."
In "Chapter VI: Difficulties on Theory" he referred to sexual selection: "I might have adduced for this same purpose the differences between the races of man, which are so strongly marked; I may add that some little light can apparently be thrown on the origin of these differences, chiefly through sexual selection of a particular kind, but without here entering on copious details my reasoning would appear frivolous."
In The Descent of Man of 1871, Darwin discussed the first passage:
"During many years I collected notes on the origin or descent of man, without any intention of publishing on the subject, but rather with the determination not to publish, as I thought that I should thus only add to the prejudices against my views. It seemed to me sufficient to indicate, in the first edition of my 'Origin of Species,' that by this work 'light would be thrown on the origin of man and his history;' and this implies that man must be included with other organic beings in any general conclusion respecting his manner of appearance on this earth." In a preface to the 1874 second edition, he added a reference to the second point: "it has been said by several critics, that when I found that many details of structure in man could not be explained through natural selection, I invented sexual selection; I gave, however, a tolerably clear sketch of this principle in the first edition of the 'Origin of Species,' and I there stated that it was applicable to man."
. See, for example, WILLA volume 4, Charlotte Perkins Gilman and the Feminization of Education by Deborah M. De Simone: "Gilman shared many basic educational ideas with the generation of thinkers who matured during the period of "intellectual chaos" caused by Darwin's Origin of the Species. Marked by the belief that individuals can direct human and social evolution, many progressives came to view education as the panacea for advancing social progress and for solving such problems as urbanisation, poverty, or immigration."
. See, for example, the song "A lady fair of lineage high" from Gilbert and Sullivan's Princess Ida, which describes the descent of man (but not woman!) from apes.
. Darwin's belief that black people had the same essential humanity as Europeans, and had many mental similarities, was reinforced by the lessons he had from John Edmonstone in 1826. Early in the Beagle voyage, Darwin nearly lost his position on the ship when he criticised FitzRoy's defence and praise of slavery. He wrote home about "how steadily the general feeling, as shown at elections, has been rising against Slavery. What a proud thing for England if she is the first European nation which utterly abolishes it! I was told before leaving England that after living in slave countries all my opinions would be altered; the only alteration I am aware of is forming a much higher estimate of the negro character." Regarding Fuegians, he "could not have believed how wide was the difference between savage and civilized man: it is greater than between a wild and domesticated animal, inasmuch as in man there is a greater power of improvement", but he knew and liked civilised Fuegians like Jemmy Button: "It seems yet wonderful to me, when I think over all his many good qualities, that he should have been of the same race, and doubtless partaken of the same character, with the miserable, degraded savages whom we first met here."
In the Descent of Man, he mentioned the similarity of Fuegians' and Edmonstone's minds to Europeans' when arguing against "ranking the so-called races of man as distinct species".
He rejected the ill-treatment of native people, and for example wrote of massacres of Patagonian men, women, and children, "Every one here is fully convinced that this is the most just war, because it is against barbarians. Who would believe in this age that such atrocities could be committed in a Christian civilized country?"
. Geneticists studied human heredity as Mendelian inheritance, while eugenics movements sought to manage society, with a focus on social class in the United Kingdom, and on disability and ethnicity in the United States, leading to geneticists seeing this movement as impractical pseudoscience. A shift from voluntary arrangements to "negative" eugenics included compulsory sterilisation laws in the United States, copied by Nazi Germany as the basis for Nazi eugenics based on virulent racism and "racial hygiene".
. David Quammen writes of his "theory that [Darwin] turned to these arcane botanical studies – producing more than one book that was solidly empirical, discreetly evolutionary, yet a 'horrid bore' – at least partly so that the clamorous controversialists, fighting about apes and angels and souls, would leave him... alone". David Quammen, "The Brilliant Plodder" (review of Ken Thompson, Darwin's Most Wonderful Plants: A Tour of His Botanical Legacy, University of Chicago Press, 255 pp.; Elizabeth Hennessy, On the Backs of Tortoises: Darwin, the Galápagos, and the Fate of an Evolutionary Eden, Yale University Press, 310 pp.; Bill Jenkins, Evolution Before Darwin: Theories of the Transmutation of Species in Edinburgh, 1804–1834, Edinburgh University Press, 222 pp.), The New York Review of Books, vol. LXVII, no. 7 (23 April 2020), pp. 22–24. Quammen, quoted from p. 24 of his review.
Citations
Bibliography
External links
The Complete Works of Charles Darwin Online – Darwin Online; Darwin's publications, private papers and bibliography, supplementary works including biographies, obituaries and reviews
Darwin Correspondence Project Full text and notes for complete correspondence to 1867, with summaries of all the rest, and pages of commentary
Darwin Manuscript Project
View books owned and annotated by Charles Darwin at the online Biodiversity Heritage Library.
Digitised Darwin Manuscripts in Cambridge Digital Library
Charles Darwin in the British horticultural press – Occasional Papers from RHS Lindley Library, volume 3 July 2010
Scientific American, 29 April 1882, pp. 256, Obituary of Charles Darwin
1809 births
1882 deaths
19th-century British biologists
19th-century English writers
19th-century Anglicans
19th-century English naturalists
19th-century British geologists
Alumni of Christ's College, Cambridge
Alumni of the University of Edinburgh
Botanists with author abbreviations
British carcinologists
Burials at Westminster Abbey
Circumnavigators of the globe
Coleopterists
Darwin–Wedgwood family
Deaths from coronary thrombosis
English abolitionists
English agnostics
English Anglicans
English entomologists
English geologists
English justices of the peace
English sceptics
English travel writers
Ethologists
British evolutionary biologists
Fellows of the Linnean Society of London
Fellows of the Royal Entomological Society
Fellows of the Royal Geographical Society
Fellows of the Royal Society
Fellows of the Zoological Society of London
Human evolution theorists
Human evolution
Independent scientists
Members of the American Philosophical Society
Members of the Lincean Academy
Members of the Royal Academy of Belgium
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the Royal Swedish Academy of Sciences
People educated at Shrewsbury School
Scientists from Shrewsbury
Recipients of the Copley Medal
Recipients of the Pour le Mérite (civil class)
Royal Medal winners
Theoretical biologists
Wollaston Medal winners | Charles Darwin | [
"Biology"
] | 13,590 | [
"Behavior",
"Ethologists",
"Bioinformatics",
"Theoretical biologists",
"Ethology"
] |
12,795,419 | https://en.wikipedia.org/wiki/Laplace%20principle%20%28large%20deviations%20theory%29 | In mathematics, Laplace's principle is a basic theorem in large deviations theory which is similar to Varadhan's lemma. It gives an asymptotic expression for the Lebesgue integral of exp(−θφ(x)) over a fixed set A as θ becomes large. Such expressions can be used, for example, in statistical mechanics to determine the limiting behaviour of a system as the temperature tends to absolute zero.
Statement of the result
Let A be a Lebesgue-measurable subset of d-dimensional Euclidean space R^d and let φ : R^d → R be a measurable function with ∫_A exp(−φ(x)) dx < +∞.
Then
lim_(θ→∞) (1/θ) log ∫_A exp(−θφ(x)) dx = − ess inf_(x∈A) φ(x),
where ess inf denotes the essential infimum. Heuristically, this may be read as saying that for large θ, ∫_A exp(−θφ(x)) dx ≈ exp(−θ ess inf_(x∈A) φ(x)).
Application
The Laplace principle can be applied to the family of probability measures P_θ given by dP_θ(x) = Z_θ^(−1) exp(−θφ(x)) dx, with normalising constant Z_θ = ∫_(R^d) exp(−θφ(x)) dx,
to give an asymptotic expression for the probability of some event A as θ becomes large. For example, if X is a standard normally distributed random variable on R, then
lim_(θ→∞) (1/θ) log P[X/√θ ∈ A] = − ess inf_(x∈A) x²/2
for every measurable set A.
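As a quick numerical illustration of the statement above (a sketch of mine, not part of the original article; the choice φ(x) = x²/2 and A = [1, 2] is arbitrary), the scaled log-integral can be seen converging to −ess inf of φ over A, which here equals −1/2:

    # Numerical check of the Laplace principle for phi(x) = x^2/2 on A = [1, 2]:
    # (1/theta) * log( integral over A of exp(-theta*phi(x)) dx ) -> -1/2 as theta grows.
    import numpy as np
    from scipy.integrate import quad

    def scaled_log_integral(theta, a=1.0, b=2.0):
        # Factor out exp(-theta*a^2/2) so the quadrature does not underflow for large theta.
        value, _ = quad(lambda x: np.exp(-theta * (x**2 - a**2) / 2), a, b)
        return (-theta * a**2 / 2 + np.log(value)) / theta

    for theta in [1, 10, 100, 1000]:
        print(theta, scaled_log_integral(theta))  # approaches -0.5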
See also
Laplace's method
References
Asymptotic analysis
Large deviations theory
Probability theorems
Statistical mechanics
Mathematical principles
Theorems in analysis | Laplace principle (large deviations theory) | [
"Physics",
"Mathematics"
] | 247 | [
"Statistical mechanics stubs",
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical principles",
"Mathematical theorems",
"Theorems in probability theory",
"Asymptotic analysis",
"Statistical mechanics",
"Mathematical problems"
] |
19,568,001 | https://en.wikipedia.org/wiki/Cauchy%20elastic%20material | In physics, a Cauchy-elastic material is one in which the stress at each point is determined only by the current state of deformation with respect to an arbitrary reference configuration. A Cauchy-elastic material is also called a simple elastic material.
It follows from this definition that the stress in a Cauchy-elastic material does not depend on the path of deformation or the history of deformation, or on the time taken to achieve that deformation or the rate at which the state of deformation is reached. The definition also implies that the constitutive equations are spatially local; that is, the stress is only affected by the state of deformation in an infinitesimal neighborhood of the point in question, without regard for the deformation or motion of the rest of the material. It also implies that body forces (such as gravity), and inertial forces cannot affect the properties of the material. Finally, a Cauchy-elastic material must satisfy the requirements of material objectivity.
Cauchy-elastic materials are mathematical abstractions, and no real material fits this definition perfectly. However, many elastic materials of practical interest, such as steel, plastic, wood and concrete, can often be assumed to be Cauchy-elastic for the purposes of stress analysis.
Mathematical definition
Formally, a material is said to be Cauchy-elastic if the Cauchy stress tensor σ is a function of the deformation gradient F alone:
σ = G(F)
This definition assumes that the effect of temperature can be ignored, and the body is homogeneous. This is the constitutive equation for a Cauchy-elastic material.
Note that the function G depends on the choice of reference configuration. Typically, the reference configuration is taken as the relaxed (zero-stress) configuration, but it need not be.
Material frame-indifference requires that the constitutive relation should not change when the location of the observer changes. Therefore the constitutive equation for another arbitrary observer can be written σ* = G(F*). Knowing that the Cauchy stress tensor and the deformation gradient are objective quantities, transforming under a change of observer as σ* = Q σ Q^T and F* = Q F, one can write:
Q G(F) Q^T = G(Q F)
where Q is a proper orthogonal tensor.
The above is a condition that the constitutive law has to respect to make sure that the response of the material will be independent of the observer. Similar conditions can be derived for constitutive laws relating the deformation gradient to the first or second Piola-Kirchhoff stress tensor.
Isotropic Cauchy-elastic materials
For an isotropic material the Cauchy stress tensor can be expressed as a function of the left Cauchy-Green tensor B = F F^T. The constitutive equation may then be written:
σ = H(B)
In order to find the restriction on H which will ensure the principle of material frame-indifference, one can write:
Q H(B) Q^T = H(Q B Q^T)
A constitutive equation that respects the above condition is said to be isotropic.
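The condition above can be checked numerically for any concrete isotropic response. The following sketch (an illustration of mine, using an arbitrary made-up isotropic law H(B) = a·B + b·tr(B)·I rather than any particular material model) verifies that Q H(B) Q^T = H(Q B Q^T) for a random deformation gradient and a random proper orthogonal tensor:

    # Verify Q H(B) Q^T == H(Q B Q^T) for the isotropic response H(B) = a*B + b*tr(B)*I.
    import numpy as np

    def H(B, a=2.0, b=0.5):
        # A simple isotropic tensor function of the left Cauchy-Green tensor B.
        return a * B + b * np.trace(B) * np.eye(3)

    rng = np.random.default_rng(0)
    F = np.eye(3) + 0.3 * rng.standard_normal((3, 3))  # an arbitrary deformation gradient
    B = F @ F.T                                        # left Cauchy-Green tensor

    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # orthogonal matrix via QR
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]                             # flip one column so det(Q) = +1

    lhs = Q @ H(B) @ Q.T
    rhs = H(Q @ B @ Q.T)
    print(np.allclose(lhs, rhs))  # True for this isotropic H

A response that singles out a fixed direction (for example, one built from a constant vector) would generally fail this check.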
Non-conservative materials
Even though the stress in a Cauchy-elastic material depends only on the state of deformation, the work done by stresses may depend on the path of deformation. Therefore a Cauchy elastic material in general has a non-conservative structure, and the stress cannot necessarily be derived from a scalar "elastic potential" function. Materials that are conservative in this sense are called hyperelastic or "Green-elastic".
References
Continuum mechanics
Elasticity (physics) | Cauchy elastic material | [
"Physics",
"Materials_science"
] | 674 | [
"Physical phenomena",
"Elasticity (physics)",
"Continuum mechanics",
"Deformation (mechanics)",
"Classical mechanics",
"Physical properties"
] |
19,570,879 | https://en.wikipedia.org/wiki/Piston%20effect | Piston effect refers to the forced-air flow inside a tunnel or shaft caused by moving vehicles. It is one of numerous phenomena that engineers and designers must consider when developing a range of structures.
Cause
In open air, when a vehicle travels along, air pushed aside can move in any direction except into the ground. Inside a tunnel, air is confined by the tunnel walls to move along the tunnel. Behind the moving vehicle, as air has been pushed away, suction is created, and air is pulled to flow into the tunnel. In addition, because of fluid viscosity, the surface of the vehicle drags the air to flow with the vehicle, a force experienced as skin drag by the vehicle. This movement of air by the vehicle is analogous to the operation of a mechanical piston inside a reciprocating gas compressor, hence the name "piston effect". The effect is also similar to the pressure fluctuations inside drainage pipes as waste water pushes air in front of it.
The piston effect is very pronounced in railway tunnels, because the cross sectional area of trains is large and in many cases almost completely fills the tunnel cross section. The wind felt by the passengers on underground railway platforms (that do not have platform screen doors installed) when a train is approaching is air flow from the piston effect. The effect is less pronounced in road vehicle tunnels, as the cross-sectional area of vehicle is small compared to the total cross-sectional area of the tunnel. Single track tunnels experience the maximum effect but clearance between rolling stock and the tunnel as well as the shape of the front of the train affect its strength.
Air flow caused by the piston effect can exert large forces on the installations inside the tunnel and so these installations have to be carefully designed and installed properly. Non-return dampers are sometimes needed to prevent stalling of ventilation fans caused by this air flow.
Applications
The piston effect has to be considered by building designers in relation to smoke movement within an elevator shaft. A moving elevator car forces the air in front of it out of the shaft and pulls air into the shaft behind it with the effect most apparent in elevator systems with a fast moving car in a single shaft. This means that in a fire a moving elevator may push smoke into lower floors.
The piston effect is used in tunnel ventilation. In railway tunnels, the train pushes out the air in front of it toward the closest ventilation shaft in front, and sucks air into the tunnel from the closest ventilation shaft behind it. The piston effect can also assist ventilation in road vehicle tunnels.
In underground rapid transit systems, the piston effect contributes to ventilation and in some cases provides enough air movement to make mechanical ventilation unnecessary. At wider stations with multiple tracks, air quality remains the same and can even improve when mechanical ventilation is disabled. At narrow platforms with a single tunnel, however, air quality worsens when relying on the piston effect alone for ventilation. This still allows for potential energy savings by taking advantage of the piston effect rather than mechanical ventilation where possible.
Tunnel boom
Tunnel boom is a loud boom sometimes generated by high-speed trains when they exit tunnels. These shock waves can disturb nearby residents and damage trains and nearby structures. People perceive this sound similarly to that of a sonic boom from supersonic aircraft. However, unlike a sonic boom, tunnel boom is not caused by trains exceeding the speed of sound. Instead, tunnel boom results from the structure of the tunnel preventing the air around the train from escaping in all directions. As a train passes through a tunnel, it creates compression waves in front of it. These waves coalesce into a shock wave that generates a loud boom when it reaches the tunnel exit. The strength of this wave is proportional to the cube of the train's speed, so the effect is much more pronounced with faster trains.
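Because of the cubic dependence on speed mentioned above, relatively modest speed increases produce much stronger exit pressure waves. A back-of-the-envelope sketch (the speeds are arbitrary example values, and only the relative strength implied by the cubic scaling is computed):

    # Relative strength of the tunnel-exit pressure wave, assuming it scales with the
    # cube of train speed as stated above; absolute pressure levels are not modelled.
    def relative_boom_strength(v_new_kmh, v_old_kmh):
        return (v_new_kmh / v_old_kmh) ** 3

    print(relative_boom_strength(300, 200))  # ~3.4x stronger going from 200 to 300 km/h
    print(relative_boom_strength(360, 300))  # ~1.7x stronger going from 300 to 360 km/h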
Tunnel boom can disturb residents near the mouths of tunnels, and it is exacerbated in mountain valleys where the sound echoes. Reducing these disturbances is a significant challenge for high-speed lines such as Japan's Shinkansen, France's TGV and Spain's AVE. Tunnel boom has become a principal limitation to increased train speeds in Japan where the mountainous terrain requires frequent tunnels. Japan has enacted a law limiting noise to 70 dB in residential areas, which include many tunnel exit zones.
Methods of reducing tunnel boom include making the train's profile highly aerodynamic, adding hoods to tunnel entrances, installing perforated walls at tunnel exits, and drilling vent holes in the tunnel (similar to fitting a silencer on a firearm, but on a far bigger scale). The HS2 project in the United Kingdom has developed "porous portal" tunnel hoods to mitigate tunnel boom for residents, as well as minimising aural discomfort for passengers that could arise from in-train air pressure changes.
Ear discomfort
Passengers and crew may experience ear discomfort as a train enters a tunnel because of rapid pressure changes.
See also
Plumbing drainage venting
Footnotes
References
External links
Tunnel Boom by an AVE train in Buñol, Spain
Enhancing the piston effect in underground railway tunnels
Piston Effect Simulation Using Ansys CFX
Railway tunnels
Tunnels
Physical phenomena | Piston effect | [
"Physics"
] | 1,037 | [
"Physical phenomena"
] |
19,571,465 | https://en.wikipedia.org/wiki/Weakly%20symmetric%20space | In mathematics, a weakly symmetric space is a notion introduced by the Norwegian mathematician Atle Selberg in the 1950s as a generalisation of symmetric space, due to Élie Cartan. Geometrically the spaces are defined as complete Riemannian manifolds such that any two points can be exchanged by an isometry, the symmetric case being when the isometry is required to have period two. The classification of weakly symmetric spaces relies on that of periodic automorphisms of complex semisimple Lie algebras. They provide examples of Gelfand pairs, although the corresponding theory of spherical functions in harmonic analysis, known for symmetric spaces, has not yet been developed.
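One way to state the geometric characterization above more formally (a LaTeX paraphrase of mine, not text from the article): a complete Riemannian manifold M is weakly symmetric when

    % Weakly symmetric: any two points can be exchanged by some isometry.
    \forall\, p, q \in M \quad \exists\, g \in \mathrm{Isom}(M) : \qquad g(p) = q, \quad g(q) = p
    % Symmetric case: g can additionally be chosen to be involutive (g^2 = \mathrm{id}),
    % e.g. the geodesic symmetry about the midpoint of a geodesic joining p and q.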
References
Differential geometry
Riemannian geometry
Lie groups
Homogeneous spaces
Harmonic analysis | Weakly symmetric space | [
"Physics",
"Mathematics"
] | 147 | [
"Lie groups",
"Mathematical structures",
"Group actions",
"Homogeneous spaces",
"Space (mathematics)",
"Topological spaces",
"Algebraic structures",
"Geometry",
"Symmetry"
] |
19,572,035 | https://en.wikipedia.org/wiki/Innovative%20Interstellar%20Explorer | Innovative Interstellar Explorer was a NASA "Vision Mission" study funded by NASA following a proposal under NRA-03-OSS-01 on 11 September 2003. This study focused on measuring the interstellar medium, the region outside the influence of the nearest star, the Sun. It proposes to use a radioisotope thermal generator to power ion thrusters.
The project is a study of a proposed interstellar precursor mission that would probe the nearby interstellar medium and measure the properties of magnetic fields and cosmic rays and their effects on a spacecraft leaving the Solar System. Mission launch plans analyzed direct, one planet, multi-planet, and upper-stage trades. As a concept study, a number of technologies, configurations, and mission goals were considered, leading to the choice of a spacecraft propelled with ion engines powered by a radioisotope thermoelectric generator (RTG). The focus was getting a spacecraft launched by about 2014, achieving 200 AU by the year 2031.
A variety of strategies were assessed, including using launch windows (not counting backups) for a Jupiter assist in 2014, 2026, 2038, and 2050 – about every 12 years.
The launch opportunity for the 2014 window passed, but, for example, it could have resulted in a Jupiter flyby by early 2016, with the spacecraft then going on to reach 200 astronomical units (AU) by 2044. With an ion drive, a speed of about 7.9 AU per year could be attained by the time its xenon propellant was depleted, enabling a travel distance of 200 AU by 2044 and perhaps 1000 AU after one hundred years from launch. Different launch times and configurations have various timelines and options. One configuration for launch saw the use of a Delta IV Heavy and, for the upper stages, a stack of Star 48 and Star 37, leading to various gravity assist options. Another launch stack that was considered was the Atlas V 551 with a Star 48.
In 2011, the study's primary author gave an update to the website Centauri Dreams, offering a retrospective on the mission and its feasibility since its publication in 2003. By that time, some of the earliest launch windows were no longer feasible without a ready spacecraft. Among the points raised were the advantages and potential of solar sails, though they would need to be more advanced for such a mission, and the utility of radioisotope electric propulsion (REP) for such a mission. REP is the combination of using an RTG to power an ion drive.
See also
Applied Physics Laboratory
Interstellar probe
New Horizons 2
New Horizons (Pluto flyby 2015, now heading out to KBOs)
TAU (spacecraft)
References
External links
Interstellar Explorer – NASA
Presentation on IIE
Hypothetical spacecraft
Interstellar travel
Proposed space probes | Innovative Interstellar Explorer | [
"Astronomy",
"Technology"
] | 556 | [
"Hypothetical spacecraft",
"Astronomical hypotheses",
"Interstellar travel",
"Exploratory engineering"
] |
19,572,217 | https://en.wikipedia.org/wiki/Influenza | Influenza, commonly known as the flu, is an infectious disease caused by influenza viruses. Symptoms range from mild to severe and often include fever, runny nose, sore throat, muscle pain, headache, coughing, and fatigue. These symptoms begin one to four (typically two) days after exposure to the virus and last for about two to eight days. Diarrhea and vomiting can occur, particularly in children. Influenza may progress to pneumonia from the virus or a subsequent bacterial infection. Other complications include acute respiratory distress syndrome, meningitis, encephalitis, and worsening of pre-existing health problems such as asthma and cardiovascular disease.
There are four types of influenza virus: types A, B, C, and D. Aquatic birds are the primary source of influenza A virus (IAV), which is also widespread in various mammals, including humans and pigs. Influenza B virus (IBV) and influenza C virus (ICV) primarily infect humans, and influenza D virus (IDV) is found in cattle and pigs. Influenza A virus and influenza B virus circulate in humans and cause seasonal epidemics, and influenza C virus causes a mild infection, primarily in children. Influenza D virus can infect humans but is not known to cause illness. In humans, influenza viruses are primarily transmitted through respiratory droplets from coughing and sneezing. Transmission through aerosols and surfaces contaminated by the virus also occur.
Frequent hand washing and covering one's mouth and nose when coughing and sneezing reduce transmission, as does wearing a mask. Annual vaccination can help to provide protection against influenza. Influenza viruses, particularly influenza A virus, evolve quickly, so flu vaccines are updated regularly to match which influenza strains are in circulation. Vaccines provide protection against influenza A virus subtypes H1N1 and H3N2 and one or two influenza B virus lineages. Influenza infection is diagnosed with laboratory methods such as antibody or antigen tests and a polymerase chain reaction (PCR) to identify viral nucleic acid. The disease can be treated with supportive measures and, in severe cases, with antiviral drugs such as oseltamivir. In healthy individuals, influenza is typically self-limiting and rarely fatal, but it can be deadly in high-risk groups.
In a typical year, five to 15 percent of the population contracts influenza. There are 3 to 5 million severe cases annually, with up to 650,000 respiratory-related deaths globally each year. Deaths most commonly occur in high-risk groups, including young children, the elderly, and people with chronic health conditions. In temperate regions, the number of influenza cases peaks during winter, whereas in the tropics, influenza can occur year-round. Since the late 1800s, pandemic outbreaks of novel influenza strains have occurred every 10 to 50 years. Five flu pandemics have occurred since 1900: the Spanish flu from 1918 to 1920, which was the most severe; the Asian flu in 1957; the Hong Kong flu in 1968; the Russian flu in 1977; and the swine flu pandemic in 2009.
Signs and symptoms
The symptoms of influenza are similar to those of a cold, although usually more severe and less likely to include a runny nose. The time between exposure to the virus and development of symptoms (the incubation period) is one to four days, most commonly one to two days. Many infections are asymptomatic. The onset of symptoms is sudden, and initial symptoms are predominately non-specific, including fever, chills, headaches, muscle pain, malaise, loss of appetite, lack of energy, and confusion. These are usually accompanied by respiratory symptoms such as a dry cough, sore or dry throat, hoarse voice, and a stuffy or runny nose. Coughing is the most common symptom. Gastrointestinal symptoms may also occur, including nausea, vomiting, diarrhea, and gastroenteritis, especially in children. The standard influenza symptoms typically last for two to eight days. Some studies suggest influenza can cause long-lasting symptoms in a similar way to long COVID.
Symptomatic infections are usually mild and limited to the upper respiratory tract, but progression to pneumonia is relatively common. Pneumonia may be caused by the primary viral infection or a secondary bacterial infection. Primary pneumonia is characterized by rapid progression of fever, cough, labored breathing, and low oxygen levels that cause bluish skin. It is especially common among those who have an underlying cardiovascular disease such as rheumatic heart disease. Secondary pneumonia typically has a period of improvement in symptoms for one to three weeks followed by recurrent fever, sputum production, and fluid buildup in the lungs, but can also occur just a few days after influenza symptoms appear. About a third of primary pneumonia cases are followed by secondary pneumonia, which is most frequently caused by the bacteria Streptococcus pneumoniae and Staphylococcus aureus.
Virology
Types of virus
Influenza viruses comprise four species, each the sole member of its own genus. The four influenza genera comprise four of the seven genera in the family Orthomyxoviridae. They are:
Influenza A virus, genus Alphainfluenzavirus
Influenza B virus, genus Betainfluenzavirus
Influenza C virus, genus Gammainfluenzavirus
Influenza D virus, genus Deltainfluenzavirus
Influenza A virus is responsible for most cases of severe illness as well as seasonal epidemics and occasional pandemics. It infects people of all ages but tends to disproportionately cause severe illness in the elderly, the very young, and those with chronic health issues. Birds are the primary reservoir of influenza A virus, especially aquatic birds such as ducks, geese, shorebirds, and gulls, but the virus also circulates among mammals, including pigs, horses, and marine mammals.
Subtypes of Influenza A are defined by the combination of the antigenic viral proteins hemagglutinin (H) and neuraminidase (N) in the viral envelope; for example, "H1N1" designates an IAV subtype that has a type-1 hemagglutinin (H) protein and a type-1 neuraminidase (N) protein. Almost all possible combinations of H (1 through 16) and N (1 through 9) have been isolated from wild birds. In addition, H17, H18, N10 and N11 have been found in bats. The influenza A virus subtypes in circulation among humans are H1N1 and H3N2.
Influenza B virus mainly infects humans but has been identified in seals, horses, dogs, and pigs. Influenza B virus does not have subtypes like influenza A virus but has two antigenically distinct lineages, termed the B/Victoria/2/1987-like and B/Yamagata/16/1988-like lineages, or simply (B/)Victoria(-like) and (B/)Yamagata(-like). Both lineages are in circulation in humans, disproportionately affecting children. However, the B/Yamagata lineage might have become extinct in 2020/2021 due to COVID-19 pandemic measures. Influenza B viruses contribute to seasonal epidemics alongside influenza A viruses but have never been associated with a pandemic.
Influenza C virus, like influenza B virus, is primarily found in humans, though it has been detected in pigs, feral dogs, dromedary camels, and cattle. Influenza C virus infection primarily affects children and is usually asymptomatic or has mild cold-like symptoms, though more severe symptoms such as gastroenteritis and pneumonia can occur. Unlike influenza A virus and influenza B virus, influenza C virus has not been a major focus of research pertaining to antiviral drugs, vaccines, and other measures against influenza. Influenza C virus is subclassified into six genetic/antigenic lineages.
Influenza D virus has been isolated from pigs and cattle, the latter being the natural reservoir. Infection has also been observed in humans, horses, dromedary camels, and small ruminants such as goats and sheep. Influenza D virus is distantly related to influenza C virus. While cattle workers have occasionally tested positive for prior influenza D virus infection, it is not known to cause disease in humans. Influenza C virus and influenza D virus experience a slower rate of antigenic evolution than influenza A virus and influenza B virus. Because of this antigenic stability, relatively few novel lineages emerge.
Influenza virus nomenclature
Every year, millions of influenza virus samples are analysed to monitor changes in the virus' antigenic properties, and to inform the development of vaccines.
To unambiguously describe a specific isolate of virus, researchers use the internationally accepted influenza virus nomenclature, which describes, among other things, the species of animal from which the virus was isolated, and the place and year of collection. As an example – "A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1)":
"A" stands for the genus of influenza (A, B, C or D).
"chicken" is the animal species the isolate was found in (note: human isolates lack this component term and are thus identified as human isolates by default)
"Nakorn-Patom/Thailand" is the place this specific virus was isolated
"CU-K2" is the laboratory reference number that identifies it from other influenza viruses isolated at the same place and year
"04" represents the year of isolation 2004
"H5" stands for the fifth of several known types of the protein hemagglutinin.
"N1" stands for the first of several known types of the protein neuraminidase.
The nomenclature for influenza B, C and D, which are less variable, is simpler. Examples are B/Santiago/29615/2020 and C/Minnesota/10/2015.
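The slash-separated naming scheme above lends itself to simple programmatic parsing. Below is a toy sketch (an illustration of mine; real isolate names vary more than this parser handles, and the field handling is guessed only from the examples given here):

    # Toy parser for influenza isolate names such as
    # "A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1)" or "B/Santiago/29615/2020".
    import re

    def parse_isolate(name):
        # Split off a trailing "(HxNy)" subtype, present only for influenza A names.
        match = re.match(r"^(?P<rest>.+?)(?:\((?P<subtype>H\d+N\d+)\))?$", name)
        rest, subtype = match.group("rest"), match.group("subtype")
        parts = rest.split("/")
        has_host = len(parts) > 4          # human isolates omit the host field
        return {
            "type": parts[0],                                          # A, B, C or D
            "host": parts[1] if has_host else "human",
            "location": "/".join(parts[2:-2] if has_host else parts[1:-2]),
            "lab_reference": parts[-2],
            "year": parts[-1],
            "subtype": subtype,                                        # None for B, C and D
        }

    print(parse_isolate("A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1)"))
    print(parse_isolate("B/Santiago/29615/2020"))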
Genome and structure
Influenza viruses have a negative-sense, single-stranded RNA genome that is segmented. The negative sense of the genome means it can be used as a template to synthesize messenger RNA (mRNA). Influenza A virus and influenza B virus have eight genome segments that encode 10 major proteins. Influenza C virus and influenza D virus have seven genome segments that encode nine major proteins.
Three segments encode three subunits of an RNA-dependent RNA polymerase (RdRp) complex: PB1, a transcriptase, PB2, which recognizes 5' caps, and PA (P3 for influenza C virus and influenza D virus), an endonuclease. The M1 matrix protein and M2 proton channel share a segment, as do the non-structural protein (NS1) and the nuclear export protein (NEP). For influenza A virus and influenza B virus, hemagglutinin (HA) and neuraminidase (NA) are encoded on one segment each, whereas influenza C virus and influenza D virus encode a hemagglutinin-esterase fusion (HEF) protein on one segment that merges the functions of HA and NA. The final genome segment encodes the viral nucleoprotein (NP). Influenza viruses also encode various accessory proteins, such as PB1-F2 and PA-X, that are expressed through alternative open reading frames and which are important in host defense suppression, virulence, and pathogenicity.
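To keep track of the segment-to-protein assignments just described, the following condensed reference mapping may help (a summary of mine in code form, using the common gene names for the eight influenza A virus segments; influenza B virus is broadly similar, while influenza C and D viruses have seven segments with HEF in place of HA and NA, and P3 in place of PA):

    # Major proteins encoded by the eight genome segments of influenza A virus, as described above.
    IAV_SEGMENTS = {
        "PB2": ["PB2"],        # polymerase subunit that recognizes 5' caps
        "PB1": ["PB1"],        # polymerase subunit, transcriptase (also accessory PB1-F2)
        "PA":  ["PA"],         # polymerase subunit, endonuclease (also accessory PA-X)
        "HA":  ["HA"],         # hemagglutinin
        "NP":  ["NP"],         # nucleoprotein, binds genomic RNA in RNP complexes
        "NA":  ["NA"],         # neuraminidase
        "M":   ["M1", "M2"],   # matrix protein and proton channel
        "NS":  ["NS1", "NEP"], # non-structural protein and nuclear export protein
    }

    print(sum(len(products) for products in IAV_SEGMENTS.values()))  # 10 major proteins from 8 segments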
The virus particle, called a virion, is pleomorphic and varies between being filamentous, bacilliform, or spherical in shape. Clinical isolates tend to be pleomorphic, whereas strains adapted to laboratory growth typically produce spherical virions. Filamentous virions are about 250 nanometers (nm) by 80 nm, bacilliform 120–250 by 95 nm, and spherical 120 nm in diameter.
The core of the virion comprises one copy of each segment of the genome bound to NP nucleoproteins in separate ribonucleoprotein (RNP) complexes for each segment. There is a copy of the RdRp, all subunits included, bound to each RNP. The genetic material is encapsulated by a layer of M1 matrix protein which provides structural reinforcement to the outer layer, the viral envelope. The envelope comprises a lipid bilayer membrane incorporating HA and NA (or HEF) proteins extending outward from its exterior surface. HA and HEF proteins have a distinct "head" and "stalk" structure. M2 proteins form proton channels through the viral envelope that are required for viral entry and exit. Influenza B viruses contain a surface protein named NB that is anchored in the envelope, but its function is unknown.
Life cycle
The viral life cycle begins by binding to a target cell. Binding is mediated by the viral HA proteins on the surface of the envelope, which bind to cells that contain sialic acid receptors on the surface of the cell membrane. For N1 subtypes with the "G147R" mutation and N2 subtypes, the NA protein can initiate entry. Prior to binding, NA proteins promote access to target cells by degrading mucus, which helps to remove extracellular decoy receptors that would impede access to target cells. After binding, the virus is internalized into the cell by an endosome that contains the virion inside it. The endosome is acidified by cellular vATPase to have lower pH, which triggers a conformational change in HA that allows fusion of the viral envelope with the endosomal membrane. At the same time, hydrogen ions diffuse into the virion through M2 ion channels, disrupting internal protein-protein interactions to release RNPs into the host cell's cytosol. The M1 protein shell surrounding RNPs is degraded, fully uncoating RNPs in the cytosol.
RNPs are then imported into the nucleus with the help of viral localization signals. There, the viral RNA polymerase transcribes mRNA using the genomic negative-sense strand as a template. The polymerase snatches 5' caps for viral mRNA from cellular RNA to prime mRNA synthesis and the 3'-end of mRNA is polyadenylated at the end of transcription. Once viral mRNA is transcribed, it is exported out of the nucleus and translated by host ribosomes in a cap-dependent manner to synthesize viral proteins. RdRp also synthesizes complementary positive-sense strands of the viral genome in a complementary RNP complex which are then used as templates by viral polymerases to synthesize copies of the negative-sense genome. During these processes, RdRps of avian influenza viruses (AIVs) function optimally at a higher temperature than mammalian influenza viruses.
Newly synthesized viral polymerase subunits and NP proteins are imported to the nucleus to further increase the rate of viral replication and form RNPs. HA, NA, and M2 proteins are trafficked with the aid of M1 and NEP proteins to the cell membrane through the Golgi apparatus and inserted into the cell's membrane. Viral non-structural proteins including NS1, PB1-F2, and PA-X regulate host cellular processes to disable antiviral responses. PB1-F2 also interacts with PB1 to keep polymerases in the nucleus longer. M1 and NEP proteins localize to the nucleus during the later stages of infection, bind to viral RNPs and mediate their export to the cytoplasm where they migrate to the cell membrane with the aid of recycled endosomes and are bundled into the segments of the genome.
Progeny viruses leave the cell by budding from the cell membrane, which is initiated by the accumulation of M1 proteins at the cytoplasmic side of the membrane. The viral genome is incorporated inside a viral envelope derived from portions of the cell membrane that have HA, NA, and M2 proteins. At the end of budding, HA proteins remain attached to cellular sialic acid until they are cleaved by the sialidase activity of NA proteins. The virion is then released from the cell. The sialidase activity of NA also cleaves any sialic acid residues from the viral surface, which helps prevent newly assembled viruses from aggregating near the cell surface, thereby improving infectivity. Similar to other aspects of influenza replication, optimal NA activity is temperature- and pH-dependent. Ultimately, the presence of large quantities of viral RNA in the cell triggers apoptosis (programmed cell death), which is initiated by cellular factors to restrict viral replication.
Antigenic drift and shift
Two key processes that influenza viruses evolve through are antigenic drift and antigenic shift. Antigenic drift is when an influenza virus' antigens change due to the gradual accumulation of mutations in the antigen's (HA or NA) gene. This can occur in response to evolutionary pressure exerted by the host immune response. Antigenic drift is especially common for the HA protein, in which just a few amino acid changes in the head region can constitute antigenic drift. The result is the production of novel strains that can evade pre-existing antibody-mediated immunity. Antigenic drift occurs in all influenza species but is slower in B than A and slowest in C and D. Antigenic drift is a major cause of seasonal influenza, and requires that flu vaccines be updated annually. HA is the main component of inactivated vaccines, so surveillance monitors antigenic drift of this antigen among circulating strains. Antigenic evolution of influenza viruses of humans appears to be faster than in swine and equines. In wild birds, within-subtype antigenic variation appears to be limited but has been observed in poultry.
Antigenic shift is a sudden, drastic change in an influenza virus' antigen, usually HA. During antigenic shift, antigenically different strains that infect the same cell can reassort genome segments with each other, producing hybrid progeny. Since all influenza viruses have segmented genomes, all are capable of reassortment. Antigenic shift only occurs among influenza viruses of the same genus and most commonly occurs among influenza A viruses. In particular, reassortment is very common in AIVs, creating a large diversity of influenza viruses in birds, but is uncommon in human, equine, and canine lineages. Pigs, bats, and quails have receptors for both mammalian and avian influenza A viruses, so they are potential "mixing vessels" for reassortment. If an animal strain reassorts with a human strain, then a novel strain can emerge that is capable of human-to-human transmission. This has caused pandemics, but only a limited number, so it is difficult to predict when the next will happen.
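Since reassortment exchanges whole genome segments between co-infecting strains, two influenza A virus parents can in principle yield 2^8 = 256 segment combinations (including the two parental genotypes). A small sketch of this combinatorial shuffling, with placeholder strain labels of mine:

    # Reassortment between two influenza A viruses co-infecting the same cell:
    # each of the 8 genome segments of a progeny virion can come from either parent.
    import itertools
    import random

    SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

    all_genotypes = list(itertools.product(["parent_A", "parent_B"], repeat=len(SEGMENTS)))
    print(len(all_genotypes))  # 256 possible segment combinations, parents included

    # Draw one random reassortant genotype.
    progeny = {segment: random.choice(["parent_A", "parent_B"]) for segment in SEGMENTS}
    print(progeny)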
The World Health Organization's Global Influenza Surveillance and Response System (GISRS) tests several million specimens annually to monitor the spread and evolution of influenza viruses.
Mechanism
Transmission
People who are infected can transmit influenza viruses through breathing, talking, coughing, and sneezing, which spread respiratory droplets and aerosols that contain virus particles into the air. A person susceptible to infection can contract influenza by coming into contact with these particles. Respiratory droplets are relatively large and travel less than two meters before falling onto nearby surfaces. Aerosols are smaller and remain suspended in the air longer, so they take longer to settle and can travel further. Inhalation of aerosols can lead to infection, but most transmission is in the area about two meters around an infected person via respiratory droplets that come into contact with mucosa of the upper respiratory tract. Transmission through contact with a person, bodily fluids, or intermediate objects (fomites) can also occur, since influenza viruses can survive for hours on non-porous surfaces. If one's hands are contaminated, then touching one's face can cause infection.
Influenza is usually transmissible from one day before the onset of symptoms to 5–7 days after. In healthy adults, the virus is shed for up to 3–5 days. In children and the immunocompromised, the virus may be transmissible for several weeks. Children ages 2–17 are considered to be the primary and most efficient spreaders of influenza. Children who have not had multiple prior exposures to influenza viruses shed the virus at greater quantities and for a longer duration than other children. People at risk of exposure to influenza include health care workers, social care workers, and those who live with or care for people vulnerable to influenza. In long-term care facilities, the flu can spread rapidly. A variety of factors likely encourage influenza transmission, including lower temperature, lower absolute and relative humidity, less ultraviolet radiation from the sun, and crowding. Influenza viruses that infect the upper respiratory tract like H1N1 tend to be more mild but more transmissible, whereas those that infect the lower respiratory tract like H5N1 tend to cause more severe illness but are less contagious.
Pathophysiology
In humans, influenza viruses first cause infection by infecting epithelial cells in the respiratory tract. Illness during infection is primarily the result of lung inflammation and compromise caused by epithelial cell infection and death, combined with inflammation caused by the immune system's response to infection. Non-respiratory organs can become involved, but the mechanisms by which influenza is involved in these cases are unknown. Severe respiratory illness can be caused by multiple, non-exclusive mechanisms, including obstruction of the airways, loss of alveolar structure, loss of lung epithelial integrity due to epithelial cell infection and death, and degradation of the extracellular matrix that maintains lung structure. In particular, alveolar cell infection appears to drive severe symptoms since this results in impaired gas exchange and enables viruses to infect endothelial cells, which produce large quantities of pro-inflammatory cytokines.
Pneumonia caused by influenza viruses is characterized by high levels of viral replication in the lower respiratory tract, accompanied by a strong pro-inflammatory response called a cytokine storm. Infection with H5N1 or H7N9 especially produces high levels of pro-inflammatory cytokines. In bacterial infections, early depletion of macrophages during influenza creates a favorable environment in the lungs for bacterial growth since these white blood cells are important in responding to bacterial infection. Host mechanisms to encourage tissue repair may inadvertently allow bacterial infection. Infection also induces production of systemic glucocorticoids that can reduce inflammation to preserve tissue integrity but allow increased bacterial growth.
The pathophysiology of influenza is significantly influenced by which receptors influenza viruses bind to during entry into cells. Mammalian influenza viruses preferentially bind to sialic acids connected to the rest of the oligosaccharide by an α-2,6 link, most commonly found in various respiratory cells, such as respiratory and retinal epithelial cells. AIVs prefer sialic acids with an α-2,3 linkage, which are most common in birds in gastrointestinal epithelial cells and in humans in the lower respiratory tract. Cleavage of the HA protein into HA1, the binding subunit, and HA2, the fusion subunit, is performed by different proteases, affecting which cells can be infected. For mammalian influenza viruses and low pathogenic AIVs, cleavage is extracellular, which limits infection to cells that have the appropriate proteases, whereas for highly pathogenic AIVs, cleavage is intracellular and performed by ubiquitous proteases, which allows for infection of a greater variety of cells, thereby contributing to more severe disease.
Immunology
Cells possess sensors to detect viral RNA, which can then induce interferon production. Interferons mediate expression of antiviral proteins and proteins that recruit immune cells to the infection site, and they notify nearby uninfected cells of infection. Some infected cells release pro-inflammatory cytokines that recruit immune cells to the site of infection. Immune cells control viral infection by killing infected cells and phagocytizing viral particles and apoptotic cells. An exacerbated immune response can harm the host organism through a cytokine storm. To counter the immune response, influenza viruses encode various non-structural proteins, including NS1, NEP, PB1-F2, and PA-X, that are involved in curtailing the host immune response by suppressing interferon production and host gene expression.
B cells, a type of white blood cell, produce antibodies that bind to influenza antigens HA and NA (or HEF) and other proteins to a lesser degree. Once bound to these proteins, antibodies block virions from binding to cellular receptors, neutralizing the virus. In humans, a sizeable antibody response occurs about one week after viral exposure. This antibody response is typically robust and long-lasting, especially for influenza C virus and influenza D virus. People exposed to a certain strain in childhood still possess antibodies to that strain at a reasonable level later in life, which can provide some protection to related strains. There is, however, an "original antigenic sin", in which the first HA subtype a person is exposed to influences the antibody-based immune response to future infections and vaccines.
Prevention
Vaccination
Annual vaccination is the primary and most effective way to prevent influenza and influenza-associated complications, especially for high-risk groups. Vaccines against the flu are trivalent or quadrivalent, providing protection against an H1N1 strain, an H3N2 strain, and one or two influenza B virus strains corresponding to the two influenza B virus lineages. Two types of vaccines are in use: inactivated vaccines that contain "killed" (i.e. inactivated) viruses and live attenuated influenza vaccines (LAIVs) that contain weakened viruses. There are three types of inactivated vaccines: whole virus, split virus, in which the virus is disrupted by a detergent, and subunit, which only contains the viral antigens HA and NA. Most flu vaccines are inactivated and administered via intramuscular injection. LAIVs are sprayed into the nasal cavity.
Vaccination recommendations vary by country. Some recommend vaccination for all people above a certain age, such as 6 months, whereas other countries limit recommendations to high-risk groups. Young infants cannot receive flu vaccines for safety reasons, but they can inherit passive immunity from their mother if she is vaccinated during pregnancy. Influenza vaccination helps to reduce the probability of reassortment.
In general, influenza vaccines are only effective if there is an antigenic match between vaccine strains and circulating strains. Most commercially available flu vaccines are manufactured by propagation of influenza viruses in embryonated chicken eggs, taking 6–8 months. Flu seasons are different in the northern and southern hemispheres, so the WHO meets twice a year, once for each hemisphere, to discuss which strains should be included based on observations from HA inhibition assays. Other manufacturing methods include an MDCK cell culture-based inactivated vaccine and a recombinant subunit vaccine manufactured from baculovirus overexpression in insect cells.
Antiviral chemoprophylaxis
Influenza can be prevented or reduced in severity by post-exposure prophylaxis with the antiviral drugs oseltamivir, which can be taken orally by those at least three months old, and zanamivir, which can be inhaled by those above seven years. Chemoprophylaxis is most useful for individuals at high risk for complications and those who cannot receive the flu vaccine. Post-exposure chemoprophylaxis is only recommended if oseltamivir is taken within 48 hours of contact with a confirmed or suspected case and zanamivir within 36 hours. It is recommended for people who have yet to receive a vaccine for the current flu season, who were vaccinated less than two weeks before contact, if there is a significant mismatch between vaccine and circulating strains, or during an outbreak in a closed setting regardless of vaccination history.
Infection control
These are the main ways that influenza spreads:
by direct transmission (when an infected person sneezes mucus directly into the eyes, nose or mouth of another person);
the airborne route (when someone inhales the aerosols produced by an infected person coughing, sneezing or spitting);
through hand-to-eye, hand-to-nose, or hand-to-mouth transmission, either from contaminated surfaces or from direct personal contact such as a hand-shake.
When vaccines and antiviral medications are limited, non-pharmaceutical interventions are essential to reduce transmission and spread. The lack of controlled studies and rigorous evidence of the effectiveness of some measures has hampered planning decisions and recommendations. Nevertheless, strategies endorsed by experts for all phases of flu outbreaks include hand and respiratory hygiene, self-isolation by symptomatic individuals and the use of face masks by them and their caregivers, surface disinfection, rapid testing and diagnosis, and contact tracing. In some cases, other forms of social distancing including school closures and travel restrictions are recommended.
Reasonably effective ways to reduce the transmission of influenza include good personal health and hygiene habits such as: not touching the eyes, nose or mouth; frequent hand washing (with soap and water, or with alcohol-based hand rubs); covering coughs and sneezes with a tissue or sleeve; avoiding close contact with sick people; and staying home when sick. Avoiding spitting is also recommended. Although face masks might help prevent transmission when caring for the sick, there is mixed evidence on beneficial effects in the community. Smoking raises the risk of contracting influenza, as well as producing more severe disease symptoms.
Since influenza spreads through both aerosols and contact with contaminated surfaces, surface sanitizing may help prevent some infections. Alcohol is an effective sanitizer against influenza viruses, while quaternary ammonium compounds can be used with alcohol so that the sanitizing effect lasts for longer. In hospitals, quaternary ammonium compounds and bleach are used to sanitize rooms or equipment that have been occupied by people with influenza symptoms. At home, this can be done effectively with a diluted chlorine bleach.
Since influenza viruses circulate in animals such as birds and pigs, prevention of transmission from these animals is important. Water treatment, indoor raising of animals, quarantining sick animals, vaccination, and biosecurity are the primary measures used. Placing poultry houses and piggeries on high ground away from high-density farms, backyard farms, live poultry markets, and bodies of water helps to minimize contact with wild birds. Closure of live poultry markets appears to be the most effective measure and has been shown to be effective at controlling the spread of H5N1, H7N9, and H9N2. Other biosecurity measures include cleaning and disinfecting facilities and vehicles, banning visits to poultry farms, not bringing birds intended for slaughter back to farms, changing clothes, disinfecting foot baths, and treating food and water.
If live poultry markets are not closed, then "clean days" when unsold poultry is removed and facilities are disinfected and "no carry-over" policies to eliminate infectious material before new poultry arrive can be used to reduce the spread of influenza viruses. If a novel influenza virus has breached the aforementioned biosecurity measures, then rapid detection to stamp it out via quarantining, decontamination, and culling may be necessary to prevent the virus from becoming endemic. Vaccines exist for avian H5, H7, and H9 subtypes that are used in some countries. In China, for example, vaccination of domestic birds against H7N9 successfully limited its spread, indicating that vaccination may be an effective strategy if used in combination with other measures to limit transmission. In pigs and horses, management of influenza is dependent on vaccination combined with biosecurity.
Diagnosis
Diagnosis based on symptoms is fairly accurate in otherwise healthy people during seasonal epidemics and should be suspected in cases of pneumonia, acute respiratory distress syndrome (ARDS), sepsis, or if encephalitis, myocarditis, or breakdown of muscle tissue occur. Because influenza is similar to other viral respiratory tract illnesses, laboratory diagnosis is necessary for confirmation. Common sample collection methods for testing include nasal and throat swabs. Samples may be taken from the lower respiratory tract if infection has cleared the upper but not lower respiratory tract. Influenza testing is recommended for anyone hospitalized with symptoms resembling influenza during flu season or who is connected to an influenza case. For severe cases, earlier diagnosis improves patient outcome. Diagnostic methods that can identify influenza include viral cultures, antibody- and antigen-detecting tests, and nucleic acid-based tests.
Viruses can be grown in a culture of mammalian cells or embryonated eggs for 3–10 days to monitor cytopathic effect. Final confirmation can then be done via antibody staining, hemadsorption using red blood cells, or immunofluorescence microscopy. Shell vial cultures, which can identify infection via immunostaining before a cytopathic effect appears, are more sensitive than traditional cultures with results in 1–3 days. Cultures can be used to characterize novel viruses, observe sensitivity to antiviral drugs, and monitor antigenic drift, but they are relatively slow and require specialized skills and equipment.
Serological assays can be used to detect an antibody response to influenza after natural infection or vaccination. Common serological assays include hemagglutination inhibition assays that detect HA-specific antibodies, virus neutralization assays that check whether antibodies have neutralized the virus, and enzyme-linked immunosorbent assays. These methods tend to be relatively inexpensive and fast but are less reliable than nucleic-acid based tests.
Direct fluorescent or immunofluorescent antibody (DFA/IFA) tests involve staining respiratory epithelial cells in samples with fluorescently-labeled influenza-specific antibodies, followed by examination under a fluorescent microscope. They can differentiate between influenza A virus and influenza B virus but can not subtype influenza A virus. Rapid influenza diagnostic tests (RIDTs) are a simple way of obtaining assay results, are low cost, and produce results in less than 30 minutes, so they are commonly used, but they can not distinguish between influenza A virus and influenza B virus or between influenza A virus subtypes and are not as sensitive as nucleic-acid based tests.
Nucleic acid-based tests (NATs) amplify and detect viral nucleic acid. Most of these tests take a few hours, but rapid molecular assays are as fast as RIDTs. Among NATs, reverse transcription polymerase chain reaction (RT-PCR) is the most traditional and considered the gold standard for diagnosing influenza because it is fast and can subtype influenza A virus, but it is relatively expensive and more prone to false-positives than cultures. Other NATs that have been used include loop-mediated isothermal amplification-based assays, simple amplification-based assays, and nucleic acid sequence-based amplification. Nucleic acid sequencing methods can identify infection by obtaining the nucleic acid sequence of viral samples to identify the virus and antiviral drug resistance. The traditional method is Sanger sequencing, but it has been largely replaced by next-generation methods that have greater sequencing speed and throughput.
Management
Treatment in cases of mild or moderate illness is supportive and includes anti-fever medications such as acetaminophen and ibuprofen, adequate fluid intake to avoid dehydration, and rest. Cough drops and throat sprays may be beneficial for sore throat. It is recommended to avoid alcohol and tobacco use while ill. Aspirin is not recommended to treat influenza in children due to an elevated risk of developing Reye syndrome. Corticosteroids are not recommended except when treating septic shock or an underlying medical condition, such as chronic obstructive pulmonary disease or asthma exacerbation, since they are associated with increased mortality. If a secondary bacterial infection occurs, then antibiotics may be necessary.
Antivirals
Antiviral drugs are primarily used to treat severely ill patients, especially those with compromised immune systems. Antivirals are most effective when started in the first 48 hours after symptoms appear. Later administration may still be beneficial for those who have underlying immune defects, those with more severe symptoms, or those who have a higher risk of developing complications if these individuals are still shedding the virus. Antiviral treatment is also recommended if a person is hospitalized with suspected influenza instead of waiting for test results to return and if symptoms are worsening. Most antiviral drugs against influenza fall into two categories: neuraminidase (NA) inhibitors and M2 inhibitors. Baloxavir marboxil is a notable exception, which targets the endonuclease activity of the viral RNA polymerase and can be used as an alternative to NA and M2 inhibitors for influenza A virus and influenza B virus.
NA inhibitors target the enzymatic activity of NA receptors, mimicking the binding of sialic acid in the active site of NA on influenza A virus and influenza B virus virions so that viral release from infected cells and the rate of viral replication are impaired. NA inhibitors include oseltamivir, which is consumed orally in a prodrug form and converted to its active form in the liver, and zanamivir, which is a powder that is inhaled nasally. Oseltamivir and zanamivir are effective for prophylaxis and post-exposure prophylaxis, and research overall indicates that NA inhibitors are effective at reducing rates of complications, hospitalization, and mortality and the duration of illness. Additionally, the earlier NA inhibitors are provided, the better the outcome, though late administration can still be beneficial in severe cases. Other NA inhibitors include laninamivir and peramivir, the latter of which can be used as an alternative to oseltamivir for people who cannot tolerate or absorb it.
The adamantanes amantadine and rimantadine are orally administered drugs that block the influenza virus' M2 ion channel, preventing viral uncoating. These drugs are only functional against influenza A virus but are no longer recommended for use because of widespread resistance to them among influenza A viruses. Adamantane resistance first emerged in H3N2 in 2003, becoming worldwide by 2008. Oseltamivir resistance is no longer widespread because the 2009 pandemic H1N1 strain (H1N1 pdm09), which is resistant to adamantanes, seemingly replaced resistant strains in circulation. Since the 2009 pandemic, oseltamivir resistance has mainly been observed in patients undergoing therapy, especially the immunocompromised and young children. Oseltamivir resistance is usually reported in H1N1, but has been reported in H3N2 and influenza B viruses less commonly. Because of this, oseltamivir is recommended as the first drug of choice for immunocompetent people, whereas for the immunocompromised, oseltamivir is recommended against H3N2 and influenza B virus and zanamivir against H1N1 pdm09. Zanamivir resistance is observed less frequently, and resistance to peramivir and baloxavir marboxil is possible.
Prognosis
In healthy individuals, influenza infection is usually self-limiting and rarely fatal. Symptoms usually last for 2–8 days. Influenza can cause people to miss work or school, and it is associated with decreased job performance and, in older adults, reduced independence. Fatigue and malaise may last for several weeks after recovery, and healthy adults may experience pulmonary abnormalities that can take several weeks to resolve. Complications and mortality primarily occur in high-risk populations and those who are hospitalized. Severe disease and mortality are usually attributable to pneumonia from the primary viral infection or a secondary bacterial infection, which can progress to ARDS.
Other respiratory complications that may occur include sinusitis, bronchitis, bronchiolitis, excess fluid buildup in the lungs, and exacerbation of chronic bronchitis and asthma. Middle ear infection and croup may occur, most commonly in children. Secondary S. aureus infection has been observed, primarily in children, to cause toxic shock syndrome after influenza, with hypotension, fever, and reddening and peeling of the skin. Complications affecting the cardiovascular system are rare and include pericarditis, fulminant myocarditis with a fast, slow, or irregular heartbeat, and exacerbation of pre-existing cardiovascular disease. Inflammation or swelling of muscles accompanied by muscle tissue breaking down occurs rarely, usually in children, and presents as extreme tenderness and muscle pain in the legs and a reluctance to walk for 2–3 days.
Influenza can affect pregnancy, including causing smaller neonatal size, increased risk of premature birth, and an increased risk of child death shortly before or after birth. Neurological complications have been associated with influenza on rare occasions, including aseptic meningitis, encephalitis, disseminated encephalomyelitis, transverse myelitis, and Guillain–Barré syndrome. Additionally, febrile seizures and Reye syndrome can occur, most commonly in children. Influenza-associated encephalopathy can result from direct infection of the central nervous system by virus present in the blood and presents as sudden onset of fever with convulsions, followed by rapid progression to coma. An atypical form of encephalitis called encephalitis lethargica, characterized by headache, drowsiness, and coma, may rarely occur sometime after infection. In survivors of influenza-associated encephalopathy, neurological defects may occur. In severe cases, primarily in children, the immune system may rarely and dramatically overproduce white blood cells that release cytokines, causing severe inflammation.
People who are at least 65 years of age, due to a weakened immune system from aging or a chronic illness, are a high-risk group for developing complications, as are children less than one year of age and children who have not been previously exposed to influenza viruses multiple times. Pregnant women are at an elevated risk, which increases by trimester and lasts up to two weeks after childbirth. Obesity, in particular a body mass index greater than 35–40, is associated with greater amounts of viral replication, increased severity of secondary bacterial infection, and reduced vaccination efficacy. People who have underlying health conditions are also considered at-risk, including those who have congenital or chronic heart problems or lung (e.g. asthma), kidney, liver, blood, neurological, or metabolic (e.g. diabetes) disorders, as are people who are immunocompromised from chemotherapy, asplenia, prolonged steroid treatment, splenic dysfunction, or HIV infection. Tobacco use, including past use, places a person at risk. The role of genetics in influenza is not well researched, but it may be a factor in influenza mortality.
Epidemiology
Influenza is typically characterized by seasonal epidemics and sporadic pandemics. Most of the burden of influenza is a result of flu seasons caused by influenza A virus and influenza B virus. Among influenza A virus subtypes, H1N1 and H3N2 circulate in humans and are responsible for seasonal influenza. Cases disproportionately occur in children, but most severe cases are among the elderly, the very young, and the immunocompromised. In a typical year, influenza viruses infect 5–15% of the global population, causing 3–5 million cases of severe illness annually and accounting for 290,000–650,000 deaths each year due to respiratory illness. 5–10% of adults and 20–30% of children contract influenza each year. The reported number of influenza cases is usually much lower than the actual number.
During seasonal epidemics, it is estimated that about 80% of otherwise healthy people who have a cough or sore throat have the flu. Approximately 30–40% of people hospitalized for influenza develop pneumonia, and about 5% of all severe pneumonia cases in hospitals are due to influenza, which is also the most common cause of ARDS in adults. In children, influenza and respiratory syncytial virus are the two most common causes of ARDS. About 3–5% of children each year develop otitis media due to influenza. Adults who develop organ failure from influenza and children who have high Pediatric Index of Mortality (PIM) scores and acute renal failure have higher rates of mortality. During seasonal influenza, mortality is concentrated in the very young and the elderly, whereas during flu pandemics, young adults are often affected at a high rate.
In temperate regions, the number of influenza cases varies from season to season. Lower vitamin D levels, presumably due to less sunlight, lower humidity, lower temperature, and minor changes in virus proteins caused by antigenic drift contribute to annual epidemics that peak during the winter season. In the northern hemisphere, this is from October to May (more narrowly December to April), and in the southern hemisphere, this is from May to October (more narrowly June to September). There are therefore two distinct influenza seasons every year in temperate regions, one in the northern hemisphere and one in the southern hemisphere. In tropical and subtropical regions, seasonality is more complex and appears to be affected by various climatic factors such as minimum temperature, hours of sunshine, maximum rainfall, and high humidity. Influenza may therefore occur year-round in these regions. Influenza epidemics in modern times have the tendency to start in the eastern or southern hemisphere, with Asia being a key reservoir.
Influenza A virus and influenza B virus co-circulate, so have the same patterns of transmission. The seasonality of influenza C virus, however, is poorly understood. Influenza C virus infection is most common in children under the age of two, and by adulthood most people have been exposed to it. Influenza C virus-associated hospitalization most commonly occurs in children under the age of three and is frequently accompanied by co-infection with another virus or a bacterium, which may increase the severity of disease. When considering all hospitalizations for respiratory illness among young children, influenza C virus appears to account for only a small percentage of such cases. Large outbreaks of influenza C virus infection can occur, so incidence varies significantly.
Outbreaks of influenza caused by novel influenza viruses are common. Depending on the level of pre-existing immunity in the population, novel influenza viruses can spread rapidly and cause pandemics with millions of deaths. These pandemics, in contrast to seasonal influenza, are caused by antigenic shifts involving animal influenza viruses. To date, all known flu pandemics have been caused by influenza A viruses, and they follow the same pattern of spreading from an origin point to the rest of the world over the course of multiple waves in a year. Pandemic strains tend to be associated with higher rates of pneumonia in otherwise healthy individuals. Generally after each influenza pandemic, the pandemic strain continues to circulate as the cause of seasonal influenza, replacing prior strains. From 1700 to 1889, influenza pandemics occurred about once every 50–60 years. Since then, pandemics have occurred about once every 10–50 years, so they may be getting more frequent over time.
History
The first influenza epidemic may have occurred around 6,000 BC in China, and possible descriptions of influenza exist in Greek writings from the 5th century BC. In both 1173–1174 AD and 1387 AD, epidemics occurred across Europe that were named "influenza". Whether these epidemics or others were caused by influenza is unclear since there was then no consistent naming pattern for epidemic respiratory diseases, and "influenza" did not become clearly associated with respiratory disease until centuries later. Influenza may have been brought to the Americas as early as 1493, when an epidemic disease resembling influenza killed most of the population of the Antilles.
The first convincing record of an influenza pandemic was in 1510. It began in East Asia before spreading to North Africa and then Europe. Following the pandemic, seasonal influenza occurred, with subsequent pandemics in 1557 and 1580. The flu pandemic in 1557 was potentially the first time influenza was connected to miscarriage and death of pregnant women. The 1580 influenza pandemic originated in Asia during summer, spread to Africa, then Europe, and finally America. By the end of the 16th century, influenza was beginning to become understood as a specific, recognizable disease with epidemic and endemic forms. In 1648, it was discovered that horses also experience influenza.
Influenza data after 1700 is more accurate, so it is easier to identify flu pandemics after this point. The first flu pandemic of the 18th century started in 1729 in Russia in spring, spreading worldwide over the course of three years with distinct waves, the later ones being more lethal. Another flu pandemic occurred in 1781–1782, starting in China in autumn. From this pandemic, influenza became associated with sudden outbreaks of febrile illness. The next flu pandemic was from 1830 to 1833, beginning in China in winter. This pandemic had a high attack rate, but the mortality rate was low.
A minor influenza pandemic occurred from 1847 to 1851 at the same time as the third cholera pandemic and was the first flu pandemic to occur with vital statistics being recorded, so influenza mortality was clearly recorded for the first time. Fowl plague (now recognised as highly pathogenic avian influenza) was recognized in 1878 and was soon linked to transmission to humans. By the time of the 1889 pandemic, which may have been caused by an H2N2 strain, the flu had become an easily recognizable disease.
The microbial agent responsible for influenza was incorrectly identified in 1892 by R. F. J. Pfeiffer as the bacteria species Haemophilus influenzae, which retains "influenza" in its name. From 1901 to 1903, Italian and Austrian researchers were able to show that avian influenza, then called "fowl plague", was caused by a microscopic agent smaller than bacteria by using filters with pores too small for bacteria to pass through. The fundamental differences between viruses and bacteria, however, were not yet fully understood.
From 1918 to 1920, the Spanish flu pandemic became the most devastating influenza pandemic and one of the deadliest pandemics in history. The pandemic, caused by an H1N1 strain of influenza A, likely began in the United States before spreading worldwide via soldiers during and after the First World War. The initial wave in the first half of 1918 was relatively minor and resembled past flu pandemics, but the second wave later that year had a much higher mortality rate. A third wave with lower mortality occurred in many places a few months after the second. By the end of 1920, it is estimated that about a third to half of all people in the world had been infected, with tens of millions of deaths, disproportionately young adults. During the 1918 pandemic, the respiratory route of transmission was clearly identified and influenza was shown to be caused by a "filter passer", not a bacterium, but there remained a lack of agreement about influenza's cause for another decade and research on influenza declined. After the pandemic, H1N1 circulated in humans in seasonal form until the next pandemic.
In 1931, Richard Shope published three papers identifying a virus as the cause of swine influenza, a then newly recognized disease among pigs that was characterized during the second wave of the 1918 pandemic. Shope's research reinvigorated research on human influenza, and many advances in virology, serology, immunology, experimental animal models, vaccinology, and immunotherapy have since arisen from influenza research. Just two years after influenza viruses were discovered, in 1933, influenza A virus was identified as the agent responsible for human influenza. Subtypes of influenza A virus were discovered throughout the 1930s, and influenza B virus was discovered in 1940.
During the Second World War, the US government worked on developing inactivated vaccines for influenza, resulting in the first influenza vaccine being licensed in 1945 in the United States. Influenza C virus was discovered two years later in 1947. In 1955, avian influenza was confirmed to be caused by influenza A virus. Four influenza pandemics have occurred since WWII. The first of these was the Asian flu from 1957 to 1958, caused by an H2N2 strain and beginning in China's Yunnan province. The number of deaths probably exceeded one million, mostly among the very young and very old. This was the first flu pandemic to occur in the presence of a global surveillance system and laboratories able to study the novel influenza virus. After the pandemic, H2N2 was the influenza A virus subtype responsible for seasonal influenza. The first antiviral drug against influenza, amantadine, was approved in 1966, with additional antiviral drugs being used since the 1990s.
In 1968, H3N2 was introduced into humans through a reassortment between an avian H3N2 strain and an H2N2 strain that was circulating in humans. The novel H3N2 strain emerged in Hong Kong and spread worldwide, causing the Hong Kong flu pandemic, which resulted in 500,000–2,000,000 deaths. This was the first pandemic to spread significantly by air travel. H2N2 and H3N2 co-circulated after the pandemic until 1971 when H2N2 waned in prevalence and was completely replaced by H3N2. In 1977, H1N1 reemerged in humans, possibly after it was released from a freezer in a laboratory accident, and caused a pseudo-pandemic. This H1N1 strain was antigenically similar to the H1N1 strains that circulated prior to 1957. Since 1977, both H1N1 and H3N2 have circulated in humans as part of seasonal influenza. In 1980, the classification system used to subtype influenza viruses was introduced.
At some point, influenza B virus diverged into two strains, named the B/Victoria-like and B/Yamagata-like lineages, both of which have been circulating in humans since 1983.
In 1996, a highly pathogenic H5N1 subtype of influenza A was detected in geese in Guangdong, China and a year later emerged in poultry in Hong Kong, gradually spreading worldwide from there. A small H5N1 outbreak in humans in Hong Kong occurred then, and sporadic human cases have occurred since 1997, carrying a high case fatality rate.
The most recent flu pandemic was the 2009 swine flu pandemic, which originated in Mexico and resulted in hundreds of thousands of deaths. It was caused by a novel H1N1 strain that was a reassortment of human, swine, and avian influenza viruses. The 2009 pandemic had the effect of replacing prior H1N1 strains in circulation with the novel strain but not any other influenza viruses. Consequently, H1N1, H3N2, and both influenza B virus lineages have been in circulation in seasonal form since the 2009 pandemic.
In 2011, influenza D virus was discovered in pigs in Oklahoma, USA, and cattle were later identified as the primary reservoir of influenza D virus.
In the same year, avian H7N9 was detected in China and began to cause human infections in 2013, starting in Shanghai and Anhui and remaining mostly in China. Highly pathogenic H7N9 emerged sometime in 2016 and has occasionally infected humans incidentally. Other avian influenza viruses have less commonly infected humans since the 1990s, including H5N1, H5N5, H5N6, H5N8, H6N1, H7N2, H7N7, and H10N7, and have begun to spread throughout much of the world since the 2010s. Future flu pandemics, which may be caused by an influenza virus of avian origin, are viewed as almost inevitable, and increased globalization has made it easier for a pandemic virus to spread, so there are continual efforts to prepare for future pandemics and improve the prevention and treatment of influenza.
Etymology
The word influenza comes from the Italian word influenza, from medieval Latin influentia, originally meaning 'visitation' or 'influence'. Terms such as influenza di freddo, meaning 'influence of the cold', and influenza di stelle, meaning 'influence of the stars', are attested from the 14th century. The latter referred to the disease's cause, which at the time was ascribed by some to unfavorable astrological conditions. As early as 1504, influenza began to mean a 'visitation' or 'outbreak' of any disease affecting many people in a single place at once. During an outbreak of influenza in 1743 that started in Italy and spread throughout Europe, the word reached the English language and was anglicized in pronunciation. Since the mid-1800s, influenza has also been used to refer to severe colds. The shortened form of the word, "flu", is first attested in 1839 as flue with the spelling flu confirmed in 1893. Other names that have been used for influenza include epidemic catarrh, la grippe from French, sweating sickness, and, especially when referring to the 1918 pandemic strain, Spanish fever.
In animals
Birds
Aquatic birds such as ducks, geese, shorebirds, and gulls are the primary reservoir of influenza A viruses (IAVs).
Because of the impact of avian influenza on economically important chicken farms, a classification system was devised in 1981 which classified avian virus strains as either highly pathogenic (and therefore potentially requiring vigorous control measures) or low pathogenic. The test for this is based solely on the effect on chickens – a virus strain is highly pathogenic avian influenza (HPAI) if 75% or more of chickens die after being deliberately infected with it. The alternative classification is low pathogenic avian influenza (LPAI), which produces mild or no symptoms. This classification system has since been modified to take into account the structure of the virus' haemagglutinin protein. At the genetic level, an AIV can be identified as an HPAI virus if it has a multibasic cleavage site in the HA protein, which contains additional residues in the HA gene. Other species of birds, especially water birds, can become infected with HPAI virus without experiencing severe symptoms and can spread the infection over large distances; the exact symptoms depend on the species of bird and the strain of virus. Classification of an avian virus strain as HPAI or LPAI does not predict how serious the disease might be if it infects humans or other mammals.
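The chicken-lethality criterion above amounts to a simple decision rule. As a minimal, hypothetical sketch (the function name and inputs are illustrative and not part of any official pathotyping standard, which as noted also considers the HA cleavage site), it could be expressed as:

```python
# Toy sketch of the 1981 pathotyping rule described above: a strain is treated as
# highly pathogenic (HPAI) when 75% or more of deliberately infected chickens die.
# Names and inputs are illustrative only.

def classify_avian_strain(deaths: int, infected: int) -> str:
    """Return 'HPAI' or 'LPAI' from a chicken lethality test result."""
    if infected <= 0:
        raise ValueError("need at least one infected chicken")
    return "HPAI" if deaths / infected >= 0.75 else "LPAI"

print(classify_avian_strain(8, 10))  # -> HPAI
print(classify_avian_strain(1, 10))  # -> LPAI
```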
Symptoms of HPAI infection in chickens include lack of energy and appetite, decreased egg production, soft-shelled or misshapen eggs, swelling of the head, comb, wattles, and hocks, purple discoloration of wattles, combs, and legs, nasal discharge, coughing, sneezing, incoordination, and diarrhea; birds infected with an HPAI virus may also die suddenly without any signs of infection. Notable HPAI viruses include influenza A (H5N1) and A (H7N9). HPAI viruses have been a major disease burden in the 21st century, resulting in the death of large numbers of birds. In H7N9's case, some circulating strains were originally low pathogenic but became high pathogenic by mutating to acquire the HA multibasic cleavage site. Avian H9N2 is also of concern because although it is low pathogenic, it is a common donor of genes to H5N1 and H7N9 during reassortment.
Migratory birds can spread influenza across long distances. An example of this was when an H5N1 strain in 2005 infected birds at Qinghai Lake, China, which is a stopover and breeding site for many migratory birds, subsequently spreading the virus to more than 20 countries across Asia, Europe, and the Middle East. AIVs can be transmitted from wild birds to domestic free-range ducks and in turn to poultry through contaminated water, aerosols, and fomites. Ducks therefore act as key intermediates between wild and domestic birds. Transmission to poultry typically occurs in backyard farming and live animal markets where multiple species interact with each other. From there, AIVs can spread to poultry farms in the absence of adequate biosecurity. Among poultry, HPAI transmission occurs through aerosols and contaminated feces, cages, feed, and dead animals. Back-transmission of HPAI viruses from poultry to wild birds has occurred and is implicated in mass die-offs and intercontinental spread.
AIVs have occasionally infected humans through aerosols, fomites, and contaminated water. Direct transmission from wild birds is rare. Instead, most transmission involves domestic poultry, mainly chickens, ducks, and geese but also a variety of other birds such as guinea fowl, partridge, pheasants, and quails. The primary risk factor for infection with AIVs is exposure to birds in farms and live poultry markets. Typically, infection with an AIV has an incubation period of 3–5 days but can be up to 9 days. H5N1 and H7N9 cause severe lower respiratory tract illness, whereas other AIVs such as H9N2 cause a milder upper respiratory tract illness, commonly with conjunctivitis. Limited transmission of avian H2, H5-7, H9, and H10 subtypes from one person to another through respiratory droplets, aerosols, and fomites has occurred, but sustained human-to-human transmission of AIVs has not occurred.
Pigs
Influenza in pigs is a respiratory disease similar to influenza in humans and is found worldwide. Asymptomatic infections are common. Symptoms typically appear 1–3 days after infection and include fever, lethargy, anorexia, weight loss, labored breathing, coughing, sneezing, and nasal discharge. In sows, pregnancy may be aborted. Complications include secondary infections and potentially fatal bronchopneumonia. Pigs become contagious within a day of infection and typically shed the virus for 7–10 days, during which the infection can spread rapidly within a herd. Pigs usually recover within 3–7 days after symptoms appear. Prevention and control measures include inactivated vaccines and culling infected herds. Influenza A virus subtypes H1N1, H1N2, and H3N2 are usually responsible for swine flu.
Some influenza A viruses can be transmitted via aerosols from pigs to humans and vice versa. Pigs, along with bats and quails, are recognized as a mixing vessel of influenza viruses because they have both α-2,3 and α-2,6 sialic acid receptors in their respiratory tract. Because of that, both avian and mammalian influenza viruses can infect pigs. If co-infection occurs, reassortment is possible. A notable example of this was the reassortment of a swine, avian, and human influenza virus that caused the 2009 flu pandemic. Spillover events from humans to pigs appear to be more common than from pigs to humans.
Other animals
Influenza viruses have been found in many other animals, including cattle, horses, dogs, cats, and marine mammals. Nearly all influenza A viruses are apparently descended from ancestral viruses in birds. The exceptions are the bat influenza-like viruses, which have an uncertain origin. These bat viruses have HA and NA subtypes H17, H18, N10, and N11. H17N10 and H18N11 are unable to reassort with other influenza A viruses, but they are still able to replicate in other mammals.
Equine influenza A viruses include H7N7 and two lineages of H3N8. H7N7, however, has not been detected in horses since the late 1970s, so it may have become extinct in horses. H3N8 in equines spreads via aerosols and causes respiratory illness. Equine H3N8 preferentially binds to α-2,3 sialic acids, so horses are usually considered dead-end hosts, but transmission to dogs and camels has occurred, raising concerns that horses may be mixing vessels for reassortment. In canines, the only influenza A viruses in circulation are equine-derived H3N8 and avian-derived H3N2. Canine H3N8 has not been observed to reassort with other subtypes. H3N2 has a much broader host range and can reassort with H1N1 and H5N1. An isolated case of H6N1, likely from a chicken, was found infecting a dog, so other AIVs may emerge in canines.
A wide range of other mammals have been affected by avian influenza A viruses, generally due to eating birds which had been infected. There have been instances where transmission of the disease between mammals, including seals and cows, may have occurred. Various mutations have been identified that are associated with AIVs adapting to mammals. Since HA proteins vary in which sialic acids they bind to, mutations in the HA receptor binding site can allow AIVs to infect mammals. Other mutations include mutations affecting which sialic acids NA proteins cleave and a mutation in the PB2 polymerase subunit that improves tolerance of lower temperatures in mammalian respiratory tracts and enhances RNP assembly by stabilizing NP and PB2 binding.
Influenza B virus is mainly found in humans but has also been detected in pigs, dogs, horses, and seals. Likewise, influenza C virus primarily infects humans but has been observed in pigs, dogs, cattle, and dromedary camels. Influenza D virus causes an influenza-like illness in pigs but its impact in its natural reservoir, cattle, is relatively unknown. It may cause respiratory disease resembling human influenza on its own, or it may be part of a bovine respiratory disease (BRD) complex with other pathogens during co-infection. BRD is a concern for the cattle industry, so influenza D virus' possible involvement in BRD has led to research on vaccines for cattle that can provide protection against influenza D virus. Two antigenic lineages are in circulation: D/swine/Oklahoma/1334/2011 (D/OK) and D/bovine/Oklahoma/660/2013 (D/660).
References
Further reading
Airborne diseases
Animal viral diseases
Healthcare-associated infections
Vaccine-preventable diseases
Wikipedia emergency medicine articles ready to translate
Wikipedia medicine articles ready to translate
Zoonoses | Influenza | [
"Biology"
] | 13,772 | [
"Vaccination",
"Vaccine-preventable diseases"
] |
118,396 | https://en.wikipedia.org/wiki/Band%20gap | In solid-state physics and solid-state chemistry, a band gap, also called a bandgap or energy gap, is an energy range in a solid where no electronic states exist. In graphs of the electronic band structure of solids, the band gap refers to the energy difference (often expressed in electronvolts) between the top of the valence band and the bottom of the conduction band in insulators and semiconductors. It is the energy required to promote an electron from the valence band to the conduction band. The resulting conduction-band electron (and the electron hole in the valence band) are free to move within the crystal lattice and serve as charge carriers to conduct electric current. It is closely related to the HOMO/LUMO gap in chemistry. If the valence band is completely full and the conduction band is completely empty, then electrons cannot move within the solid because there are no available states. If the electrons are not free to move within the crystal lattice, then there is no generated current due to no net charge carrier mobility. However, if some electrons transfer from the valence band (mostly full) to the conduction band (mostly empty), then current can flow (see carrier generation and recombination). Therefore, the band gap is a major factor determining the electrical conductivity of a solid. Substances having large band gaps (also called "wide" band gaps) are generally insulators, those with small band gaps (also called "narrow" band gaps) are semiconductors, and conductors either have very small band gaps or none, because the valence and conduction bands overlap to form a continuous band.
It is possible to produce laser induced insulator-metal transitions which have already been experimentally observed in some condensed matter systems, like thin films of , doped manganites, or in vanadium sesquioxide . These are special cases of the more general metal-to-nonmetal transitions phenomena which were intensively studied in the last decades. A one-dimensional analytic model of laser induced distortion of band structure was presented for a spatially periodic (cosine) potential. This problem is periodic both in space and time and can be solved analytically using the Kramers-Henneberger co-moving frame. The solutions can be given with the help of the Mathieu functions.
In semiconductor physics
Every solid has its own characteristic energy-band structure. This variation in band structure is responsible for the wide range of electrical characteristics observed in various materials.
The band structure and spectroscopy of a solid also depend on its dimensionality; one-dimensional, two-dimensional, and three-dimensional systems behave differently.
In semiconductors and insulators, electrons are confined to a number of bands of energy, and forbidden from other regions because there are no allowable electronic states for them to occupy. The term "band gap" refers to the energy difference between the top of the valence band and the bottom of the conduction band. Electrons are able to jump from one band to another. However, in order for a valence band electron to be promoted to the conduction band, it requires a specific minimum amount of energy for the transition. This required energy is an intrinsic characteristic of the solid material. Electrons can gain enough energy to jump to the conduction band by absorbing either a phonon (heat) or a photon (light).
A semiconductor is a material with an intermediate-sized, non-zero band gap that behaves as an insulator at T=0K, but allows thermal excitation of electrons into its conduction band at temperatures that are below its melting point. In contrast, a material with a large band gap is an insulator. In conductors, the valence and conduction bands may overlap, so there is no longer a bandgap with forbidden regions of electronic states.
The conductivity of intrinsic semiconductors is strongly dependent on the band gap. The only available charge carriers for conduction are the electrons that have enough thermal energy to be excited across the band gap and the electron holes that are left behind when such an excitation occurs.
Band-gap engineering is the process of controlling or altering the band gap of a material by controlling the composition of certain semiconductor alloys, such as GaAlAs, InGaAs, and InAlAs. It is also possible to construct layered materials with alternating compositions by techniques like molecular-beam epitaxy. These methods are exploited in the design of heterojunction bipolar transistors (HBTs), laser diodes and solar cells.
The distinction between semiconductors and insulators is a matter of convention. One approach is to think of semiconductors as a type of insulator with a narrow band gap. Insulators with a larger band gap, usually greater than 4 eV, are not considered semiconductors and generally do not exhibit semiconductive behaviour under practical conditions. Electron mobility also plays a role in determining a material's informal classification.
The band-gap energy of semiconductors tends to decrease with increasing temperature. When temperature increases, the amplitude of atomic vibrations increases, leading to larger interatomic spacing. The interaction between the lattice phonons and the free electrons and holes will also affect the band gap to a smaller extent. The relationship between band gap energy and temperature can be described by Varshni's empirical expression (named after Y. P. Varshni),
Eg(T) = Eg(0) − αT²/(T + β), where Eg(0), α and β are material constants.
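As a rough numerical illustration of Varshni's relation, the sketch below evaluates the gap at a few temperatures. The constants are approximate literature values for silicon used purely as an example; they are assumptions, not values taken from this article.

```python
# Illustrative evaluation of Varshni's empirical relation
#   E_g(T) = E_g(0) - alpha * T^2 / (T + beta)
# using assumed, approximate constants for silicon
# (Eg(0) ~ 1.17 eV, alpha ~ 4.73e-4 eV/K, beta ~ 636 K).

def varshni_gap(T, Eg0=1.17, alpha=4.73e-4, beta=636.0):
    """Band gap in eV at temperature T (kelvin)."""
    return Eg0 - alpha * T**2 / (T + beta)

for T in (0, 100, 300, 500):
    print(f"T = {T:3d} K  ->  E_g ~ {varshni_gap(T):.3f} eV")
```

With these constants the gap shrinks by roughly 0.05 eV between 0 K and room temperature, consistent with the qualitative trend described above.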
Furthermore, lattice vibrations increase with temperature, which increases the effect of electron scattering. Additionally, the number of charge carriers within a semiconductor increases, as more carriers have the energy required to cross the band-gap threshold, so the conductivity of semiconductors also increases with increasing temperature. External pressure also influences the electronic structure of semiconductors and, therefore, their optical band gaps.
In a regular semiconductor crystal, the band gap is fixed owing to continuous energy states. In a quantum dot crystal, the band gap is size dependent and can be altered to produce a range of energies between the valence band and conduction band. It is also known as quantum confinement effect.
Band gaps can be either direct or indirect, depending on the electronic band structure of the material.
As mentioned earlier, band structure and spectroscopy differ with dimensionality. One-dimensional non-metallic solids have optical properties that depend on the electronic transitions between the valence and conduction bands. The spectroscopic transition probability between an initial orbital φi and a final orbital φf depends on the integral ∫ φf* û·ε φi, where û is the dipole moment operator and ε is the electric field vector.
The band structure of two-dimensional solids arises from the overlap of atomic orbitals. The simplest two-dimensional crystal contains identical atoms arranged on a square lattice. In one dimension, even a weak periodic potential produces energy splitting at the Brillouin zone edge and hence a gap between bands. This one-dimensional behavior does not carry over directly to two dimensions because of the additional degrees of freedom of motion. Nevertheless, a band gap can be produced by a strong periodic potential in two- and three-dimensional cases.
Direct and indirect band gap
Based on their band structure, materials are characterised with a direct band gap or indirect band gap. In the free-electron model, k is the momentum of a free electron and assumes unique values within the Brillouin zone that outlines the periodicity of the crystal lattice. If the momentum of the lowest energy state in the conduction band and the highest energy state of the valence band of a material have the same value, then the material has a direct bandgap. If they are not the same, then the material has an indirect band gap and the electronic transition must undergo momentum transfer to satisfy conservation. Such indirect "forbidden" transitions still occur, however at very low probabilities and weaker energy. For materials with a direct band gap, valence electrons can be directly excited into the conduction band by a photon whose energy is larger than the bandgap. In contrast, for materials with an indirect band gap, a photon and phonon must both be involved in a transition from the valence band top to the conduction band bottom, involving a momentum change. Therefore, direct bandgap materials tend to have stronger light emission and absorption properties and tend to be better suited for photovoltaics (PVs), light-emitting diodes (LEDs), and laser diodes; however, indirect bandgap materials are frequently used in PVs and LEDs when the materials have other favorable properties.
Light-emitting diodes and laser diodes
LEDs and laser diodes usually emit photons with energy close to and slightly larger than the band gap of the semiconductor material from which they are made. Therefore, as the band gap energy increases, the LED or laser color changes from infrared to red, through the rainbow to violet, then to UV.
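To make the link between band-gap energy and emission color concrete, a photon's wavelength relates to its energy by λ = hc/E, or roughly λ (nm) ≈ 1240 / E (eV). The sketch below applies this relation to a few illustrative band-gap values; the numbers are examples only, not properties of specific devices.

```python
# Approximate emission wavelength of an idealized LED whose photons carry roughly the
# band-gap energy, via lambda (nm) ~ 1239.8 / E_g (eV), i.e. lambda = h*c / E.

def emission_wavelength_nm(band_gap_ev: float) -> float:
    return 1239.8 / band_gap_ev

for gap in (1.4, 1.9, 2.7, 3.4):  # illustrative gaps in eV, near-infrared through UV
    print(f"E_g = {gap:.1f} eV  ->  ~{emission_wavelength_nm(gap):.0f} nm")
```

With these example values, larger gaps map to shorter wavelengths (about 886 nm down to about 365 nm), matching the infrared-to-UV progression described above.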
Photovoltaic cells
The optical band gap (see below) determines what portion of the solar spectrum a photovoltaic cell absorbs. A semiconductor will not absorb photons with energy less than the band gap, while for photons with energy exceeding the band gap, the excess energy above the gap is lost as heat. Neither contributes to the efficiency of a solar cell. One way to circumvent this problem is the so-called photon management concept, in which the solar spectrum is modified to match the absorption profile of the solar cell.
List of band gaps
Below are band gap values for some selected materials. For a comprehensive list of band gaps in semiconductors, see List of semiconductor materials.
Optical versus electronic bandgap
In materials with a large exciton binding energy, it is possible for a photon to have just barely enough energy to create an exciton (bound electron–hole pair), but not enough energy to separate the electron and hole (which are electrically attracted to each other). In this situation, there is a distinction between "optical band gap" and "electronic band gap" (or "transport gap"). The optical bandgap is the threshold for photons to be absorbed, while the transport gap is the threshold for creating an electron–hole pair that is not bound together. The optical bandgap is at lower energy than the transport gap.
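The relationship described here is simple subtraction: the optical gap lies below the transport gap by the exciton binding energy. A minimal sketch with made-up numbers (both values are hypothetical, chosen only for illustration) follows.

```python
# Hypothetical numbers illustrating "optical gap = transport gap - exciton binding energy".
transport_gap_ev = 2.5    # assumed electronic (transport) gap
exciton_binding_ev = 0.3  # assumed exciton binding energy
optical_gap_ev = transport_gap_ev - exciton_binding_ev

print(f"Photons above {optical_gap_ev:.1f} eV can create bound excitons;")
print(f"free electron-hole pairs require at least {transport_gap_ev:.1f} eV.")
```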
In almost all inorganic semiconductors, such as silicon, gallium arsenide, etc., there is very little interaction between electrons and holes (very small exciton binding energy), and therefore the optical and electronic bandgap are essentially identical, and the distinction between them is ignored. However, in some systems, including organic semiconductors and single-walled carbon nanotubes, the distinction may be significant.
Band gaps for other quasi-particles
In photonics, band gaps or stop bands are ranges of photon frequencies where, if tunneling effects are neglected, no photons can be transmitted through a material. A material exhibiting this behaviour is known as a photonic crystal. The concept of hyperuniformity has broadened the range of photonic band gap materials, beyond photonic crystals. By applying the technique in supersymmetric quantum mechanics, a new class of optical disordered materials has been suggested, which support band gaps perfectly equivalent to those of crystals or quasicrystals.
Similar physics applies to phonons in a phononic crystal.
Materials
Aluminium gallium arsenide
Boron nitride
Indium gallium arsenide
Indium arsenide
Gallium arsenide
Gallium nitride
Germanium
Metallic hydrogen
List of electronics topics
Electronics
Bandgap voltage reference
Condensed matter physics
Direct and indirect bandgaps
Electrical conduction
Electron hole
Field-effect transistor
Light-emitting diode
Photodiode
Photoresistor
Photovoltaics
Solar cell
Solid state physics
Semiconductor
Semiconductor devices
Strongly correlated material
Valence band
See also
Wide-bandgap semiconductors
Band bending
Spectral density
Pseudogap
Tauc plot
Moss–Burstein effect
Urbach energy
References
External links
Direct Band Gap Energy Calculator
Electron states
Electronic band structures
Quantum mechanics
Spectroscopy
Nuclear magnetic resonance | Band gap | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,501 | [
"Electron",
"Electron states",
"Nuclear magnetic resonance",
"Spectrum (physical sciences)",
"Molecular physics",
"Instrumental analysis",
"Theoretical physics",
"Quantum mechanics",
"Electronic band structures",
"Condensed matter physics",
"Nuclear physics",
"Spectroscopy"
] |
118,450 | https://en.wikipedia.org/wiki/Innovation | Innovation is the practical implementation of ideas that result in the introduction of new goods or services or improvement in offering goods or services. ISO TC 279 in the standard ISO 56000:2020 defines innovation as "a new or changed entity, realizing or redistributing value". Others have different definitions; a common element in the definitions is a focus on newness, improvement, and spread of ideas or technologies.
Innovation often takes place through the development of more-effective products, processes, services, technologies, art works
or business models that innovators make available to markets, governments and society.
Innovation is related to, but not the same as, invention: innovation is more apt to involve the practical implementation of an invention (i.e. new / improved ability) to make a meaningful impact in a market or society, and not all innovations require a new invention.
Technical innovation often manifests itself via the engineering process when the problem being solved is of a technical or scientific nature. The opposite of innovation is exnovation.
Definition
Surveys of the literature on innovation have found a variety of definitions. In 2009, Baregheh et al. found around 60 definitions in different scientific papers, while a 2014 survey found over 40. Based on their survey, Baregheh et al. attempted to formulate a multidisciplinary definition and arrived at the following: "Innovation is the multi-stage process whereby organizations transform ideas into new/improved products, service or processes, in order to advance, compete and differentiate themselves successfully in their marketplace"
In a study of how the software industry considers innovation, the following definition given by Crossan and Apaydin was considered to be the most complete. Crossan and Apaydin built on the definition given in the Organisation for Economic Co-operation and Development (OECD) Oslo Manual:
American sociologist Everett Rogers, defined it as follows:"An idea, practice, or object that is perceived as new by an individual or other unit of adoption"
According to Alan Altshuler and Robert D. Behn, innovation includes original invention and creative use. These writers define innovation as generation, admission and realization of new ideas, products, services and processes.
Two main dimensions of innovation are degree of novelty (i.e. whether an innovation is new to the firm, new to the market, new to the industry, or new to the world) and kind of innovation (i.e. whether it is process or product-service system innovation). Organizational researchers have also distinguished innovation separately from creativity, by providing an updated definition of these two related constructs:
Peter Drucker wrote:
Creativity and innovation
In general, innovation is distinguished from creativity by its emphasis on the implementation of creative ideas in an economic setting. Amabile and Pratt in 2016, drawing on the literature, distinguish between creativity ("the production of novel and useful ideas by an individual or small group of individuals working together") and innovation ("the successful implementation of creative ideas within an organization").
Economics and innovation
In 1957 the economist Robert Solow was able to demonstrate that economic growth had two components. The first component could be attributed to growth in production including wage labour and capital. The second component was found to be productivity. Ever since, economic historians have tried to explain the process of innovation itself, rather than assuming that technological inventions and technological progress result in productivity growth.
The concept of innovation emerged after the Second World War, largely thanks to the work of Joseph Schumpeter (1883–1950), who described the economic effects of innovation processes as creative destruction. Today, consistent neo-Schumpeterian scholars see innovation not as a neutral or apolitical process. Rather, innovation can be seen as a socially constructed process. Therefore, its conception depends on the political and societal context in which innovation is taking place. According to Shannon Walsh, "innovation today is best understood as innovation under capital" (p. 346). This means that the current hegemonic purpose for innovation is capital valorisation and profit maximization, exemplified by the appropriation of knowledge (e.g., through patenting), the widespread practice of planned obsolescence (including lack of repairability by design), and the Jevons paradox, which describes negative consequences of eco-efficiency as energy-reducing effects tend to trigger mechanisms leading to energy-increasing effects.
Types
Several frameworks have been proposed for defining types of innovation.
Sustaining vs disruptive innovation
One framework proposed by Clayton Christensen draws a distinction between sustaining and disruptive innovations. Sustaining innovation is the improvement of a product or service based on the known needs of current customers (e.g. faster microprocessors, flat screen televisions). Disruptive innovation in contrast refers to a process by which a new product or service creates a new market (e.g. transistor radio, free crowdsourced encyclopedia, etc.), eventually displacing established competitors. According to Christensen, disruptive innovations are critical to long-term success in business.
Disruptive innovation is often enabled by disruptive technology. Marco Iansiti and Karim R. Lakhani define foundational technology as having the potential to create new foundations for global technology systems over the longer term. Foundational technology tends to transform business operating models as entirely new business models emerge over many years, with gradual and steady adoption of the innovation leading to waves of technological and institutional change that gain momentum more slowly. The advent of the packet-switched communication protocol TCP/IP—originally introduced in 1972 to support a single use case for United States Department of Defense electronic communication (email), and which gained widespread adoption only in the mid-1990s with the advent of the World Wide Web—is a foundational technology.
Four types of innovation model
Another framework was suggested by Henderson and Clark. They divide innovation into four types;
Radical innovation: "establishes a new dominant design and, hence, a new set of core design concepts embodied in components that are linked together in a new architecture." (p. 11)
Incremental innovation: "refines and extends an established design. Improvement occurs in individual components, but the underlying core design concepts, and the links between them, remain the same." (p. 11)
Architectural innovation: "innovation that changes only the relationships between them [the core design concepts]" (p. 12)
Modular Innovation: "innovation that changes only the core design concepts of a technology" (p. 12)
While Henderson and Clark as well as Christensen talk about technical innovation there are other kinds of innovation as well, such as service innovation and organizational innovation.
Non-economic innovation
As distinct from business-centric views of innovation concentrating on generating profit for a firm, other types of innovation include: social innovation, religious innovation,
sustainable innovation (or green innovation),
and responsible innovation.
Open innovation
One type of innovation that has been the focus of recent literature is open innovation or "crowd sourcing." Open innovation refers to the use of individuals outside of an organizational context who have no expertise in a given area to solve complex problems.
User innovation
Similar to open innovation, user innovation is when companies rely on users of their goods and services to come up with, help to develop, and even help to implement new ideas.
History
Innovation must be understood in the historical setting in which its processes were and are taking place. The first full-length discussion of innovation was published by the Greek philosopher and historian Xenophon (430–355 BCE). He viewed the concept as multifaceted and connected it to political action. The word for innovation that he uses, kainotomia, had previously occurred in two plays by Aristophanes. Plato discussed innovation in his Laws dialogue and was not very fond of the concept. He was skeptical of it both in culture (dancing and art) and in education (he did not believe in introducing new games and toys to children). Aristotle (384–322 BCE) did not like organizational innovations: he believed that all possible forms of organization had already been discovered.
Before the 4th century in Rome, the words novitas and res nova / nova res were used with either negative or positive judgment on the innovator. This concept meant "renewing" and was incorporated into the Latin verb innovo ("I renew" or "I restore") in the centuries that followed. The Vulgate version of the Bible (late 4th century CE) used the word in spiritual as well as political contexts. It also appeared in poetry, mainly with spiritual connotations, but was also connected to political, material and cultural aspects.
Machiavelli's The Prince (1513) discusses innovation in a political setting. Machiavelli portrays it as a strategy a Prince may employ in order to cope with a constantly changing world as well as the corruption within it. Here innovation is described as introducing change in government (new laws and institutions); Machiavelli's later book The Discourses (1528) characterises innovation as imitation, as a return to the original that has been corrupted by people and by time. Thus for Machiavelli innovation came with positive connotations. This is however an exception in the usage of the concept of innovation from the 16th century onward. No innovator from the Renaissance until the late 19th century applied the word to themselves; it was a word used to attack enemies.
From the 1400s through the 1600s, the concept of innovation was pejorative – the term was an early-modern synonym for "rebellion", "revolt" and "heresy". In the 1800s, people promoting capitalism saw socialism as an innovation and spent a great deal of energy working against it. For instance, Goldwin Smith (1823–1910) saw the spread of social innovations as an attack on money and banks. These social innovations included socialism, communism, nationalization and cooperative associations.
In the 20th century, the concept of innovation did not become popular until after the Second World War of 1939–1945. This is the point in time when people started to talk about technological product innovation and tie it to the idea of economic growth and competitive advantage. Joseph Schumpeter (1883–1950), who contributed greatly to the study of innovation economics, is seen as the one who made the term popular. Schumpeter argued that industries must incessantly revolutionize the economic structure from within, that is: innovate with better or more effective processes and products, as well as with market distribution (such as the transition from the craft shop to factory). He famously asserted that "creative destruction is the essential fact about capitalism". In business and in economics, innovation can provide a catalyst for growth when entrepreneurs continuously search for better ways to satisfy their consumer base with improved quality, durability, service and price - searches which may come to fruition in innovation with advanced technologies and organizational strategies. Schumpeter's findings coincided with rapid advances in transportation and communications in the beginning of the 20th century, which had huge impacts for the economic concepts of factor endowments and comparative advantage as new combinations of resources or production techniques constantly transform markets to satisfy consumer needs. Hence, innovative behaviour becomes relevant for economic success.
Process of innovation
An early model included only three phases of innovation. According to Utterback (1971), these phases were: 1) idea generation, 2) problem solving, and 3) implementation. By the time one completed phase 2, one had an invention, but until one got it to the point of having an economic impact, one did not have an innovation. Diffusion was not considered a phase of innovation. Focus at this point in time was on manufacturing.
A prime example of innovation involved the boom of Silicon Valley start-ups out of the Stanford Industrial Park. In 1957, dissatisfied employees of Shockley Semiconductor, the company of Nobel laureate William Shockley, co-inventor of the transistor, left to form an independent firm, Fairchild Semiconductor. After several years, Fairchild developed into a formidable presence in the sector. Eventually, these founders left to start their own companies based on their own unique ideas, and then leading employees started their own firms. Over the next 20 years this process resulted in the momentous startup-company explosion of information-technology firms. Silicon Valley began as 65 new enterprises born out of Shockley's eight former employees.
All organizations can innovate, including for example hospitals, universities, and local governments. The organization requires a proper structure in order to retain competitive advantage. Organizations can also improve profits and performance by providing work groups opportunities and resources to innovate, in addition to employee's core job tasks. Executives and managers have been advised to break away from traditional ways of thinking and use change to their advantage. The world of work is changing with the increased use of technology and companies are becoming increasingly competitive. Companies will have to downsize or reengineer their operations to remain competitive. This will affect employment as businesses will be forced to reduce the number of people employed while accomplishing the same amount of work if not more.
For instance, former Mayor Martin O'Malley pushed the City of Baltimore to use CitiStat, a performance-measurement data and management system that allows city officials to maintain statistics on several areas from crime trends to the conditions of potholes. This system aided in better evaluation of policies and procedures with accountability and efficiency in terms of time and money. In its first year, CitiStat saved the city $13.2 million. Even mass transit systems have innovated with hybrid bus fleets to real-time tracking at bus stands. In addition, the growing use of mobile data terminals in vehicles, that serve as communication hubs between vehicles and a control center, automatically send data on location, passenger counts, engine performance, mileage and other information. This tool helps to deliver and manage transportation systems.
Still other innovative strategies include hospitals digitizing medical information in electronic medical records. For example, the U.S. Department of Housing and Urban Development's HOPE VI initiatives turned severely distressed public housing in urban areas into revitalized, mixed-income environments; the Harlem Children's Zone used a community-based approach to educate local area children; and the Environmental Protection Agency's brownfield grants facilitates turning over brownfields for environmental protection, green spaces, community and commercial development.
Sources of innovation
Innovation may occur due to effort from a range of different agents, by chance, or as a result of a major system failure. According to Peter F. Drucker, the general sources of innovations are changes in industry structure, in market structure, in local and global demographics, in human perception, in the amount of available scientific knowledge, etc.
In the simplest linear model of innovation the traditionally recognized source is manufacturer innovation. This is where a person or business innovates in order to sell the innovation.
Another source of innovation is end-user innovation. This is where a person or company develops an innovation for their own (personal or in-house) use because existing products do not meet their needs. MIT economist Eric von Hippel identified end-user innovation as the most important source in his classic book on the subject, "The Sources of Innovation".
The robotics engineer Joseph F. Engelberger asserts that innovations require only three things:
a recognized need
competent people with relevant technology
financial support
The Kline chain-linked model of innovation places emphasis on potential market needs as drivers of the innovation process, and describes the complex and often iterative feedback loops between marketing, design, manufacturing, and R&D.
In the 21st century the Islamic State (IS) movement, while decrying religious innovations, has innovated in military tactics, recruitment, ideology and geopolitical activity.
Facilitating innovation
Innovation by businesses is achieved in many ways, with much attention now given to formal research and development (R&D) for "breakthrough innovations". R&D helps spur patents and other scientific innovations that lead to productivity growth in such areas as industry, medicine, engineering, and government. Yet innovations can also be developed by less formal on-the-job modifications of practice, through exchange and combination of professional experience, and by many other routes. Investigation of the relationship between the concepts of innovation and technology transfer has revealed overlap. The more radical and revolutionary innovations tend to emerge from R&D, while more incremental innovations may emerge from practice – but there are many exceptions to each of these trends.
Information technology and changing business processes and management style can produce a work climate favorable to innovation. For example, the software tool company Atlassian conducts quarterly "ShipIt Days" in which employees may work on anything related to the company's products. Google employees work on self-directed projects for 20% of their time (known as Innovation Time Off). Both companies cite these bottom-up processes as major sources for new products and features.
Customers buying products or using services are an important factor in innovation. As a result, organizations may incorporate users in focus groups (user-centred approach), work closely with so-called lead users (lead-user approach), or users might adapt their products themselves. The lead user method focuses on idea generation based on leading users to develop breakthrough innovations. U-STIR, a project to innovate Europe's surface transportation system, employs such workshops. Regarding this user innovation, a great deal of innovation is done by those actually implementing and using technologies and products as part of their normal activities. Sometimes user-innovators may become entrepreneurs selling their product; they may choose to trade their innovation in exchange for other innovations, or their innovations may be adopted by their suppliers. Nowadays, they may also choose to freely reveal their innovations, using methods like open source. In such networks of innovation the users or communities of users can further develop technologies and reinvent their social meaning.
One technique for innovating a solution to an identified problem is to actually attempt an experiment with many possible solutions. This technique was famously used by Thomas Edison's laboratory to find a version of the incandescent light bulb economically viable for home use, which involved searching through thousands of possible filament designs before settling on carbonized bamboo.
This technique is sometimes used in pharmaceutical drug discovery. Thousands of chemical compounds are subjected to high-throughput screening to see if they have any activity against a target molecule which has been identified as biologically significant to a disease. Promising compounds can then be studied; modified to improve efficacy and reduce side effects, evaluated for cost of manufacture; and if successful turned into treatments.
The related technique of A/B testing is often used to help optimize the design of web sites and mobile apps. This is used by major sites such as amazon.com, Facebook, Google, and Netflix. Procter & Gamble uses computer-simulated products and online user panels to conduct larger numbers of experiments to guide the design, packaging, and shelf placement of consumer products. Capital One uses this technique to drive credit card marketing offers.
Goals and failures of innovation
Scholars have argued that the main purpose for innovation today is profit maximization and capital valorisation. Consequently, programs of organizational innovation are typically tightly linked to organizational goals and growth objectives, to the business plan, and to market competitive positioning. Davila et al. (2006) note, "Companies cannot grow through cost reduction and reengineering alone... Innovation is the key element in providing aggressive top-line growth, and for increasing bottom-line results". One survey across a large number of manufacturing and services organizations found that systematic programs of organizational innovation are most frequently driven by: improved quality, creation of new markets, extension of the product range, reduced labor costs, improved production processes, reduced materials cost, reduced environmental damage, replacement of products/services, reduced energy consumption, and conformance to regulations.
Different goals are appropriate for different products, processes, and services. According to Andrea Vaona and Mario Pianta, some example goals of innovation could stem from two different types of technological strategies: technological competitiveness and active price competitiveness. Technological competitiveness may have a tendency to be pursued by smaller firms and can be characterized as "efforts for market-oriented innovation, such as a strategy of market expansion and patenting activity." On the other hand, active price competitiveness is geared toward process innovations that lead to efficiency and flexibility, which tend to be pursued by large, established firms as they seek to expand their market foothold. Whether innovation goals are successfully achieved or otherwise depends greatly on the environment prevailing in the organization.
Organization-internal innovation failures
Failure of organizational innovation programs has been widely researched and the causes vary considerably. Some causes are external to the organization and outside its influence of control. Others are internal and ultimately within the control of the organization. Internal causes of failure can be divided into causes associated with the cultural infrastructure and causes associated with the innovation process itself. David O'Sullivan wrote that causes of failure within the innovation process in most organizations can be distilled into five types: poor goal definition, poor alignment of actions to goals, poor participation in teams, poor monitoring of results, and poor communication and access to information.
Environmental and social innovation failures
Innovation is generally framed as an inherently positive force, delivering growth and prosperity for all, and is often deemed as both inevitable and unstoppable. In this sense, future innovations are often hailed as solutions to current problems, such as climate change. This business-as-usual approach would mean continued and increased globalization as well as quick innovation cycles which supposedly will maximize the competitiveness of processes, in the end leading to Eco-economic decoupling or Green growth. Yet, it is unclear whether innovative solutions will be capable of solving the climate crisis: According to Mario Giampietro and Silvio Funtowicz (2020), this positive framing of innovation "demonstrates [a] lack of understanding of the biophysical roots of the economic process and the seriousness of the sustainability crisis". This is due to the fact that innovation can be understood in its specific historic and cultural context: The prevailing hegemonic view on innovation, as emphasized by Ben Robra et al. (2023), aligns closely with capitalist mode of production, shown by the mantra of 'innovate or die.' From this viewpoint, innovation is primarily driven by the imperative of capital accumulation, serving the sole purpose of increasing returns, neglecting societal needs such as a clean environment or social equality and in general the biophysical limits of our planet.
Diffusion
Diffusion of innovation research was first started in 1903 by seminal researcher Gabriel Tarde, who first plotted the S-shaped diffusion curve. Tarde defined the innovation-decision process as a series of steps that include:
knowledge
forming an attitude
a decision to adopt or reject
implementation and use
confirmation of the decision
Once innovation occurs, innovations may be spread from the innovator to other individuals and groups. It has been proposed that the lifecycle of innovations can be described using the 's-curve' or diffusion curve. The s-curve maps growth of revenue or productivity against time. In the early stage of a particular innovation, growth is relatively slow as the new product establishes itself. At some point, customers begin to demand the product and its growth increases more rapidly. New incremental innovations or changes to the product allow growth to continue. Towards the end of its lifecycle, growth slows and may even begin to decline. In the later stages, no amount of new investment in that product will yield a normal rate of return.
The s-curve derives from an assumption that new products are likely to have "product life" – i.e., a start-up phase, a rapid increase in revenue and eventual decline. In fact, the great majority of innovations never get off the bottom of the curve, and never produce normal returns.
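The s-curve is commonly approximated with a logistic function. The sketch below is a minimal illustration with arbitrary parameters (saturation level, growth rate, midpoint), not a fitted model of any particular innovation.

```python
import math

def s_curve(t: float, saturation: float = 100.0,
            growth_rate: float = 0.8, midpoint: float = 10.0) -> float:
    """Logistic s-curve: slow start, rapid middle growth, then saturation."""
    return saturation / (1.0 + math.exp(-growth_rate * (t - midpoint)))

for year in range(0, 21, 4):
    print(f"year {year:2d}: adoption ~ {s_curve(year):5.1f}%")
```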
Innovative companies will typically be working on new innovations that will eventually replace older ones. Successive s-curves will come along to replace older ones and continue to drive growth upwards. In a pair of successive s-curves, the first represents a current technology, while the second represents an emerging technology that currently yields lower growth but will eventually overtake the current technology and lead to even greater levels of growth. The length of life will depend on many factors.
Measuring innovation
Measuring innovation is inherently difficult as it implies commensurability so that comparisons can be made in quantitative terms. Innovation, however, is by definition novelty. Comparisons are thus often meaningless across products or services. Nevertheless, Edison et al. in their review of literature on innovation management found 232 innovation metrics. They categorized these measures along five dimensions: inputs to the innovation process, output from the innovation process, effect of the innovation output, measures to assess the activities in an innovation process, and availability of factors that facilitate such a process.
There are two different types of measures for innovation: the organizational level and the political level.
Organizational-level
The measure of innovation at the organizational level relates to individuals, team-level assessments, and private companies from the smallest to the largest. Measurement of innovation for organizations can be conducted by surveys, workshops, consultants, or internal benchmarking. There is today no established general way to measure organizational innovation. Corporate measurements are generally structured around balanced scorecards which cover several aspects of innovation such as business measures related to finances, innovation process efficiency, employees' contribution and motivation, as well as benefits for customers. Measured values will vary widely between businesses, covering for example new product revenue, spending on R&D, time to market, customer and employee perception and satisfaction, number of patents, and additional sales resulting from past innovations.
Political-level
For the political level, measures of innovation are more focused on a country's or region's competitive advantage through innovation. In this context, organizational capabilities can be evaluated through various evaluation frameworks, such as those of the European Foundation for Quality Management. The OECD Oslo Manual (1992) suggests standard guidelines on measuring technological product and process innovation. Some people consider the Oslo Manual complementary to the Frascati Manual from 1963. The new Oslo Manual from 2018 takes a wider perspective on innovation, and includes marketing and organizational innovation. These standards are used, for example, in the European Community Innovation Surveys.
Other ways of measuring innovation have traditionally been expenditure, for example, investment in R&D (Research and Development) as percentage of GNP (Gross National Product). Whether this is a good measurement of innovation has been widely discussed and the Oslo Manual has incorporated some of the critique against earlier methods of measuring. The traditional methods of measuring still inform many policy decisions. The EU Lisbon Strategy has set as a goal that their average expenditure on R&D should be 3% of GDP.
Indicators
Many scholars claim that there is a great bias towards the "science and technology mode" (S&T-mode or STI-mode), while the "learning by doing, using and interacting mode" (DUI-mode) is ignored and measurements and research about it rarely done. For example, an institution may be high tech with the latest equipment, but lacks crucial doing, using and interacting tasks important for innovation.
A common industry view (unsupported by empirical evidence) is that comparative cost-effectiveness research is a form of price control which reduces returns to industry, and thus limits R&D expenditure, stifles future innovation and compromises new products access to markets.
Some academics claim cost-effectiveness research is a valuable value-based measure of innovation which accords "truly significant" therapeutic advances (i.e. providing "health gain") higher prices than free market mechanisms. Such value-based pricing has been viewed as a means of indicating to industry the type of innovation that should be rewarded from the public purse.
An Australian academic developed the case that national comparative cost-effectiveness analysis systems should be viewed as measuring "health innovation" as an evidence-based policy concept for valuing innovation distinct from valuing through competitive markets, a method which requires strong anti-trust laws to be effective, on the basis that both methods of assessing pharmaceutical innovations are mentioned in annex 2C.1 of the Australia-United States Free Trade Agreement.
Indices
Several indices attempt to measure innovation and rank entities based on these measures, such as:
Bloomberg Innovation Index
"Bogota Manual" similar to the Oslo Manual, is focused on Latin America and the Caribbean countries.
"Creative Class" developed by Richard Florida
EIU Innovation Ranking
Global Competitiveness Report
Global Innovation Index (GII), by INSEAD
Information Technology and Innovation Foundation (ITIF) Index
Innovation 360 – From the World Bank. Aggregates innovation indicators (and more) from a number of different public sources
Innovation Capacity Index (ICI) published by a large number of international professors working in a collaborative fashion. The top scorers of ICI 2009–2010 were: 1. Sweden 82.2; 2. Finland 77.8; and 3. United States 77.5
Innovation Index, developed by the Indiana Business Research Center, to measure innovation capacity at the county or regional level in the United States
Innovation Union Scoreboard, developed by the European Union
innovationsindikator for Germany, developed by the Federation of German Industries (Bundesverband der Deutschen Industrie) in 2005
INSEAD Innovation Efficacy Index
International Innovation Index, produced jointly by The Boston Consulting Group, the National Association of Manufacturers (NAM) and its nonpartisan research affiliate The Manufacturing Institute, is a worldwide index measuring the level of innovation in a country; NAM describes it as the "largest and most comprehensive global index of its kind"
Management Innovation Index – Model for Managing Intangibility of Organizational Creativity: Management Innovation Index
NYCEDC Innovation Index, by the New York City Economic Development Corporation, tracks New York City's "transformation into a center for high-tech innovation. It measures innovation in the City's growing science and technology industries and is designed to capture the effect of innovation on the City's economy"
OECD Oslo Manual is focused on North America, Europe, and other rich economies
State Technology and Science Index, developed by the Milken Institute, is a U.S.-wide benchmark to measure the science and technology capabilities that furnish high paying jobs based around key components
World Competitiveness Scoreboard
Rankings
Common areas of focus include: high-tech companies, manufacturing, patents, post-secondary education, research and development, and research personnel. One widely cited ranking of the top 10 countries is based on the 2020 Bloomberg Innovation Index. However, studies may vary widely; for example the Global Innovation Index 2016 ranks Switzerland as number one, whereas countries like South Korea, Japan, and China do not even make the top ten.
Rate of innovation
In 2005 Jonathan Huebner, a physicist working at the Pentagon's Naval Air Warfare Center, argued on the basis of both U.S. patents and world technological breakthroughs, per capita, that the rate of human technological innovation peaked in 1873 and has been slowing ever since. In his article, he asked "Will the level of technology reach a maximum and then decline as in the Dark Ages?" In later comments to New Scientist magazine, Huebner clarified that while he believed that we will reach a rate of innovation in 2024 equivalent to that of the Dark Ages, he was not predicting the reoccurrence of the Dark Ages themselves.
John Smart criticized the claim and asserted that technological singularity researcher Ray Kurzweil and others showed a "clear trend of acceleration, not deceleration" when it came to innovations. The foundation replied to Huebner in the journal in which his article was published, citing Second Life and eHarmony as proof of accelerating innovation, to which Huebner replied.
However, Huebner's findings were confirmed in 2010 with U.S. Patent Office data, and in a 2012 paper.
Innovation and development
The theme of innovation as a tool to disrupting patterns of poverty has gained momentum since the mid-2000s among major international development actors such as DFID, Gates Foundation's use of the Grand Challenge funding model, and USAID's Global Development Lab. Networks have been established to support innovation in development, such as D-Lab at MIT. Investment funds have been established to identify and catalyze innovations in developing countries, such as DFID's Global Innovation Fund, Human Development Innovation Fund, and (in partnership with USAID) the Global Development Innovation Ventures.
The United States has to continue to compete on a level playing field with its competitors in federal research. This can be achieved by being strategically innovative through investment in basic research and science.
Government policies
Given its effects on efficiency, quality of life, and productive growth, innovation is a key driver in improving society and economy. Consequently, policymakers have worked to develop environments that will foster innovation, from funding research and development to establishing regulations that do not inhibit innovation, funding the development of innovation clusters, and using public purchasing and standardisation to 'pull' innovation through.
For instance, experts are advocating that the U.S. federal government launch a National Infrastructure Foundation, a nimble, collaborative strategic intervention organization that will house innovation programs from fragmented silos under one entity, inform federal officials on innovation performance metrics, strengthen industry-university partnerships, and support innovation economic development initiatives, especially to strengthen regional clusters. Because clusters are the geographic incubators of innovative products and processes, a cluster development grant program would also be targeted for implementation. By focusing on innovating in such areas as precision manufacturing, information technology, and clean energy, other areas of national concern would be tackled including government debt, carbon footprint, and oil dependence. The U.S. Economic Development Administration recognizes this reality in its continued Regional Innovation Clusters initiative. The United States also has to integrate its supply chain and improve its applied research capability and downstream process innovation.
Many countries recognize the importance of innovation including Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT); Germany's Federal Ministry of Education and Research; and the Ministry of Science and Technology in the People's Republic of China. Russia's innovation programme is the Medvedev modernisation programme which aims to create a diversified economy based on high technology and innovation. The Government of Western Australia has established a number of innovation incentives for government departments. Landgate was the first Western Australian government agency to establish its Innovation Program.
Some regions have taken a proactive role in supporting innovation. Many regional governments are setting up innovation agencies to strengthen regional capabilities. Business incubators were first introduced in 1959 and subsequently nurtured by governments around the world. Such "incubators", located close to knowledge clusters (mostly research-based) such as universities or other government excellence centres, aim primarily to channel generated knowledge to applied innovation outcomes in order to stimulate regional or national economic growth.
In 2009, the municipality of Medellin, Colombia created Ruta N to transform the city into a knowledge city.
Counter-hegemonic views on innovation
Innovation in the prevailing hegemonic view today mostly refers to 'innovation under capital', due to the prevailing capitalist nature of the global economy. In contrast, Robra et al. (2023) propose a counter-hegemonic view on innovation. This alternative lens revises the centrality of capital accumulation as the primary goal of innovation. Instead of being solely driven by profit motives, a counter-hegemonic understanding sees innovation as a means to create user-value, with a focus on satisfying societal needs. This view on innovation is underpinned by open access to knowledge, adaptability, repairability, and maintenance of products as well as Eco-sufficiency, defining progress not by efficiency but by staying within planetary boundaries, thereby challenging the hegemonic belief in limitless growth. This perspective is exemplified by commons-based peer production (CBPP), offering an alternative vision of innovation that prioritizes conviviality over relentless competition. In essence, this counter-hegemonic view describes a more socially and ecologically conscious approach to innovation, striving for a balance between technological progress and human wellbeing.
See also
Communities of innovation
Creative problem solving
Diffusion (anthropology)
Ecoinnovation
Hype cycle
Induced innovation
Information revolution
Innovation leadership
Innovation system
International Association of Innovation Professionals
ISO 56000
Knowledge economy
Obsolescence
Open Innovation
Open Innovations (Forum and Technology Show)
Outcome-Driven Innovation
Participatory design
Product innovation
Pro-innovation bias
Sustainable Development Goals (Agenda 9)
Technology Life Cycle
Technological innovation system
Theories of technology
Timeline of historic inventions
Toolkits for User Innovation
UNDP Innovation Facility
User Innovation
Virtual product development
References
Further reading
Bloom, Nicholas, Charles I. Jones, John Van Reenen, and Michael Webb. 2020. "Are Ideas Getting Harder to Find?", American Economic Review, 110 (4): 1104–44.
Śledzik, K., Szmelter-Jarosz, A., Schmidt, E. K., Bielawski, K., & Declich, A. (2023). Are Schumpeter’s Innovations Responsible? A Reflection on the Concept of Responsible (Research and) Innovation from a Neo-Schumpeterian Perspective. Journal of the Knowledge Economy, 14(4), 5065-5085.
Design
Innovation economics
Product management
Science and technology studies | Innovation | [
"Technology",
"Engineering"
] | 7,593 | [
"Design",
"Science and technology studies"
] |
118,570 | https://en.wikipedia.org/wiki/Magnetar | A magnetar is a type of neutron star with an extremely powerful magnetic field (~10^9 to 10^11 T, ~10^13 to 10^15 G). The magnetic-field decay powers the emission of high-energy electromagnetic radiation, particularly X-rays and gamma rays.
The existence of magnetars was proposed in 1992 by Robert Duncan and Christopher Thompson. Their proposal sought to explain the properties of transient sources of gamma rays, now known as soft gamma repeaters (SGRs). Over the following decade, the magnetar hypothesis became widely accepted, and was extended to explain anomalous X-ray pulsars (AXPs). To date, 24 magnetars have been confirmed.
It has been suggested that magnetars are the source of fast radio bursts (FRB), in particular as a result of findings in 2020 by scientists using the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope.
Description
Like other neutron stars, magnetars are around 20 kilometres in diameter, and have a mass of about 1.4 solar masses. They are formed by the collapse of a star with a mass 10–25 times that of the Sun. The density of the interior of a magnetar is such that a tablespoon of its substance would have a mass of over 100 million tons. Magnetars are differentiated from other neutron stars by having even stronger magnetic fields, and by rotating more slowly in comparison. Most observed magnetars rotate once every two to ten seconds, whereas typical neutron stars, observed as radio pulsars, rotate one to ten times per second. A magnetar's magnetic field gives rise to very strong and characteristic bursts of X-rays and gamma rays. The active life of a magnetar is short compared to other celestial bodies. Their strong magnetic fields decay after about 10,000 years, after which activity and strong X-ray emission cease. Given the number of magnetars observable today, one estimate puts the number of inactive magnetars in the Milky Way at 30 million or more.
Starquakes triggered on the surface of the magnetar disturb the magnetic field which encompasses it, often leading to extremely powerful gamma-ray flare emissions which have been recorded on Earth in 1979, 1998 and 2004.
Magnetic field
Magnetars are characterized by their extremely powerful magnetic fields of ~10^9 to 10^11 T. These magnetic fields are a hundred million times stronger than any man-made magnet, and about a trillion times more powerful than the field surrounding Earth. Earth has a geomagnetic field of 30–60 microteslas, and a neodymium-based, rare-earth magnet has a field of about 1.25 teslas, with a magnetic energy density of 4.0 × 10^5 J/m^3. A magnetar's 10^10 tesla field, by contrast, has an energy density of about 4 × 10^25 J/m^3, with an E/c^2 mass density more than 10,000 times that of lead. The magnetic field of a magnetar would be lethal even at a distance of 1,000 km, due to the strong magnetic field distorting the electron clouds of the subject's constituent atoms, rendering the chemistry of sustaining life impossible. At a distance halfway between the Earth and the Moon (whose average separation is about 384,400 km), a magnetar could wipe information from the magnetic stripes of all credit cards on Earth. They are the most powerful magnetic objects detected throughout the universe.
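These figures can be checked from the standard expression for magnetic energy density, u = B²/(2μ₀), together with mass–energy equivalence. The short Python sketch below reproduces the order of magnitude quoted above; the lead density used for comparison is the usual handbook value.

```python
import math

MU_0 = 4 * math.pi * 1e-7      # vacuum permeability (T·m/A)
C = 2.998e8                    # speed of light (m/s)
RHO_LEAD = 11_340              # density of lead (kg/m^3)

B = 1e10                       # magnetar-strength field (T)
u = B**2 / (2 * MU_0)          # magnetic energy density (J/m^3)
rho_eq = u / C**2              # equivalent E/c^2 mass density (kg/m^3)

print(f"energy density ~ {u:.1e} J/m^3")                        # ~4.0e25 J/m^3
print(f"mass density   ~ {rho_eq:.1e} kg/m^3, ~{rho_eq / RHO_LEAD:.0f}x lead")
```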
As described in the February 2003 Scientific American cover story, remarkable things happen within a magnetic field of magnetar strength. "X-ray photons readily split in two or merge. The vacuum itself is polarized, becoming strongly birefringent, like a calcite crystal. Atoms are deformed into long cylinders thinner than the quantum-relativistic de Broglie wavelength of an electron." In a field of about 10^5 teslas, atomic orbitals deform into rod shapes. At 10^10 teslas, a hydrogen atom becomes 200 times as narrow as its normal diameter.
Origins of magnetic fields
The dominant model of the strong fields of magnetars is that it results from a magnetohydrodynamic dynamo process in the turbulent, extremely dense conducting fluid that exists before the neutron star settles into its equilibrium configuration. These fields then persist due to persistent currents in a proton-superconductor phase of matter that exists at an intermediate depth within the neutron star (where neutrons predominate by mass). A similar magnetohydrodynamic dynamo process produces even more intense transient fields during coalescence of pairs of neutron stars. An alternative model is that they simply result from the collapse of stars with unusually strong magnetic fields.
Formation
In a supernova, a star collapses to a neutron star, and its magnetic field increases dramatically in strength through conservation of magnetic flux. Halving a linear dimension increases the magnetic field strength fourfold. Duncan and Thompson calculated that when the spin, temperature and magnetic field of a newly formed neutron star fall into the right ranges, a dynamo mechanism could act, converting heat and rotational energy into magnetic energy and increasing the magnetic field, normally an already enormous 10^8 teslas, to more than 10^11 teslas (or 10^15 gauss). The result is a magnetar. It is estimated that about one in ten supernova explosions results in a magnetar rather than a more standard neutron star or pulsar.
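Under flux conservation the field scales with the inverse square of the radius. The sketch below illustrates this with purely nominal numbers for the progenitor core and seed field; it is not a model of any particular supernova.

```python
def collapsed_field(b_initial_t: float, r_initial_km: float, r_final_km: float) -> float:
    """Flux conservation: B * R^2 is constant, so B grows as (R_initial / R_final)^2."""
    return b_initial_t * (r_initial_km / r_final_km) ** 2

# Nominal values: a ~10,000 km stellar core with a 100 T field collapsing to ~10 km.
print(f"{collapsed_field(100.0, 10_000.0, 10.0):.1e} T")   # -> 1.0e+08 T
```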
1979 discovery
On March 5, 1979, a few months after the successful dropping of landers into the atmosphere of Venus, the two uncrewed Soviet spaceprobes Venera 11 and 12, then in heliocentric orbit, were hit by a blast of gamma radiation at approximately 10:51 EST. This contact raised the radiation readings on both the probes from a normal 100 counts per second to over 200,000 counts a second in only a fraction of a millisecond.
Eleven seconds later, Helios 2, a NASA probe, itself in orbit around the Sun, was saturated by the blast of radiation. It soon hit Venus, where the Pioneer Venus Orbiter's detectors were overcome by the wave. Shortly thereafter the gamma rays inundated the detectors of three U.S. Department of Defense Vela satellites, the Soviet Prognoz 7 satellite, and the Einstein Observatory, all orbiting Earth. Before exiting the solar system the radiation was detected by the International Sun–Earth Explorer in halo orbit.
This was the strongest wave of extra-solar gamma rays ever detected at over 100 times as intense as any previously known burst. Given the speed of light and its detection by several widely dispersed spacecraft, the source of the gamma radiation could be triangulated to within an accuracy of approximately 2 arcseconds. The direction of the source corresponded with the remnants of a star that had gone supernova around 3000 BCE. It was in the Large Magellanic Cloud and the source was named SGR 0525-66; the event itself was named GRB 790305b, the first-observed SGR megaflare.
Recent discoveries
On February 21, 2008, it was announced that NASA and researchers at McGill University had discovered a neutron star with the properties of a radio pulsar which emitted some magnetically powered bursts, like a magnetar. This suggests that magnetars are not merely a rare type of pulsar but may be a (possibly reversible) phase in the lives of some pulsars. On September 24, 2008, ESO announced what it ascertained was the first optically active magnetar-candidate yet discovered, using ESO's Very Large Telescope. The newly discovered object was designated SWIFT J195509+261406. On September 1, 2014, ESA released news of a magnetar close to supernova remnant Kesteven 79. Astronomers from Europe and China discovered this magnetar, named 3XMM J185246.6+003317, in 2013 by looking at images that had been taken in 2008 and 2009. In 2013, a magnetar PSR J1745−2900 was discovered, which orbits the black hole in the Sagittarius A* system. This object provides a valuable tool for studying the ionized interstellar medium toward the Galactic Center. In 2018, the temporary result of the merger of two neutron stars was determined to be a hypermassive magnetar, which shortly collapsed into a black hole.
In April 2020, a possible link between fast radio bursts (FRBs) and magnetars was suggested, based on observations of SGR 1935+2154, a likely magnetar located in the Milky Way galaxy.
Known magnetars
To date, 24 magnetars are known, with six more candidates awaiting confirmation. A full listing is given in the McGill SGR/AXP Online Catalog. Examples of known magnetars include:
SGR 0525−66, in the Large Magellanic Cloud, located about 163,000 light-years from Earth, the first found (in 1979)
SGR 1806−20, located 50,000 light-years from Earth on the far side of the Milky Way in the constellation of Sagittarius and the most magnetized object known.
SGR 1900+14, located 20,000 light-years away in the constellation Aquila. After a long period of low emissions (significant bursts only in 1979 and 1993) it became active in May–August 1998, and a burst detected on August 27, 1998, was of sufficient power to force NEAR Shoemaker to shut down to prevent damage and to saturate instruments on BeppoSAX, WIND and RXTE. On May 29, 2008, NASA's Spitzer Space Telescope discovered a ring of matter around this magnetar. It is thought that this ring formed in the 1998 burst.
SGR 0501+4516 was discovered on 22 August 2008.
1E 1048.1−5937, located 9,000 light-years away in the constellation Carina. The original star, from which the magnetar formed, had a mass 30 to 40 times that of the Sun.
ESO has reported identification of an object which it initially identified as a magnetar, SWIFT J195509+261406, originally detected through a gamma-ray burst (GRB 070610).
CXO J164710.2-455216, located in the massive galactic cluster Westerlund 1, which formed from a star with a mass in excess of 40 solar masses.
Swift J1822.3−1606, discovered on 14 July 2011 by Italian and Spanish researchers of CSIC at Madrid and Catalonia. This magnetar, contrary to predictions, has a low external magnetic field, and it might be as young as half a million years.
3XMM J185246.6+003317, discovered by international team of astronomers, looking at data from ESA's XMM-Newton X-ray telescope.
SGR 1935+2154, emitted a pair of luminous radio bursts on 28 April 2020. There was speculation that these may be galactic examples of fast radio bursts.
Swift J1818.0-1607, X-ray burst detected March 2020, is one of five known magnetars that are also radio pulsars. At the time of its discovery it may have been only about 240 years old.
Bright supernovae
Unusually bright supernovae are thought to result from the death of very large stars as pair-instability supernovae (or pulsational pair-instability supernovae). However, recent research by astronomers has postulated that energy released from newly formed magnetars into the surrounding supernova remnants may be responsible for some of the brightest supernovae, such as SN 2005ap and SN 2008es.
References
Specific
Books and literature
General
External links
McGill Online Magnetar Catalog McGill Online Magnetar Catalog -- Main Table
Star types
Stellar phenomena | Magnetar | [
"Physics",
"Astronomy"
] | 2,406 | [
"Physical phenomena",
"Magnetars",
"Astronomical classification systems",
"Magnetism in astronomy",
"Stellar phenomena",
"Star types"
] |
3,511,408 | https://en.wikipedia.org/wiki/%C5%98e%C5%BE | Řež () is a village and administrative part of Husinec in the Central Bohemian Region of the Czech Republic.
Řež is the site of a nuclear research centre and a chemical factory. In August 2002 there was a serious flood which damaged the site.
Řež has a railway connection on the Prague–Kralupy nad Vltavou line. The stop is located on the opposite (left) bank of the Vltava River and is accessible by a pedestrian bridge.
On 19 June 2022 the highest ever temperature during the month of June in the Czech Republic was recorded here at 39.0 °C.
Further reading
1995. 40 Years on: Rez Institute Underpins Czech Programme. "Nuclear Engineering International". no. 491: 46.
References
External links
Official website of Husinec
Villages in Prague-East District | Řež | [
"Physics"
] | 170 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
3,512,034 | https://en.wikipedia.org/wiki/Angiogenin | Angiogenin (ANG) also known as ribonuclease 5 is a small 123 amino acid protein that in humans is encoded by the ANG gene. Angiogenin is a potent stimulator of new blood vessels through the process of angiogenesis. Ang hydrolyzes cellular RNA, resulting in modulated levels of protein synthesis and interacts with DNA causing a promoter-like increase in the expression of rRNA. Ang is associated with cancer and neurological disease through angiogenesis and through activating gene expression that suppresses apoptosis.
Function
Angiogenin is a key protein implicated in angiogenesis in normal and tumor growth. Angiogenin interacts with endothelial and smooth muscle cells resulting in cell migration, invasion, proliferation and formation of tubular structures. Ang binds to actin of both smooth muscle and endothelial cells to form complexes that activate proteolytic cascades which upregulate the production of proteases and plasmin that degrade the laminin and fibronectin layers of the basement membrane. Degradation of the basement membrane and extracellular matrix allows the endothelial cells to penetrate and migrate into the perivascular tissue. Signal transduction pathways activated by Ang interactions at the cellular membrane of endothelial cells produce extracellular signal-related kinase1/2 (ERK1/2) and protein kinase B/Akt. Activation of these proteins leads to invasion of the basement membrane and cell proliferation associated with further angiogenesis. The most important step in the angiogenesis process is the translocation of Ang to the cell nucleus. Once Ang has been translocated to the nucleus, it enhances rRNA transcription by binding to the CT-rich (CTCTCTCTCTCTCTCTCCCTC) angiogenin binding element (ABE) within the upstream intergenic region of rDNA, which subsequently activates other angiogenic factors that induce angiogenesis.
However, angiogenin is unique among the many proteins that are involved in angiogenesis in that it is also an enzyme with an amino acid sequence 33% identical to that of bovine pancreatic ribonuclease (RNase A). Ang has the same general catalytic properties as RNase A: it cleaves preferentially on the 3' side of pyrimidines and follows a transphosphorylation/hydrolysis mechanism. Although angiogenin contains many of the same catalytic residues as RNase A, it cleaves standard RNA substrates 10^5–10^6 times less efficiently than RNase A. The reason for this inefficiency is that residue 117 is a glutamine that blocks the catalytic site; removal of this residue through mutation increases the ribonuclease activity 11- to 30-fold. Despite this apparently weak activity, the enzymatic activity of Ang appears to be essential for its biological activity: replacements of important catalytic-site residues (histidine-13 and histidine-114) diminish the ribonuclease activity toward tRNA by about 10,000-fold and almost completely abolish the angiogenic activity.
Disease
Cancer
Ang has a prominent role in the pathology of cancer due to its functions in angiogenesis and cell survival. Since Ang possesses angiogenic activity, it makes Ang a possible candidate in therapeutic treatments of cancer. Studies of Ang and tumor relationships provide evidence for a connection between the two. The translocation of Ang to the nucleus causes an upregulation of transcriptional rRNA, while knockdown strains of Ang cause downregulation. The presence of Ang inhibitors that block translocation resulted in a decrease of tumor growth and overall angiogenesis. HeLa cells translocate Ang to the nucleus independent of cell density. In human umbilical vein endothelial cells (HUVECs), translocation of Ang to the nucleus stops after cells reach a specific density, while in HeLa cells translocation continued past that point. Inhibition of Ang affects the ability of HeLa cells to proliferate, which proposes an effective target for possible therapies.
Neurodegenerative diseases
Due to the ability of Ang to protect motoneurons (MNs), causal links between Ang mutations and amyotrophic lateral sclerosis (ALS) are likely. The angiogenic factors associated with Ang may protect the central nervous system and MNs directly. Experiments with wild-type Ang found that it slows MN degeneration in mice that had developed ALS, providing evidence for further development of Ang protein therapy in ALS treatment. Angiogenin expression in Parkinson's disease is dramatically decreased in the presence of alpha-synuclein (α-syn) aggregations. Exogenous angiogenin applied to dopamine-producing cells leads to phosphorylation of PKB/Akt; activation of this pathway inhibits cleavage of caspase 3 and apoptosis when cells are exposed to a Parkinson's-like inducing substance.
Gene
Alternative splicing results in two transcript variants encoding the same protein. This gene and the gene that encodes ribonuclease 4 (RNase A family member 4) share promoters and 5' exons. Each gene splices to a unique downstream exon that contains its complete coding region.
References
Further reading
External links
Biomolecules
EC 3.1.27 | Angiogenin | [
"Chemistry",
"Biology"
] | 1,119 | [
"Natural products",
"Organic compounds",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Molecular biology"
] |
3,512,103 | https://en.wikipedia.org/wiki/Theory%20%28mathematical%20logic%29 | In mathematical logic, a theory (also called a formal theory) is a set of sentences in a formal language. In most scenarios a deductive system is first understood from context, after which an element of a deductively closed theory T is then called a theorem of the theory. In many deductive systems there is usually a subset of T that is called "the set of axioms" of the theory T, in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.
General theories (as expressed in formal language)
When defining theories for foundational purposes, additional care must be taken, as normal set-theoretic language may not be appropriate.
The construction of a theory begins by specifying a definite non-empty conceptual class E, the elements of which are called statements. These initial statements are often called the primitive elements or elementary statements of the theory—to distinguish them from other statements that may be derived from them.
A theory T is a conceptual class consisting of certain of these elementary statements. The elementary statements that belong to T are called the elementary theorems of T and are said to be true. In this way, a theory can be seen as a way of designating a subset of E that contains only statements that are true.
This general way of designating a theory stipulates that the truth of any of its elementary statements is not known without reference to T. Thus the same elementary statement may be true with respect to one theory but false with respect to another. This is reminiscent of the case in ordinary language where statements such as "He is an honest person" cannot be judged true or false without interpreting who "he" is, and, for that matter, what an "honest person" is under this theory.
Subtheories and extensions
A theory S is a subtheory of a theory T if S is a subset of T. If T is a subset of S, then S is called an extension or a supertheory of T.
Deductive theories
A theory T is said to be a deductive theory if T is an inductive class, which is to say that its content is based on some formal deductive system and that some of its elementary statements are taken as axioms. In a deductive theory, any sentence that is a logical consequence of one or more of the axioms is also a sentence of that theory. More formally, if ⊢ is a Tarski-style consequence relation, then T is closed under ⊢ (and so each of its theorems is a logical consequence of its axioms) if and only if, for all sentences φ in the language of the theory T, if T ⊢ φ, then φ ∈ T; or, equivalently, if T′ is a finite subset of T (possibly the set of axioms of T in the case of finitely axiomatizable theories) and T′ ⊢ φ, then φ ∈ T, and therefore T ⊢ φ.
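The notion of "closure under a consequence relation" can be illustrated with a deliberately tiny sketch (not from the article): a toy propositional system whose only inference rule is modus ponens, with implications encoded as explicit (premise, conclusion) pairs rather than as formulas of a full logic.

```python
# Minimal sketch: compute the smallest set containing the axioms and closed under
# modus ponens. Formulas are plain strings; an implication "p -> q" is encoded as
# the pair ("p", "q"). This is only a toy illustration of deductive closure.

def deductive_closure(axioms, implications):
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in theorems and conclusion not in theorems:
                theorems.add(conclusion)  # the theory proves `conclusion`, so include it
                changed = True
    return theorems

# Example: from axiom {p} and the implications p -> q and q -> r,
# the closed theory contains p, q and r.
print(deductive_closure({"p"}, [("p", "q"), ("q", "r")]))  # {'p', 'q', 'r'}
```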
Consistency and completeness
A syntactically consistent theory is a theory from which not every sentence in the underlying language can be proven (with respect to some deductive system, which is usually clear from context). In a deductive system (such as first-order logic) that satisfies the principle of explosion, this is equivalent to requiring that there is no sentence φ such that both φ and its negation can be proven from the theory.
A satisfiable theory is a theory that has a model. This means there is a structure M that satisfies every sentence in the theory. Any satisfiable theory is syntactically consistent, because the structure satisfying the theory will satisfy exactly one of φ and the negation of φ, for each sentence φ.
A consistent theory is sometimes defined to be a syntactically consistent theory, and sometimes defined to be a satisfiable theory. For first-order logic, the most important case, it follows from the completeness theorem that the two meanings coincide. In other logics, such as second-order logic, there are syntactically consistent theories that are not satisfiable, such as ω-inconsistent theories.
A complete consistent theory (or just a complete theory) is a consistent theory T such that for every sentence φ in its language, either φ is provable from T or T ∪ {φ} is inconsistent. For theories closed under logical consequence, this means that for every sentence φ, either φ or its negation is contained in the theory. An incomplete theory is a consistent theory that is not complete.
(see also ω-consistent theory for a stronger notion of consistency.)
Interpretation of a theory
An interpretation of a theory is the relationship between a theory and some subject matter when there is a many-to-one correspondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent it is called a full interpretation, otherwise it is called a partial interpretation.
Theories associated with a structure
Each structure has several associated theories. The complete theory of a structure A is the set of all first-order sentences over the signature of A that are satisfied by A. It is denoted by Th(A). More generally, the theory of K, a class of σ-structures, is the set of all first-order σ-sentences that are satisfied by all structures in K, and is denoted by Th(K). Clearly Th(A) = Th({A}). These notions can also be defined with respect to other logics.
For each σ-structure A, there are several associated theories in a larger signature σ' that extends σ by adding one new constant symbol for each element of the domain of A. (If the new constant symbols are identified with the elements of A that they represent, σ' can be taken to be σ ∪ A.) The cardinality of σ' is thus the larger of the cardinality of σ and the cardinality of A.
The diagram of A consists of all atomic or negated atomic σ'-sentences that are satisfied by A and is denoted by diagA. The positive diagram of A is the set of all atomic σ'-sentences that A satisfies. It is denoted by diag+A. The elementary diagram of A is the set eldiagA of all first-order σ'-sentences that are satisfied by A or, equivalently, the complete (first-order) theory of the natural expansion of A to the signature σ'.
First-order theories
A first-order theory is a set of sentences in a first-order formal language.
Derivation in a first-order theory
There are many formal derivation ("proof") systems for first-order logic. These include Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method and resolution.
Syntactic consequence in a first-order theory
A formula A is a syntactic consequence of a first-order theory T if there is a derivation of A using only formulas in T as non-logical axioms. Such a formula A is also called a theorem of T. The notation "T ⊢ A" indicates that A is a theorem of T.
Interpretation of a first-order theory
An interpretation of a first-order theory provides a semantics for the formulas of the theory. An interpretation is said to satisfy a formula if the formula is true according to the interpretation. A model of a first-order theory T is an interpretation in which every formula of T is satisfied.
First-order theories with identity
A first-order theory T is a first-order theory with identity if T includes the identity relation symbol "=" and the reflexivity and substitution axiom schemes for this symbol.
Topics related to first-order theories
Compactness theorem
Consistent set
Deduction theorem
Enumeration theorem
Lindenbaum's lemma
Löwenheim–Skolem theorem
Examples
One way to specify a theory is to define a set of axioms in a particular language. The theory can be taken to include just those axioms, or their logical or provable consequences, as desired. Theories obtained this way include ZFC and Peano arithmetic.
A second way to specify a theory is to begin with a structure, and let the theory be the set of sentences that are satisfied by the structure. This is a method for producing complete theories through the semantic route, with examples including the set of true sentences under the structure (N, +, ×, 0, 1, =), where N is the set of natural numbers, and the set of true sentences under the structure (R, +, ×, 0, 1, =), where R is the set of real numbers. The first of these, called the theory of true arithmetic, cannot be written as the set of logical consequences of any enumerable set of axioms.
The theory of (R, +, ×, 0, 1, =) was shown by Tarski to be decidable; it is the theory of real closed fields (see Decidability of first-order theories of the real numbers for more).
See also
Axiomatic system
Interpretability
List of first-order theories
Mathematical theory
References
Further reading
Logical expressions
fr:Théorie axiomatique | Theory (mathematical logic) | [
"Mathematics"
] | 1,873 | [
"Mathematical logic",
"Logical expressions"
] |
3,514,267 | https://en.wikipedia.org/wiki/Zinc%20telluride | Zinc telluride is a binary chemical compound with the formula ZnTe. This solid is a semiconductor material with a direct band gap of 2.26 eV. It is usually a p-type semiconductor. Its crystal structure is cubic, like that for sphalerite and diamond.
Properties
ZnTe has the appearance of grey or brownish-red powder, or ruby-red crystals when refined by sublimation. Zinc telluride typically has a cubic (sphalerite, or "zincblende") crystal structure, but can also be prepared as rocksalt crystals or in hexagonal crystals (wurtzite structure). When irradiated by a strong optical beam, it burns in the presence of oxygen. Its lattice constant is 0.6101 nm, allowing it to be grown with or on aluminium antimonide, gallium antimonide, indium arsenide, and lead selenide. With some lattice mismatch, it can also be grown on other substrates such as GaAs, and it can be grown in thin-film polycrystalline (or nanocrystalline) form on substrates such as glass, for example, in the manufacture of thin-film solar cells. In the wurtzite (hexagonal) crystal structure, it has lattice parameters a = 0.427 nm and c = 0.699 nm.
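A quick way to see why ZnTe pairs well with some substrates and not others is the relative lattice mismatch. The sketch below uses the article's ZnTe lattice constant; the substrate constants are assumed textbook values and are included only for illustration.

```python
# Minimal sketch: relative lattice mismatch f = (a_film - a_substrate) / a_substrate
# for ZnTe grown on a few common substrates. Substrate constants are assumed values.
A_ZNTE = 0.6101   # nm, from the article
SUBSTRATES = {
    "GaSb": 0.6096,  # nm, assumed
    "InAs": 0.6058,  # nm, assumed
    "GaAs": 0.5653,  # nm, assumed
}

for name, a_sub in SUBSTRATES.items():
    mismatch = (A_ZNTE - a_sub) / a_sub
    print(f"ZnTe on {name}: mismatch ≈ {mismatch:+.2%}")
```

With these assumed numbers the mismatch to GaSb and InAs is well under 1%, while growth on GaAs carries a mismatch of several percent, consistent with the article's "some lattice mismatch" remark.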
Applications
Optoelectronics
Zinc telluride can be easily doped, and for this reason it is one of the more common semiconducting materials used in optoelectronics. ZnTe is important for development of various semiconductor devices, including blue LEDs, laser diodes, solar cells, and components of microwave generators. It can be used for solar cells, for example, as a back-surface field layer and p-type semiconductor material for a CdTe/ZnTe structure or in PIN diode structures.
The material can also be used as a component of ternary semiconductor compounds, such as CdxZn(1-x)Te (conceptually a mixture composed from the end-members ZnTe and CdTe), which can be made with a varying composition x to allow the optical bandgap to be tuned as desired.
Nonlinear optics
Zinc telluride, together with lithium niobate, is often used for the generation of pulsed terahertz radiation in time-domain terahertz spectroscopy and terahertz imaging. When a crystal of such material is subjected to a high-intensity light pulse of subpicosecond duration, it emits a pulse of terahertz frequency through a nonlinear optical process called optical rectification. Conversely, subjecting a zinc telluride crystal to terahertz radiation causes it to show optical birefringence and change the polarization of transmitted light, making it an electro-optic detector.
Vanadium-doped zinc telluride, "ZnTe:V", is a non-linear optical photorefractive material of possible use in the protection of sensors at visible wavelengths. ZnTe:V optical limiters are light and compact, without the complicated optics of conventional limiters. ZnTe:V can block a high-intensity jamming beam from a laser dazzler, while still passing the lower-intensity image of the observed scene. It can also be used in holographic interferometry, in reconfigurable optical interconnections, and in laser optical phase conjugation devices. It offers superior photorefractive performance at wavelengths between 600 and 1300 nm, in comparison with other III-V and II-VI compound semiconductors. By adding manganese as an additional dopant (ZnTe:V:Mn), its photorefractive yield can be significantly increased.
References
External links
National Compound Semiconductor Roadmap (Office of Naval research) – Accessed April 2006
Tellurides
telluride
II-VI semiconductors
Terahertz technology
Nonlinear optical materials
Zincblende crystal structure | Zinc telluride | [
"Physics",
"Chemistry"
] | 821 | [
"Inorganic compounds",
"Spectrum (physical sciences)",
"Semiconductor materials",
"Electromagnetic spectrum",
"II-VI semiconductors",
"Terahertz technology"
] |
3,514,565 | https://en.wikipedia.org/wiki/Lead%20shielding | Lead shielding refers to the use of lead as a form of radiation protection to shield people or objects from radiation so as to reduce the effective dose. Lead can effectively attenuate certain kinds of radiation because of its high density and high atomic number; principally, it is effective at stopping gamma rays and x-rays.
Operation
Lead's high density is caused by the combination of its high atomic number and its relatively short bond lengths and small atomic radius. The high atomic number means that more electrons are needed to maintain a neutral charge, and the short bond length and small atomic radius mean that many atoms can be packed into a particular lead structure.
Because of lead's density and large number of electrons, it is well suited to scattering x-rays and gamma rays. These rays consist of photons, a type of boson, which impart energy to electrons when they come into contact. Without a lead shield, the electrons within a person's body would be affected, which could damage their DNA. When the radiation attempts to pass through lead, its electrons absorb and scatter the energy. Eventually, though, the lead will degrade from the energy to which it is exposed. However, lead is not effective against all types of radiation. High energy electrons (including beta radiation) incident on lead may create bremsstrahlung radiation, which is potentially more dangerous to tissue than the original radiation. Furthermore, lead is not a particularly effective absorber of neutron radiation.
Types
Lead is used for shielding in x-ray machines, nuclear power plants, labs, medical facilities, military equipment, and other places where radiation may be encountered. There is great variety in the types of shielding available both to protect people and to shield equipment and experiments. In gamma-spectroscopy for example, lead castles are constructed to shield the probe from environmental radiation. Personal shielding includes lead aprons (such as the familiar garment used during dental x-rays), thyroid shields, and lead gloves. There are also a variety of shielding devices available for laboratory equipment, including lead castles, structures composed of lead bricks, and lead pigs, made of solid lead or lead-lined containers for storing and transporting radioactive samples. In many facilities where radiation is produced, regulations require construction with lead-lined plywood or drywall to protect adjoining rooms from scatter radiation.
Wear
A lead apron or leaded apron is a type of protective clothing that acts as a radiation shield. It is constructed of a thin rubber exterior and an interior of lead in the shape of a hospital apron. The purpose of the lead apron is to reduce x-ray exposure to a hospital patient's vital organs that are potentially exposed to ionizing radiation during medical imaging that uses x-rays (radiography, fluoroscopy, computed tomography).
Protection of the reproductive organs with a lead rubber apron is considered important because DNA changes to sperm or egg cells of the patient may pass on genetic defects to the offspring of the patient, causing serious and unnecessary hardship for child and parents.
The thyroid gland is especially vulnerable to x-ray exposure. Care should be taken to place a lead apron over the thyroid gland before taking dental radiographs. Aprons used for dental imaging should include thyroid collars. However, in poorer or loosely regulated countries, possibly because of the cost of such equipment (approx. 40 USD), patients themselves are given no such lead protection, though the operators do leave the x-ray room for their own safety.
The correct thickness of lead-equivalent (Pbeq) wear will depend on how long and how often the person is working in an exposed environment. The minimum requirement is to wear 0.25 mm Pbeq when not behind lead shielding. In a theatre using fluoroscopy (e.g. orthopaedics, cardiology or interventional radiology), 0.35 or 0.5 mm lead may be appropriate because of the higher kV employed and the proximity to the primary beam.
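The effect of a given lead-equivalent thickness can be estimated with the narrow-beam exponential attenuation law. The sketch below is illustrative only: the mass attenuation coefficient is an assumed order-of-magnitude value near 100 keV rather than a figure from the article, and real apron ratings also depend on the beam spectrum and scatter geometry.

```python
# Minimal sketch (assumed coefficients): narrow-beam attenuation I = I0 * exp(-mu * x)
# for photons in lead, evaluated at common lead-equivalent apron thicknesses.
import math

DENSITY_PB = 11.34           # g/cm^3
MU_OVER_RHO_100KEV = 5.5     # cm^2/g, assumed illustrative value near 100 keV

def transmitted_fraction(thickness_mm: float) -> float:
    mu = MU_OVER_RHO_100KEV * DENSITY_PB   # linear attenuation coefficient, 1/cm
    x_cm = thickness_mm / 10.0
    return math.exp(-mu * x_cm)

for t in (0.25, 0.35, 0.5):                # common Pbeq apron thicknesses, mm
    print(f"{t} mm Pb-equivalent -> {transmitted_fraction(t):.1%} transmitted")
```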
See also
Instruments used in radiology
Radiation shielding
Nuclear safety
ALARA
Fallout shelter
Demron
Stopping power
References
External links
lead protection from radiation
Protective gear
Radiation protection
Nuclear physics
Nuclear safety and security
Radiobiology
Lead
Medical equipment | Lead shielding | [
"Physics",
"Chemistry",
"Biology"
] | 846 | [
"Radiobiology",
"Medical equipment",
"Nuclear physics",
"Radioactivity",
"Medical technology"
] |
3,514,745 | https://en.wikipedia.org/wiki/Gallium%20phosphide | Gallium phosphide (GaP), a phosphide of gallium, is a compound semiconductor material with an indirect band gap of 2.24eV at room temperature. Impure polycrystalline material has the appearance of pale orange or grayish pieces. Undoped single crystals are orange, but strongly doped wafers appear darker due to free-carrier absorption. It is odorless and insoluble in water.
GaP has a microhardness of 9450 N/mm2, a Debye temperature of , and a thermal expansion coefficient of 5.3×10−6 K−1 at room temperature. Sulfur, silicon or tellurium are used as dopants to produce n-type semiconductors. Zinc is used as a dopant for the p-type semiconductor.
Gallium phosphide has applications in optical systems. Its static dielectric constant is 11.1 at room temperature. Its refractive index varies between ~3.2 and 5.0 across the visible range, which is higher than in most other semiconducting materials. In its transparent range, its index is higher than almost any other transparent material, including gemstones such as diamond, or non-oxide lenses such as zinc sulfide.
Light-emitting diodes
Gallium phosphide has been used in the manufacture of low-cost red, orange, and green light-emitting diodes (LEDs) with low to medium brightness since the 1960s. It is used standalone or together with gallium arsenide phosphide.
Pure GaP LEDs emit green light at a wavelength of 555 nm. Nitrogen-doped GaP emits yellow-green (565 nm) light, zinc oxide doped GaP emits red (700 nm).
Gallium phosphide is transparent for yellow and red light, therefore GaAsP-on-GaP LEDs are more efficient than GaAsP-on-GaAs.
Crystal growth
At temperatures above ~900 °C, gallium phosphide dissociates and the phosphorus escapes as a gas. In crystal growth from a 1500 °C melt (for LED wafers), this must be prevented by holding the phosphorus in with a blanket of molten boric oxide under an inert gas pressure of 10–100 atmospheres. The process is called liquid encapsulated Czochralski (LEC) growth, an elaboration of the Czochralski process used for silicon wafers.
References
Cited sources
External links
GaP. refractiveindex.info
Ioffe NSM data archive
III-V semiconductors
Optical materials
Gallium compounds
Phosphides
III-V compounds
Light-emitting diode materials
Zincblende crystal structure | Gallium phosphide | [
"Physics",
"Chemistry"
] | 558 | [
"Inorganic compounds",
"Semiconductor materials",
"Materials",
"Optical materials",
"III-V semiconductors",
"Light-emitting diode materials",
"III-V compounds",
"Matter"
] |
3,514,950 | https://en.wikipedia.org/wiki/Aluminium%20gallium%20nitride | Aluminium gallium nitride (AlGaN) is a semiconductor material. It is any alloy of aluminium nitride and gallium nitride.
The bandgap of AlxGa1−xN can be tailored from 3.4 eV (xAl = 0, pure GaN) to 6.2 eV (xAl = 1, pure AlN).
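The composition dependence is often approximated by interpolating between the end points with a quadratic "bowing" correction. In the sketch below the bowing parameter is an assumed, commonly quoted order of magnitude, not a value stated in the article.

```python
# Minimal sketch: Eg(x) = x*Eg_AlN + (1-x)*Eg_GaN - b*x*(1-x) for AlxGa1-xN.
# The bowing parameter b is assumed (~1 eV) purely for illustration.
EG_GAN = 3.4   # eV
EG_ALN = 6.2   # eV
BOWING = 1.0   # eV, assumed

def algan_bandgap(x: float) -> float:
    return x * EG_ALN + (1 - x) * EG_GAN - BOWING * x * (1 - x)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"xAl = {x:.2f}: Eg ≈ {algan_bandgap(x):.2f} eV")
```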
AlGaN is used to manufacture light-emitting diodes operating in blue to ultraviolet region, where wavelengths down to 250 nm (far UV) were achieved, and some reports down to 222 nm. It is also used in blue semiconductor lasers.
It is also used in detectors of ultraviolet radiation, and in AlGaN/GaN High-electron-mobility transistors.
AlGaN is often used together with gallium nitride or aluminium nitride, forming heterojunctions.
AlGaN layers are commonly grown on gallium nitride, sapphire, or (111) Si, almost always with additional GaN layers.
Safety and toxicity aspects
The toxicology of AlGaN has not been fully investigated. The AlGaN dust is an irritant to skin, eyes and lungs. The environment, health and safety aspects of aluminium gallium nitride sources (such as trimethylgallium and ammonia) and industrial hygiene monitoring studies of standard MOVPE sources have been reported recently in a review.
References
External links
Gallium nitride quantum dots and deep UV light emission. GaN in AlN
III-V semiconductors
Aluminium compounds
Gallium compounds
Nitrides
III-V compounds
Light-emitting diode materials | Aluminium gallium nitride | [
"Chemistry"
] | 318 | [
"Inorganic compounds",
"Semiconductor materials",
"III-V semiconductors",
"Light-emitting diode materials",
"III-V compounds"
] |
3,515,035 | https://en.wikipedia.org/wiki/Nabaztag | Nabaztag (Armenian for "hare", նապաստակ (napastak)) is a Wi-Fi enabled ambient electronic device in the shape of a rabbit, invented by Rafi Haladjian and Olivier Mével, and manufactured by the company Violet. Nabaztag was designed to be a "smart object" comparable to those manufactured by Ambient Devices; it can connect to the Internet (to download weather forecasts, read its owner's email, etc.). It is also customizable and programmable to an extent. Sylvain Huet developed most of the embedded code of all Violet objects. Sebastien Bourdeauducq developed the Wi-Fi driver. Antoine Schmitt has been their behavior designer and Jean-Jacques Birgé their sound designer (together they have also composed Nabaz'mob, an opera for 100 Nabaztag). Maÿlis Puyfaucher (who features its French voice) wrote all the original texts pronounced by the rabbit.
On 20 October 2009, following a long period of technical difficulties that ultimately led to Violet's bankruptcy, Mindscape purchased Violet.
In October 2010, Mindscape announced a third generation Nabaztag, called "Karotz". Karotz was released April 2012.
On 27 July 2011, Mindscape stopped the maintenance of the Nabaztag and released its source code.
On 23 December 2011, it was announced that the Nabaztag rabbits would be "coming back to life" on 24 December 2011 at midnight via email to those with Violet accounts. The required server service at Nabaztag.com has since stopped functioning, but other options exist including setting up a server using OpenNab software or installing NabaztagLives on a Raspberry Pi single-board computer.
Features
Out of the box, the Nabaztag rabbit is in height and weighs . It can send and receive MP3s and messages that are read out loud as well as perform the following services (by either speaking the information out loud or using indicative lights): weather forecast, stock market report, news headlines, alarm clock, e-mail alerts, RSS-Feeds, MP3-Streams and others.
There is an API, with bindings for multiple programming languages including Java, Perl, Python, and PHP, available to program the Nabaztag.
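The historical API was an HTTP interface keyed by the rabbit's serial number and an API token. The sketch below is purely illustrative: the base URL and parameter names are placeholders (the original Violet servers are gone, and community replacements such as OpenNab define their own endpoints), and the `requests` library is assumed to be installed.

```python
# Minimal, hypothetical sketch of sending a text-to-speech message to a rabbit via an
# HTTP API. The endpoint and parameter names below are placeholders, not a real API.
import requests

API_BASE = "http://example.invalid/api"   # placeholder; point at a self-hosted server

def send_text_to_rabbit(serial: str, token: str, message: str) -> int:
    params = {"sn": serial, "token": token, "tts": message}  # parameter names assumed
    response = requests.get(API_BASE, params=params, timeout=10)
    return response.status_code

# Usage (only works against a compatible, self-hosted replacement server):
# send_text_to_rabbit("0013D3xxxxxx", "1234567890", "Hello from Python")
```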
At first speaking only in English and French, as of June 2007, Nabaztag fully supports services in German, Italian, Spanish and Dutch.
The rabbits can be customized with Skinz tattoos, detachable USB tails in various colors, and many interchangeable ears with different designs and colors. The Nabaztag:tag and Karotz versions are capable of reading RFID chips, and products containing RFID chips include the Flatanoz key ring tags, miniature models of the rabbits called Nano:ztag, and some children's books. Individual RFID chips were also produced and were called ztamp:s.
Community
Nabaztag owners can join social networks to share photos and videos on websites like Flickr and YouTube. Users can create podcasts (dubbed "Nabcasts" by Violet). There are currently over 100 of these available, mostly in English and French, created by different users on a variety of topics.
Since Nabaztags can be programmed to provide new services using the API, there are user-created applications available, including a Dashboard widget and a lottery alarm.
Infrastructure shortcomings
In December 2006 (most notably around Christmas) a number of sold rabbits caused issues for Violet, the maker of Nabaztag. The Nabaztag device acts as a client to the France-based servers. When users attempted to register their new devices, the centralized servers were unable to handle the demand, resulting in service disruptions, server unavailability, and data integrity problems caused by users creating multiple half-finished registrations. This resulted in a major customer service problem for Violet. The fundamental philosophy of Nabaztag, that all objects should be connected together on the Internet by a server maintained by Violet, did not work as expected (e.g. the server sometimes could not cope with the volume of traffic, services had to be switched off, and response times were unreliable, often as slow as hours rather than seconds).
In March 2008, Violet changed their server infrastructure and bunny software to use the standard XMPP protocol. Bunnies were thereafter reacting much more rapidly on average, although long delays still occurred sometimes. The change caused service disruptions and problems for a couple of weeks.
Technical specifications
The device embeds a PIC18F6525 microcontroller, a BenQ PC card 802.11b Wi-Fi adapter, an ml2870a Audio-PCM sound generator, an ADPCM converter, two motors to activate the ears, a TLC5922 LED controller, and a small amount of memory.
The embedded software handles the TCP/IP stack and Wi-Fi driver. It also implements a virtual machine which is able to execute up to 64 kb of code. A dedicated assembly language exists to program the different features of the device.
Nabaztag/tag
Out on the market on 12 December 2006, Nabaztag/tag is an improved model of Violet's Nabaztag. The new model supports MP3 audio streaming for Internet radio (with preset radio stations and an app for adding your own stream, which does not work) and podcasts. This second-version Nabaztag also adds a microphone that allows for voice activation of some of its services. However, despite text on the website claiming that new services will be available soon, the number of working voice-activated services remains less than a handful. A final added feature is a built-in RFID reader to detect special-purpose RFID tags (i.e. ISO/IEC 14443 Type B). Nabaztag advertising presents this as the ability to identify objects (keys are depicted, for example).
Nabaztag/tag can, as of November 2007, use RFID tags to read special edition versions of children's books by the French publisher Gallimard Jeunesse. In October 2008, Violet launched RFID children's books with the Penguin publishing house. Further RFID services and support have been promised. Violet now sells the ztamp:s and Nano:ztags (miniature rabbits with ztamp:s inside them), as well as a device called mir:ror, which is its own RFID system separate from the Nabaztag.
The Wi-Fi was also upgraded to support WPA encryption, and now uses a cheaper SoftMAC card instead of the BenQ device which embedded its own 802.11 protocol stack.
Cessation of service
Mindscape, filing for bankruptcy, discontinued the Nabaztag/tag web service in late July 2011.
Karotz
Karotz is the third generation Nabaztag, and first to be released since the Mindscape purchase. Like its predecessors, Karotz connects to the Internet using Wi-Fi and has RFID reading capability. Additionally, it includes an integrated web cam, a USB port (which can be used for power as well as connectivity), and 256 MB of onboard storage. Karotz was released in April 2011 and is heavily integrated with Facebook and Twitter.
In October 2011, Mindscape was taken over by Aldebaran Robotics, which stated, "Together we shall go on with this wonderful adventure".
In October 2014, Aldebaran Robotics announced "The end of Karotz's adventures", explaining that "nearly 10 years after its first appearance, Karotz is facing a very strong technological competition: the connected devices are now 4G, mobile and evolutionary. Karotz and its users have not only helped establish connected devices; they have paved the way. New products make a stronger match to market needs, marking the end of Karotz's great story."
"Karotz's servers and customer service will be stopped on February 18th, 2015"
Karotz private servers
In 2016, two years after "The end of Karotz's adventures", a new English-enabled API was set up by a volunteer initiative called Free Rabbits! to bring Karotz out of retirement.
Security
During DEF CON 21, Daniel Crowley, Jennifer Savage and David Bryan pointed out several security vulnerabilities in the Karotz. Unsecured connections to the provider's website could allow a hacker to steal users' Wi-Fi passwords, take control of the Karotz, install malware, and even corrupt the device without any knowledge of the end user. During the same talk, they also showed how it was possible to spy through the Karotz, due to the presence of the camera and the microphone embedded in the bunny.
Awards
Violet received an honourable mention in the Small Companies category of the 2007 DME Award for Nabaztag.
Nabaztag/tag was named Netxplorateur of the Year in 2008.
Nabaz'mob received an Award of Distinction in the Digital Musics category of the 2009 Prix Ars Electronica.
See also
Internet of Things
Ubiquitous computing
Digital pet
References
External links
Music&Radio on your Nabaztag – Musics/Radios on Nabaztag
The Nabzone – Widgets and customisations for the Nabaztag
Nabaz'mob – Site about the "opera for 100 smart rabbits" event
karotz.com – Official site of Nabaztag successor
Virtual pets
Entertainment robots
Personal assistant robots
Robotic animals
2000s robots
Robots of France
Rabbits and hares in popular culture | Nabaztag | [
"Biology"
] | 1,982 | [
"Animals",
"Robotic animals"
] |
6,183,028 | https://en.wikipedia.org/wiki/Deoxycytidine%20triphosphate | Deoxycytidine triphosphate (dCTP) is a nucleoside triphosphate that contains the pyrimidine base cytosine. The triphosphate group contains high-energy phosphoanhydride bonds, which liberate energy when hydrolyzed.
DNA polymerase enzymes use this energy to incorporate deoxycytidine into a newly synthesized strand of DNA. A chemical equation can be written that represents the process:
(DNA)n + dCTP ↔ (DNA)n-C + PPi
That is, dCTP has the PPi (pyrophosphate) cleaved off and the dCMP is incorporated into the DNA strand at the 3' end.
Subsequent hydrolysis of the PPi drives the equilibrium of the reaction toward the right side, i.e. incorporation of the nucleotide in the growing DNA chain.
Like other nucleoside triphosphates, manufacturers recommend that dCTP be stored in aqueous solution at −20 °C.
See also
DNA replication
References
External links
Definitive Guide to dNTPs
Molecular biology
Nucleotides
Phosphate esters
Pyrimidones | Deoxycytidine triphosphate | [
"Chemistry",
"Biology"
] | 239 | [
"Biochemistry",
"Molecular biology"
] |
6,187,083 | https://en.wikipedia.org/wiki/Nucleon%20spin%20structure | Nucleon spin structure describes the partonic structure of nucleon (proton and neutron) intrinsic angular momentum (spin). The key question is how the nucleon's spin, whose magnitude is 1/2ħ, is carried by its constituent partons (quarks and gluons). It was originally expected before the 1980s that quarks carry all of the nucleon spin, but later experiments contradict this expectation. In the late 1980s, the European Muon Collaboration (EMC) conducted experiments that suggested the spin carried by quarks is not sufficient to account for the total spin of the nucleons. This finding astonished particle physicists at that time, and the problem of where the missing spin lies is sometimes referred to as the proton spin crisis.
Experimental research on these topics has been continued by the Spin Muon Collaboration (SMC) and the COMPASS experiment at CERN, experiments E142, E143, E154 and E155 at SLAC, HERMES at DESY, experiments at JLab and RHIC, and others. Global analysis of data from all major experiments confirmed the original EMC discovery and showed that the quark spin did contribute about 30% to the total spin of the nucleon. A major topic of modern particle physics is to find the missing angular momentum, which is believed to be carried either by gluon spin, or by gluon and quark orbital angular momentum. This fact is expressed by the sum rule 1/2 = (1/2)ΔΣ + ΔG + L_q + L_g, where ΔΣ is the quark spin contribution, ΔG the gluon spin contribution, and L_q and L_g the quark and gluon orbital angular momenta (in units of ħ).
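The sum rule can be turned into a simple budget. The sketch below uses the ~30% quark-spin contribution quoted in the text; the gluon-spin value is an assumed number chosen purely for illustration, not a measurement.

```python
# Minimal sketch (illustrative values): the nucleon spin sum rule
# 1/2 = (1/2)*dSigma + dG + L_q + L_g, solved for the orbital remainder.
D_SIGMA = 0.30   # quark spin contribution, from the ~30% quoted in the text
D_G     = 0.20   # gluon spin contribution, assumed for illustration only

orbital = 0.5 - 0.5 * D_SIGMA - D_G
print(f"Implied L_q + L_g = {orbital:.2f} (in units of hbar)")   # -> 0.15
```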
The gluon spin components are being measured by many experiments. Quark and gluon angular momenta will be studied by measuring so-called generalized parton distributions (GPD) through deeply virtual Compton scattering (DVCS) experiments, conducted at CERN (COMPASS) and at Jefferson Lab, among other laboratories.
External links
Polarized colliders may prove to be the key in mapping out proton spin structure
Dr. Deshpande research webpage
Spin Muon Collaboration
HERMES
COMPASS
The Spin Structure of the Nucleon - Status and Recent Results
Standard Model | Nucleon spin structure | [
"Physics"
] | 425 | [
"Standard Model",
"Particle physics"
] |
18,536,695 | https://en.wikipedia.org/wiki/Nanomorphic%20cell | The nanomorphic cell is a conception of an atomic-level, integrated, self-sustaining microsystem with five main functions: internal energy supply, sensing, actuation, computation and communication. Atomic level integration provides the ultimate functionality per unit volume for microsystems. The nanomorphic cell abstraction allows one to analyze the fundamental limits of attainable performance for nanoscale systems in much the same way that the Turing Machine and the Carnot Engine support such limit studies for information processing and heat engines respectively.
The nanomorphic cell concept is inspired by the trend, synergistic with semiconductor device scaling, to use these core technologies for diverse integrated system applications. This trend is called Functional Diversification and is characterized by the integration of non-CMOS devices such as sensors, actuators, energy sources, etc. with traditional CMOS and other novel information processing devices. The multifunctional microsystem becomes morphic (literally, "in the shape of") because its architecture is defined by the specific application and by the fundamental limits on volumetric system parameters.
The nanomorphic cell model was applied to analyze the capabilities of an autonomous integrated microsystem on the order of the size of a living cell, i.e. a cube 10 micrometers on a side [1, 2]. The function of this microsystem is, for example, upon injection into the body, to interact with living cells, e.g. determine the state of the cell and support certain "therapeutic" action. It must have the capability to collect data on the living cell, analyze the data, and make a decision on the state of the living cell. It must also communicate with an external controlling agent, and possibly take corrective action. Such a cell would need its own energy sources, sensors, computers, and communication devices, integrated into a complete system whose structure is dictated by the intended nanomorphic cell function. The nanomorphic cell can be considered an extreme example of a class of systems known generically as Autonomous Microsystems, for example WIMS (Wireless Integrated Microsystems), PicoNode, Lab-on-a-Pill and Smartdust.
References
Microtechnology
Nanotechnology | Nanomorphic cell | [
"Materials_science",
"Engineering"
] | 445 | [
"Nanotechnology",
"Materials science",
"Microtechnology"
] |
18,539,917 | https://en.wikipedia.org/wiki/Computational%20geophysics | Computational geophysics is the field of study that uses any type of numerical computations to generate and analyze models of complex geophysical systems. It can be considered an extension, or sub-field, of both computational physics and geophysics. In recent years, computational power, data availability, and modelling capabilities have all improved exponentially, making computational geophysics a more populated discipline. Due to the large computational size of many geophysical problems, high-performance computing can be required to handle analysis. Modeling applications of computational geophysics include atmospheric modelling, oceanic modelling, general circulation models, and geological modelling. In addition to modelling, some problems in remote sensing fall within the scope of computational geophysics such as tomography, inverse problems, and 3D reconstruction.
Geophysical models
The generation of geophysical models is a key component of computational geophysics. Geophysical models are defined as "physical-mathematical descriptions of temporal and/or spatial changes in important geological variables, as derived from accepted laws, theories, and empirical relationships." Geophysical models are frequently used by researchers in all disciplines of environmental science.
In climate science, atmospheric, oceanic, and general circulation models are a crucial standby for researchers. Although remote sensing has been steadily providing more and more in-situ measurements of geophysical variables, nothing comes close to the temporal and geospatial resolution of data provided by models. Although data can be subject to accuracy issues due to the extrapolation techniques used, the usage of modeled data is a commonly accepted practice in climate and meteorological sciences. Oftentimes, these models will be used in concert with in-situ measurements.
A few well-known models are
NCEP/NCAR Reanalysis Project, an atmospheric model
Global Forecast System, a numerical weather prediction model
HYCOM, a general ocean circulation model
Geological system models are frequently used in research, but have less public data availability than climatic and meteorological models. There is a wide range of software available that allows for geomodelling.
Remote sensing
The United States Geological Survey (USGS) defines remote sensing as the measurement of some property by transmitting some type of radiation at a distance, and measuring the emitted and reflected radiation. Remote sensing can involve satellites, cameras, and sound wave emission. Remote sensing is inherently a type of indirect measurement, meaning that some type of computation must be completed in order to obtain a measurement of the property of interest. For some applications, these computations can be highly complex. In addition, the analysis of these data products can be classified as computational geophysics.
Programs of study
In Canada, computational geophysics is offered as a university major in the form of a BSc (Hon.) with co-op at Carleton University.
Elsewhere, Rice University has a Center for Computational Geophysics, while Princeton University, the University of Texas, and California Institute of Technology have similar research centers. Experts, laboratories, projects, internships, undergraduate programs, graduate programs and/or facilities in the program exist at the University of Queensland, Wyoming University, Boston University, Stanford University, Uppsala University, Kansas State University, Kingston University, Australian National University, University of California, San Diego, University of Washington, Nanyang Technological University, ETH Zurich, University of Sydney, Appalachian State University, University of Minnesota, University of Tasmania, Bahria University, Boise State University, University of Michigan, University of Oulu, University of Utah, and others.
Laboratories
Federal organizations that study or apply computational geophysics include
Earth System Research Laboratories at NOAA
Earth Sciences Division at NASA
Computational Geophysics Lab at the Earth Observatory of Singapore
References
See also
Computational fluid dynamics
History of geophysics
List of ocean circulation models
Meteorological reanalysis
Numerical weather prediction
Computational science
Geophysics
Computational fields of study | Computational geophysics | [
"Physics",
"Mathematics",
"Technology"
] | 750 | [
"Computational fields of study",
"Applied and interdisciplinary physics",
"Applied mathematics",
"Computational science",
"Computing and society",
"Geophysics"
] |
1,846,548 | https://en.wikipedia.org/wiki/International%20Celestial%20Reference%20System%20and%20its%20realizations | The International Celestial Reference System (ICRS) is the current standard celestial reference system adopted by the International Astronomical Union (IAU). Its origin is at the barycenter of the Solar System, with axes that are intended to "show no global rotation with respect to a set of distant extragalactic objects". This fixed reference system differs from previous reference systems, which had been based on Catalogues of Fundamental Stars that had published the positions of stars based on direct "observations of [their] equatorial coordinates, right ascension and declination" and had adopted as "privileged axes ... the mean equator and the dynamical equinox" at a particular date and time.
The International Celestial Reference Frame (ICRF) is a realization of the International Celestial Reference System using reference celestial sources observed at radio wavelengths. In the context of the ICRS, a reference frame (RF) is the physical realization of a reference system, i.e., the reference frame is the set of numerical coordinates of the reference sources, derived using the procedures spelled out by the ICRS.
More specifically, the ICRF is an inertial barycentric reference frame whose axes are defined by the measured positions of extragalactic sources (mainly quasars) observed using very-long-baseline interferometry while the Gaia-CRF is an inertial barycentric reference frame defined by optically measured positions of extragalactic sources by the Gaia satellite and whose axes are rotated to conform to the ICRF. Although general relativity implies that there are no true inertial frames around gravitating bodies, these reference frames are important because they do not exhibit any measurable angular rotation since the extragalactic sources used to define the ICRF and the Gaia-CRF are so far away. The ICRF and the Gaia-CRF are now the standard reference frames used to define the positions of astronomical objects.
Reference systems and frames
It is useful to distinguish reference systems and reference frames. A reference frame has been defined as "a catalogue of the adopted coordinates of a set of reference objects that serves to define, or realize, a particular coordinate frame". A reference system is a broader concept, encompassing "the totality of procedures, models and constants that are required for the use of one or more reference frames".
Realizations
The ICRF is based on hundreds of extra-galactic radio sources, mostly quasars, distributed around the entire sky. Because they are so distant, they are apparently stationary to our current technology, yet their positions can be measured very accurately by Very Long Baseline Interferometry (VLBI). The positions of most are known to 1 milliarcsecond (mas) or better.
In August 1997, the International Astronomical Union resolved in Resolution B2 of its XXIIIrd General Assembly "that the Hipparcos Catalogue shall be the primary realization of the ICRS at optical wavelengths." The Hipparcos Celestial Reference Frame (HCRF) is based on a subset of about 100,000 stars in the Hipparcos Catalogue. In August 2021 the International Astronomical Union decided in Resolution B3 of its XXXIst General Assembly "that as from 1 January 2022, the fundamental realization of the International Celestial Reference System (ICRS) shall comprise the Third Realization of the International Celestial Reference Frame (ICRF3) for the radio domain and the Gaia-CRF3 for the optical domain."
Radio wavelengths (ICRF)
ICRF1
The ICRF, now called ICRF1, was adopted by the International Astronomical Union (IAU) as of 1 January 1998. ICRF1 was oriented to the axes of the ICRS, which reflected the prior astronomical reference frame The Fifth Fundamental Catalog (FK5). It had an angular noise floor of approximately 250 microarcseconds (μas) and a reference axis stability of approximately 20 μas; this was an order-of-magnitude improvement over the previous reference frame derived from (FK5). The ICRF1 contains 212 defining sources and also contains positions of 396 additional non-defining sources for reference. The positions of these sources have been adjusted in later extensions to the catalogue. ICRF1 agrees with the orientation of the Fifth Fundamental Catalog (FK5) "J2000.0" frame to within the (lower) precision of the latter.
ICRF2
An updated reference frame ICRF2 was created in 2009. The update was a joint collaboration of the International Astronomical Union, the International Earth Rotation and Reference Systems Service, and the International VLBI Service for Geodesy and Astrometry. ICRF2 is defined by the position of 295 compact radio sources (97 of which also define ICRF1). Alignment of ICRF2 with ICRF1-Ext2, the second extension of ICRF1, was made with 138 sources common to both reference frames. Including non-defining sources, it comprises 3414 sources measured using very-long-baseline interferometry. The ICRF2 has a noise floor of approximately 40 μas and an axis stability of approximately 10 μas. Maintenance of the ICRF2 will be accomplished by a set of 295 sources that have especially good positional stability and unambiguous spatial structure.
The data used to derive the reference frame come from approximately 30 years of VLBI observations, from 1979 to 2009. Radio observations in both the S-band (2.3 GHz) and X-band (8.4 GHz) were recorded simultaneously to allow correction for ionospheric effects. The observations resulted in about 6.5 million group-delay measurements among pairs of telescopes. The group delays were processed with software that takes into account atmospheric and geophysical processes. The positions of the reference sources were treated as unknowns to be solved for by minimizing the mean squared error across group-delay measurements. The solution was constrained to be consistent with the International Terrestrial Reference Frame (ITRF2008) and earth orientation parameters (EOP) systems.
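The final step described above, treating the source positions as unknowns that minimize the mean squared error of the delay measurements, is a (very large) least-squares problem. The sketch below is a schematic, synthetic-data illustration of that idea only; real VLBI solutions involve millions of delays and detailed atmospheric and geophysical modelling.

```python
# Minimal sketch (synthetic data): solve for unknown parameters that minimize the
# mean squared residual of noisy, linearized "delay" measurements.
import numpy as np

rng = np.random.default_rng(42)

n_obs, n_params = 500, 6                      # e.g. small corrections to a few coordinates
A = rng.normal(size=(n_obs, n_params))        # design matrix: sensitivity of each delay to each unknown
x_true = rng.normal(size=n_params)            # "true" corrections, unknown in practice
delays = A @ x_true + rng.normal(scale=0.01, size=n_obs)   # observed delays with noise

x_hat, residuals, *_ = np.linalg.lstsq(A, delays, rcond=None)
print("estimated corrections:", np.round(x_hat, 3))
print("max error vs truth   :", np.abs(x_hat - x_true).max())
```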
ICRF3
ICRF3 is the third major revision of the ICRF, and was adopted by the IAU in August 2018 and became effective 1 January 2019. The modeling incorporates the effect of the galactocentric acceleration of the solar system, a new feature over and above ICRF2. ICRF3 also includes measurements at three frequency bands, providing three independent, and slightly different, realizations of the ICRS: dual frequency measurements at 8.4 GHz (X band) and 2.3 GHz (S band) for 4536 sources; measurements of 824 sources at 24 GHz (K band), and dual frequency measurements at 32 GHz (Ka band) and 8.4 GHz (X band) for 678 sources. Of these, 303 sources, uniformly distributed on the sky, are identified as "defining sources" which fix the axes of the frame. ICRF3 also increases the number of defining sources in the southern sky.
Optical wavelengths
Hipparcos Celestial Reference Frame (HCRF)
In 1991 the International Astronomical Union recommended "that observing programmes be undertaken or continued in order to ... determine the relationship between catalogues of extragalactic source positions and ... the [stars of the] FK5 and Hipparcos catalogues." Using a variety of linking techniques, the coordinate axes defined by the Hipparcos catalogue were aligned with the extragalactic radio frame. In August 1997, the International Astronomical Union recognized in Resolution B2 of its XXIIIrd General Assembly "That the Hipparcos Catalogue was finalized in 1996 and that its coordinate frame is aligned to that of the frame of the extragalactic sources [ICRF1] with one sigma uncertainties of ±0.6 milliarcseconds (mas)" and resolved "that the Hipparcos Catalogue shall be the primary realization of the ICRS at optical wavelengths."
Second Gaia celestial reference frame (Gaia–CRF2)
The second Gaia celestial reference frame (Gaia–CRF2), based on 22 months of observations of over half a million extragalactic sources by the Gaia spacecraft, appeared in 2018 and has been described as "the first full-fledged optical realisation of the ICRS, that is to say, an optical reference frame built only on extragalactic sources." The axes of Gaia-CRF2 were aligned to a prototype version of the forthcoming ICRF3 using 2820 objects common to Gaia-CRF2 and to the ICRF3 prototype.
Third Gaia celestial reference frame (Gaia–CRF3)
The third Gaia celestial reference frame (Gaia–CRF3) is based on 33 months of observations of 1,614,173 extragalactic sources. As with the earlier Hipparcos and Gaia reference frames, the axes of Gaia-CRF3 were aligned to 3142 optical counterparts of ICRF-3 in the S/X frequency bands. In August 2021 the International Astronomical Union noted that the Gaia-CRF3 had "largely superseded the Hipparcos Catalogue" and was "de facto the optical realization of the Celestial Reference Frame within the astronomical community." Consequently, the IAU decided that Gaia-CRF3 shall be "the fundamental realization of the International Celestial Reference System (ICRS) ... for the optical domain."
See also
Astrometry
Astronomy
Barycentric and geocentric celestial reference systems
International Terrestrial Reference System and Frame
References
Further reading
Kovalevsky, Jean; Mueller, Ivan Istvan; Kołaczek, Barbara (1989) Reference Frames in Astronomy and Geophysics, Astrophysics and Space Science Library, Volume 154 Kluwer Academic Publishers
External links
International Celestial Reference System (ICRS) from USNO
Overview of ICRS and ICRF
IERS Conventions 2003 (defines ICRS and other related standards)
ICRF page from the International Earth Rotation Service
General information on the ICRS from IERS
ICRS Product Center
Astronomical coordinate systems
Astrometry
Frames of reference | International Celestial Reference System and its realizations | [
"Physics",
"Astronomy",
"Mathematics"
] | 2,054 | [
"Coordinate systems",
"Frames of reference",
"Astrometry",
"Classical mechanics",
"Astronomical coordinate systems",
"Theory of relativity",
"Astronomical sub-disciplines"
] |
1,846,827 | https://en.wikipedia.org/wiki/Catastrophe%20modeling | Catastrophe modeling (also known as cat modeling) is the process of using computer-assisted calculations to estimate the losses that could be sustained due to a catastrophic event such as a hurricane or earthquake. Cat modeling is especially applicable to analyzing risks in the insurance industry and is at the confluence of actuarial science, engineering, meteorology, and seismology.
Catastrophes/ Perils
Natural catastrophes (sometimes referred to as "nat cat") that are modeled include:
Hurricane (main peril is wind damage; some models can also include storm surge and rainfall)
Earthquake (main peril is ground shaking; some models can also include tsunami, fire following earthquakes, liquefaction, landslide, and sprinkler leakage damage)
severe thunderstorm or severe convective storms (main sub-perils are tornado, straight-line winds and hail)
Flood
Extratropical cyclone (commonly referred to as European windstorm)
Wildfire
Winter storm
Human catastrophes include:
Terrorism events
Warfare
Casualty/liability events
Forced displacement crises
Cyber data breaches
Lines of business modeled
Cat modeling involves many lines of business, including:
Personal property
Commercial property
Workers' compensation
Automobile physical damage
Limited liabilities
Product liability
Business Interruption
Inputs, Outputs, and Use Cases
The input into a typical cat modeling software package is information on the exposures being analyzed that are vulnerable to catastrophe risk. The exposure data can be categorized into three basic groups:
Information on the site locations, referred to as geocoding data (street address, postal code, county/CRESTA zone, etc.)
Information on the physical characteristics of the exposures (construction, occupation/occupancy, year built, number of stories, number of employees, etc.)
Information on the financial terms of the insurance coverage (coverage value, limit, deductible, etc.)
The output of a cat model is an estimate of the losses that the model predicts would be associated with a particular event or set of events. When running a probabilistic model, the output is either a probabilistic loss distribution or a set of events that could be used to create a loss distribution; probable maximum losses ("PMLs") and average annual losses ("AALs") are calculated from the loss distribution. When running a deterministic model, losses caused by a specific event are calculated; for example, Hurricane Katrina or "a magnitude 8.0 earthquake in downtown San Francisco" could be analyzed against the portfolio of exposures.
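The probabilistic outputs can be summarized from a year loss table, i.e. one total loss per simulated year: the AAL is the mean of the annual losses and a return-period PML is a high quantile of the same distribution. The loss generator in the sketch below is synthetic and purely illustrative, not any vendor's event model.

```python
# Minimal sketch (synthetic data): deriving AAL and a 1-in-250-year PML from a
# simulated year loss table.
import random

random.seed(0)
N_YEARS = 100_000

def simulate_annual_loss() -> float:
    # Illustrative frequency/severity assumptions: 0-2 events per year, Pareto severities.
    n_events = random.choices([0, 1, 2], weights=[0.70, 0.25, 0.05])[0]
    return sum(random.paretovariate(1.5) * 1e6 for _ in range(n_events))

year_losses = sorted(simulate_annual_loss() for _ in range(N_YEARS))

aal = sum(year_losses) / N_YEARS
pml_250 = year_losses[int(N_YEARS * (1 - 1 / 250))]   # 1-in-250-year loss level

print(f"AAL        ≈ {aal:,.0f}")
print(f"250-yr PML ≈ {pml_250:,.0f}")
```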
Cat models have a variety of use cases for a number of industries, including:
Insurers and risk managers use cat modeling to assess the risk in a portfolio of exposures. This might help guide an insurer's underwriting strategy or help them decide how much reinsurance to purchase.
Some state departments of insurance allow insurers to use cat modeling in their rate filings to help determine how much premium their policyholders are charged in catastrophe-prone areas.
Insurance rating agencies such as A. M. Best and Standard & Poor's use cat modeling to assess the financial strength of insurers that take on catastrophe risk.
Reinsurers and reinsurance brokers use cat modeling in the pricing and structuring of reinsurance treaties.
European insurers use cat models to derive the required regulatory capital under the Solvency II regime. Cat models are used to derive catastrophe loss probability distributions which are components of many Solvency II internal capital models.
Likewise, cat bond investors, investment banks, and bond rating agencies use cat modeling in the pricing and structuring of a catastrophe bond.
Open catastrophe modeling
The Oasis Loss Modelling Framework ("LMF") is an open source catastrophe modeling platform. It was developed by a nonprofit organisation funded and owned by the insurance industry to promote open access to models and to promote transparency. Additionally, some firms within the insurance industry are currently working with the Association for Cooperative Operations Research and Development (ACORD) to develop an industry standard for collecting and sharing exposure data.
See also
HAZUS
Year loss table
Catastrophe theory
Catastrophe (disambiguation)
References
External links
International Society of Catastrophe Managers
Florida Public Hurricane Loss Model
Insurance Information Institute
LMF source code repository
Actuarial science
Disaster management tools
Natural hazards
Environmental modelling | Catastrophe modeling | [
"Physics",
"Mathematics",
"Environmental_science"
] | 860 | [
"Physical phenomena",
"Earth phenomena",
"Applied mathematics",
"Actuarial science",
"Natural hazards",
"Environmental modelling"
] |
1,847,316 | https://en.wikipedia.org/wiki/Liquid%20Scintillator%20Neutrino%20Detector | The Liquid Scintillator Neutrino Detector (LSND) was a scintillation counter at Los Alamos National Laboratory that measured the number of neutrinos being produced by an accelerator neutrino source. The LSND project was created to look for evidence of neutrino oscillation, and its results conflict with the Standard Model expectation of only three neutrino flavors, when considered in the context of other solar and atmospheric neutrino oscillation experiments. Cosmological data bound the mass of the sterile neutrino to m_s < 0.26 eV (0.44 eV) at the 95% (99.9%) confidence limit, excluding at high significance the sterile neutrino hypothesis as an explanation of the LSND anomaly. The controversial LSND result was tested by the MiniBooNE experiment at Fermilab, which found similar evidence for oscillations.
The hint is currently undergoing further tests at MicroBooNE at Fermilab.
The detector consisted of a tank filled with 167 tons (50,000 gallons) of mineral oil and of b-PDB (2-(4-tert-butylphenyl)-5-(4-biphenyl)-1,3,4-oxadiazole) organic scintillator material. Cherenkov light emitted by particle interactions was detected by an array of 1220 photomultiplier tubes. The experiment collected data from 1993 to 1998.
References
Further reading
External links
LSND strengthens evidence for neutrino oscillations
LSND scientific publications
LSND scientific publications, SPIRES database
The Neutrino Oscillations Industry
Accelerator neutrino experiments | Liquid Scintillator Neutrino Detector | [
"Physics"
] | 354 | [
"Particle physics stubs",
"Particle physics"
] |
1,848,012 | https://en.wikipedia.org/wiki/Intrathecal%20administration | Intrathecal administration is a route of administration for drugs via an injection into the spinal canal, or into the subarachnoid space so that it reaches the cerebrospinal fluid (CSF). It is useful in several applications, such as for spinal anesthesia, chemotherapy, or pain management. This route is also used to introduce drugs that fight certain infections, particularly post-neurosurgical. Typically, the drug is given this way to avoid being stopped by the blood–brain barrier, as it may not be able to pass into the brain when given orally. Drugs given by the intrathecal route often have to be compounded specially by a pharmacist or technician because they cannot contain any preservative or other potentially harmful inactive ingredients that are sometimes found in standard injectable drug preparations.
Intrathecal pseudodelivery is a technique in which the drug is encapsulated in a porous capsule that is placed in communication with the CSF. In this method, the drug is not released into the CSF. Instead, the CSF is in communication with the capsule through its porous walls, allowing the drug to interact with its target within the capsule itself. This allows for localized treatment while avoiding systemic distribution of the drug, potentially reducing side effects and enhancing the therapeutic efficacy for conditions affecting the central nervous system.
The route of administration is sometimes simply referred to as "intrathecal"; however, the term is also an adjective that refers to something occurring in or introduced into the anatomic space or potential space inside a sheath, most commonly the arachnoid membrane of the brain or spinal cord (under which is the subarachnoid space). For example, intrathecal immunoglobulin production is production of antibodies in the spinal cord. The abbreviation "IT" is best not used; instead, "intrathecal" is spelled out to avoid medical mistakes.
Applications of intrathecal administration
Analgesics
Intrathecal administration is often used for a single 24-hour dose of analgesia (opioid with local anesthetic). Caution should be exercised with intrathecal opioids due to the risk of late onset hypoventilation. The use of intrathecal morphine may be limited by severe pruritus and urinary retention.
Pethidine has the unusual property of being both a local anaesthetic and opioid analgesic, which occasionally permits its use as the sole intrathecal anaesthetic agent.
An intrathecal pump system can be used to deliver a local anaesthetic, and/or an opioid and/or an atypical analgesic agent as ziconotide.
Antifungals
Amphotericin B is administered intrathecally to treat fungal infections involving the central nervous system.
Cancer chemotherapy
Currently, only four agents are licensed for intrathecal cancer chemotherapy: methotrexate, cytarabine, hydrocortisone, and thiotepa.
Administration of any vinca alkaloids, especially vincristine, via the intrathecal route is nearly always fatal.
Baclofen
Often reserved for spastic cerebral palsy, baclofen can be administered through an intrathecal pump implanted just below the skin of the abdomen or behind the chest wall, with a catheter connected directly to the base of the spine. Intrathecal baclofen pumps sometimes carry serious clinical risks, such as infection or a possibly fatal sudden malfunction.
Mesenchymal Stem Cell Therapy
Treatment of chronic spinal injuries via the administration of mesenchymal stem cells, either from adipose tissue or bone marrow, is experimental, with better results from the former source. Introduction of mesenchymal stem cells promotes the microenvironment needed for axonal regrowth and reduces the inflammation caused by astrocyte proliferation and glial scar tissue.
Animal models have shown improved motor control below the site of injury. A clinical trial also showed statistically significant improvement in sensitivity below the site of injury in patients.
See also
Cancer pain/Interventional/Intrathecal pump
History of neuraxial anesthesia
Intrathecal pump
Theca
Thecal sac
References
Medical treatments
Routes of administration
Dosage forms | Intrathecal administration | [
"Chemistry"
] | 886 | [
"Pharmacology",
"Routes of administration"
] |
1,848,140 | https://en.wikipedia.org/wiki/Azurophilic%20granule | An azurophilic granule is a cellular object readily stainable with a Romanowsky stain. In white blood cells and hyperchromatin, staining imparts a burgundy or merlot coloration. Neutrophils in particular are known for containing azurophils loaded with a wide variety of anti-microbial defensins that fuse with phagocytic vacuoles. Azurophils may contain myeloperoxidase, phospholipase A2, acid hydrolases, elastase, defensins, neutral serine proteases, bactericidal permeability-increasing protein, lysozyme, cathepsin G, proteinase 3, and proteoglycans.
Azurophil granules are also known as "primary granules".
Furthermore, the term "azurophils" may refer to a unique type of cells, identified only in reptiles. These cells are similar in size to so-called heterophils with abundant cytoplasm that is finely to coarsely granular and may sometimes contain vacuoles. Granules may impart a purplish hue to the cytoplasm, particularly to the outer region. Occasionally, azurophils are observed with vacuolated cytoplasm.
See also
Azure A
Azure (color)
Granule
Lysosome
Specific granules
Neutrophil degranulation
References
Hematology
Staining | Azurophilic granule | [
"Chemistry",
"Biology"
] | 305 | [
"Staining",
"Microbiology techniques",
"Cell imaging",
"Microscopy"
] |
1,848,689 | https://en.wikipedia.org/wiki/Fusee%20%28horology%29 | A fusee (from the French fusée, wire wound around a spindle) is a cone-shaped pulley with a helical groove around it, wound with a cord or chain attached to the mainspring barrel of antique mechanical watches and clocks. It was used from the 15th century to the early 20th century to improve timekeeping by equalizing the uneven pull of the mainspring as it ran down. Gawaine Baillie stated of the fusee, "Perhaps no problem in mechanics has ever been solved so simply and so perfectly."
History
The origin of the fusee is not known. Many sources erroneously credit clockmaker Jacob Zech of Prague with inventing it around 1525. The earliest definitely dated fusee clock was made by Zech in 1525, but the fusee actually appeared earlier, with the first spring driven clocks in the 15th century. The idea probably did not originate with clockmakers, since the earliest known example is in a crossbow windlass shown in a 1405 military manuscript. Drawings from the 15th century by Filippo Brunelleschi and Leonardo da Vinci show fusees. The earliest existing clock with a fusee, also the earliest spring-powered clock, is the Burgunderuhr (Burgundy clock), a chamber clock whose iconography suggests that it was made for Philip the Good, Duke of Burgundy about 1430, and preserved in the Germanisches Nationalmuseum. The word fusee comes from the French fusée and late Latin fusata, 'spindle full of thread'.
Springs were first employed to power clocks in the 15th century, to make them smaller and portable. These early spring-driven clocks were much less accurate than weight-driven clocks. Unlike a weight on a cord, which exerts a constant force to turn the clock's wheels, the force a spring exerts diminishes as the spring unwinds. The primitive verge and foliot timekeeping mechanism, used in all early clocks, was sensitive to changes in drive force. So early spring-driven clocks slowed down over their running period as the mainspring unwound, causing inaccurate timekeeping. This problem is called lack of isochronism.
Two solutions to this problem appeared with the first spring driven clocks; the stackfreed and the fusee. The stackfreed, a crude cam compensator, added a lot of friction and was abandoned after less than a century. The fusee was a much more lasting idea. As the movement ran, the tapering shape of the fusee pulley continuously changed the mechanical advantage of the pull from the mainspring, compensating for the diminishing spring force. Clockmakers apparently empirically discovered the correct shape for the fusee, which is not a simple cone but a hyperboloid. The first fusees were long and slender, but later ones have a more squat compact shape. Fusees became the standard method of getting constant force from a mainspring, used in most spring-wound clocks, and watches when they appeared in the 17th century.
At first the fusee cord was made of gut, or sometimes wire. Around 1650 chains began to be used, which lasted longer. Gruet of Geneva is widely credited with introducing them in 1664, although the first reference to a fusee chain is around 1540. Fusees designed for use with cords can be distinguished by their grooves, which have a circular cross section, where ones designed for chains have rectangular-shaped grooves.
Around 1726 John Harrison added the maintaining power spring to the fusee to keep marine chronometers running during winding, and this was generally adopted.
Operation
The mainspring is coiled around a stationary axle (arbor), inside a cylindrical box, the barrel. The force of the spring turns the barrel. In a fusee clock, the barrel turns the fusee by pulling on the chain, and the fusee turns the clock's gears.
When the mainspring is wound up (Fig. 1), all the chain is wrapped around the fusee from bottom to top, and the end going to the barrel comes off the narrow top end of the fusee. So the strong pull of the wound up mainspring is applied to the small end of the fusee, and the torque on the fusee is reduced by the small lever arm of the fusee radius.
As the clock runs, the chain is unwound from the fusee from top to bottom and wound on the barrel.
As the mainspring runs down (Fig. 2), more of the chain is wrapped on the barrel, and the chain going to the barrel comes off the wide bottom grooves of the fusee. Then the weakened pull of the mainspring is applied to the larger radius of the bottom of the fusee. The greater turning moment provided by the larger radius at the fusee compensates for the weaker force of the spring, keeping the drive torque constant.
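A minimal numerical sketch of this compensation, assuming a barrel of fixed radius, a spring torque that declines linearly as it unwinds, and a fusee radius chosen inversely proportional to that torque (all numbers are illustrative):

```python
# Spring torque falls as the mainspring unwinds; the fusee radius grows from
# its narrow top to its wide base so that the product (drive torque, with the
# barrel radius held constant) stays roughly constant.
turns = range(8)
spring_torque = [10.0 - 0.8 * t for t in turns]        # arbitrary units, declining
fusee_radius = [10.0 / tau for tau in spring_torque]   # chosen so torque * radius ~ constant

for t, tau, r in zip(turns, spring_torque, fusee_radius):
    print(f"turn {t}: spring torque {tau:4.1f}, fusee radius {r:5.2f}, drive torque {tau * r:4.1f}")
```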
To wind the clock up again, a key is fitted to the protruding squared off axle (winding arbor) of the fusee and the fusee is turned. The pull of the fusee unwinds the chain off the barrel and back onto the fusee, turning the barrel and winding the mainspring. The presence of the fusee means that the force required to wind up the mainspring is constant; it does not increase as the mainspring tightens.
The gear on the fusee drives the movement's wheel train, usually the center wheel. There is a ratchet between the fusee and its gear (not visible, inside the fusee) which prevents the fusee from turning the clock's wheel train backwards while it is being wound up. In quality watches and many later fusee movements there is also a maintaining power spring, to provide temporary force to keep the movement going while it is being wound. This type is called a going fusee. It is usually a planetary gear mechanism (epicyclic gearing) in the base of the fusee "cone" which then provides turning power in the opposite direction to the 'winding up' direction therefore keeping the watch or clock running during winding.
Most fusee clocks and watches include a 'winding stop' mechanism to prevent the mainspring and fusee from being wound up too far, possibly breaking the chain. As it is wound, the fusee chain rises toward the top of the fusee. When it reaches the top, it presses against a lever, which moves a metal blade into the path of a projection sticking out from the edge of the fusee. As the fusee turns, the projection catches on the blade, preventing further winding.
The normal fusee can only be wound in one direction. "Drunken" fusees were developed, but rarely used, to allow the fusee to be wound in either direction. John Arnold unsuccessfully used them in a few marine chronometers.
Obsolescence
The fusee was a good mainspring compensator, but it was also expensive, difficult to adjust, and had other disadvantages:
It was bulky and tall, and made pocket watches unfashionably thick.
If the mainspring broke and had to be replaced, a frequent occurrence with early mainsprings, the fusee had to be readjusted to the new spring.
If the fusee chain broke, the force of the mainspring sent the end whipping about the inside of the clock, causing damage.
Achieving isochrony was recognised as a serious problem throughout the 500-year history of spring-driven clocks. Many parts were gradually improved to increase isochronism, and eventually the fusee became unnecessary in most timepieces.
The invention of the pendulum and the balance spring in the mid-17th century made clocks and watches much more isochronous, by making the timekeeping element a harmonic oscillator, with a natural "beat" resistant to change. The pendulum clock with an anchor escapement, invented in 1670, was sufficiently independent of drive force so that only a few had fusees.
In pocketwatches, the verge escapement, which required a fusee, was gradually replaced by escapements which were less sensitive to changes in mainspring force: the cylinder and later the lever escapement. In 1760, Jean-Antoine Lépine dispensed with the fusee, inventing a going barrel to power the watch gear train directly. This contained a very long mainspring, of which only a few turns were used to power the watch. Accordingly, only a part of the mainspring's 'torque curve' was used, where the torque was approximately constant. In the 1780s, pursuing thinner watches, French watchmakers adopted the going barrel with the cylinder escapement. By 1850, the Swiss and American watchmaking industries employed the going barrel exclusively, aided by new methods of adjusting the balance spring so that it was isochronous. England continued to make the bulkier full plate fusee watches until about 1900. They were inexpensive models sold to the lower classes and were derisively called "turnips". After this, the only remaining use for the fusee was in marine chronometers, where the highest precision was needed, and bulk was less of a disadvantage, until they became obsolete in the 1970s.
References
, p. 121
, p. 127-128
, p. 63-69
Notes
External links
Kover, London – 18th Century Watchmaker Blog to discover Kover pocket watches that still exist and documents that refer to the watchmaker
Timekeeping components
Horology | Fusee (horology) | [
"Physics",
"Technology"
] | 1,963 | [
"Physical quantities",
"Horology",
"Time",
"Timekeeping components",
"Spacetime",
"Components"
] |
1,848,952 | https://en.wikipedia.org/wiki/Electrical%20resistivities%20of%20the%20elements%20%28data%20page%29 |
Electrical resistivity
References
WEL
As quoted at http://www.webelements.com/ from these sources:
G.W.C. Kaye and T. H. Laby in Tables of physical and chemical constants, Longman, London, UK, 15th edition, 1993.
A.M. James and M.P. Lord in Macmillan's Chemical and Physical Data, Macmillan, London, UK, 1992.
D.R. Lide, (ed.) in Chemical Rubber Company handbook of chemistry and physics, CRC Press, Boca Raton, Florida, USA, 79th edition, 1998.
J.A. Dean (ed) in Lange's Handbook of Chemistry, McGraw-Hill, New York, USA, 14th edition, 1992.
CRC
As quoted from various sources in an online version of:
David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 12, Properties of Solids; Electrical Resistivity of Pure Metals
CR2
As quoted in an online version of:
David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 4, Properties of the Elements and Inorganic Compounds; Physical Properties of the Rare Earth Metals
which further refers to:
Beaudry, B. J. and Gschneidner, K.A., Jr., in Handbook on the Physics and Chemistry of Rare Earths, Vol. 1, Gschneidner, K.A., Jr. and Eyring, L., Eds., North-Holland Physics, Amsterdam, 1978, 173.
McEwen, K.A., in Handbook on the Physics and Chemistry of Rare Earths, Vol. 1, Gschneidner, K.A., Jr. and Eyring, L., Eds., North-Holland Physics, Amsterdam, 1978, 411.
LNG
As quoted from:
J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 4, Table 4.1 Electronic Configuration and Properties of the Elements
See also
Chemical properties
Chemical element data pages
Elements | Electrical resistivities of the elements (data page) | [
"Physics",
"Chemistry",
"Mathematics"
] | 472 | [
"Physical quantities",
"Chemical data pages",
"Quantity",
"Chemical element data pages",
"nan",
"Wikipedia categories named after physical quantities",
"Electrical resistance and conductance"
] |
1,850,013 | https://en.wikipedia.org/wiki/Zitterbewegung | In physics, the zitterbewegung (German for "jittery motion") is the theoretical prediction of a rapid oscillatory motion of elementary particles that obey relativistic wave equations. This prediction was first discussed by Gregory Breit in 1928 and later by Erwin Schrödinger in 1930 as a result of analysis of the wave packet solutions of the Dirac equation for relativistic electrons in free space, in which an interference between positive and negative energy states produces an apparent fluctuation (up to the speed of light) of the position of an electron around the median, with an angular frequency of $2mc^2/\hbar$, or approximately $1.6\times10^{21}$ radians per second.
This apparent oscillatory motion is often interpreted as an artifact of using the Dirac equation in a single particle description. For the hydrogen atom, the zitterbewegung is related to the Darwin term, a small correction of the energy level of the s-orbitals.
Theory
Free spin-1/2 fermion
The time-dependent Dirac equation is written as
$i\hbar \frac{\partial \psi}{\partial t} = H\psi ,$
where $\hbar$ is the reduced Planck constant, $\psi$ is the wave function (bispinor) of a spin-1/2 fermionic particle, and $H$ is the Dirac Hamiltonian of a free particle:
$H = c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta m c^2 ,$
where $m$ is the mass of the particle, $c$ is the speed of light, $\hat{\mathbf{p}}$ is the momentum operator, and $\alpha_k$ and $\beta$ are matrices related to the gamma matrices $\gamma^\mu$, as $\alpha_k = \gamma^0\gamma^k$ and $\beta = \gamma^0$.
In the Heisenberg picture, the time dependence of an arbitrary observable obeys the equation
In particular, the time-dependence of the position operator is given by
.
where is the position operator at time .
The above equation shows that the operator can be interpreted as the -th component of a "velocity operator".
Note that this implies that
,
as if the "root mean square speed" in every direction of space is the speed of light.
To add time-dependence to , one implements the Heisenberg picture, which says
.
The time-dependence of the velocity operator is given by
,
where
Now, because both and are time-independent, the above equation can easily be integrated twice to find the explicit time-dependence of the position operator.
First:
,
and finally
.
The resulting expression consists of an initial position, a motion proportional to time, and an oscillation term with an amplitude equal to the reduced Compton wavelength. That oscillation term is the so-called zitterbewegung.
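For a free electron, the scale of this term can be checked numerically; a back-of-the-envelope sketch using standard constants (not a simulation of the Dirac dynamics):

```python
# Order-of-magnitude check of the zitterbewegung term for a free electron:
# angular frequency 2*m*c^2/hbar and the reduced Compton wavelength hbar/(m*c),
# which sets the amplitude scale mentioned above.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
c = 2.99792458e8         # m/s

omega = 2 * m_e * c**2 / hbar      # ~1.6e21 rad/s
lambda_bar = hbar / (m_e * c)      # ~3.9e-13 m

print(f"angular frequency: {omega:.2e} rad/s")
print(f"reduced Compton wavelength: {lambda_bar:.2e} m")
```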
Interpretation
In quantum mechanics, the zitterbewegung term vanishes on taking expectation values for wave-packets that are made up entirely of positive- (or entirely of negative-) energy waves. The standard relativistic velocity can be recovered by taking a Foldy–Wouthuysen transformation, when the positive and negative components are decoupled. Thus, we arrive at the interpretation of the zitterbewegung as being caused by interference between positive- and negative-energy wave components.
In quantum electrodynamics (QED) the negative-energy states are replaced by positron states, and the zitterbewegung is understood as the result of interaction of the electron with spontaneously forming and annihilating electron-positron pairs.
More recently, it has been noted that in the case of free particles it could just be an artifact of the simplified theory. Zitterbewegung appears to be due to the "small components" of the Dirac 4-spinor, that is, to a small antiparticle admixture in the particle wavefunction for nonrelativistic motion. It does not appear in the correct second-quantized theory, or rather, it is resolved by using Feynman propagators and doing QED. Nevertheless, it is an interesting way to understand certain QED effects heuristically from the single-particle picture.
Zigzag picture of fermions
An alternative perspective of the physical meaning of zitterbewegung was provided by Roger Penrose, by observing that the Dirac equation can be reformulated by splitting the four-component Dirac spinor into a pair of massless left-handed and right-handed two-component spinors (or zig and zag components), where each is the source term in the other's equation of motion, with a coupling constant proportional to the original particle's rest mass , as
.
The original massive Dirac particle can then be viewed as being composed of two massless components, each of which continually converts itself to the other. Since the components are massless they move at the speed of light, and their spin is constrained to be about the direction of motion, but each has opposite helicity: and since the spin remains constant, the direction of the velocity reverses, leading to the characteristic zigzag or zitterbewegung motion.
Experimental simulation
Zitterbewegung of a free relativistic particle has never been observed directly, although some authors believe they have found evidence in favor of its existence. It has also been simulated in atomic systems that provide analogues of a free Dirac particle. The first such example, in 2010, placed a trapped ion in an environment such that the non-relativistic Schrödinger equation for the ion had the same mathematical form as the Dirac equation (although the physical situation is different). Zitterbewegung-like oscillations of ultracold atoms in optical lattices were predicted in 2008. In 2013, zitterbewegung was simulated in a Bose–Einstein condensate of 50,000 atoms of 87Rb confined in an optical trap.
An optical analogue of zitterbewegung was demonstrated in a quantum cellular automaton implemented with orbital angular momentum states of light.
Other proposals for condensed-matter analogues include semiconductor nanostructures, graphene and topological insulators.
See also
Casimir effect
Lamb shift
References
Further reading
External links
Zitterbewegung in New Scientist
Quantum field theory | Zitterbewegung | [
"Physics"
] | 1,227 | [
"Quantum field theory",
"Quantum mechanics"
] |
1,850,216 | https://en.wikipedia.org/wiki/Fermat%27s%20theorem%20on%20sums%20of%20two%20squares | In additive number theory, Fermat's theorem on sums of two squares states that an odd prime p can be expressed as
$p = x^2 + y^2,$
with x and y integers, if and only if
$p \equiv 1 \pmod{4}.$
The prime numbers for which this is true are called Pythagorean primes.
For example, the primes 5, 13, 17, 29, 37 and 41 are all congruent to 1 modulo 4, and they can be expressed as sums of two squares in the following ways:
On the other hand, the primes 3, 7, 11, 19, 23 and 31 are all congruent to 3 modulo 4, and none of them can be expressed as the sum of two squares. This is the easier part of the theorem, and follows immediately from the observation that all squares are congruent to 0 (if number squared is even) or 1 (if number squared is odd) modulo 4.
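Written out, the congruence computation behind this observation is the following (a restatement of the paragraph above, nothing new):

```latex
(2k)^2 = 4k^2 \equiv 0 \pmod{4}, \qquad
(2k+1)^2 = 4k(k+1) + 1 \equiv 1 \pmod{4},
```

so a sum of two squares is congruent to 0, 1 or 2 modulo 4, and can never be congruent to 3 modulo 4.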
Since the Diophantus identity implies that the product of two integers each of which can be written as the sum of two squares is itself expressible as the sum of two squares, by applying Fermat's theorem to the prime factorization of any positive integer n, we see that if all the prime factors of n congruent to 3 modulo 4 occur to an even exponent, then n is expressible as a sum of two squares. The converse also holds. This generalization of Fermat's theorem is known as the sum of two squares theorem.
History
Albert Girard was the first to make the observation, characterizing the positive integers (not necessarily primes) that are expressible as the sum of two squares of positive integers; this was published in 1625. The statement that every prime p of the form is the sum of two squares is sometimes called Girard's theorem. For his part, Fermat wrote an elaborate version of the statement (in which he also gave the number of possible expressions of the powers of p as a sum of two squares) in a letter to Marin Mersenne dated December 25, 1640: for this reason this version of the theorem is sometimes called Fermat's Christmas theorem.
Gaussian primes
Fermat's theorem on sums of two squares is strongly related with the theory of Gaussian primes.
A Gaussian integer is a complex number such that and are integers. The norm of a Gaussian integer is an integer equal to the square of the absolute value of the Gaussian integer. The norm of a product of Gaussian integers is the product of their norms. This is the Diophantus identity, which results immediately from the similar property of the absolute value.
Gaussian integers form a principal ideal domain. This implies that Gaussian primes can be defined similarly as primes numbers, that is as those Gaussian integers that are not the product of two non-units (here the units are and ).
The multiplicative property of the norm implies that a prime number is either a Gaussian prime or the norm of a Gaussian prime. Fermat's theorem asserts that the first case occurs when $p \equiv 3 \pmod 4$, and that the second case occurs when $p \equiv 1 \pmod 4$ and when $p = 2$. The last case is not considered in Fermat's statement, but is trivial, as $2 = 1^2 + 1^2$ is the norm of the Gaussian prime $1 + i$.
Related results
The above point of view on Fermat's theorem is a special case of the theory of factorization of ideals in rings of quadratic integers. In summary, if is the ring of algebraic integers in the quadratic field, then an odd prime number , not dividing , is either a prime element in or the ideal norm of an ideal of which is necessarily prime. Moreover, the law of quadratic reciprocity allows distinguishing the two cases in terms of congruences. If is a principal ideal domain, then is an ideal norm if and only
with and both integers.
In a letter to Blaise Pascal dated September 25, 1654 Fermat announced the following two results that are essentially the special cases and If is an odd prime, then
Fermat wrote also:
If two primes which end in 3 or 7 and surpass by 3 a multiple of 4 are multiplied, then their product will be composed of a square and the quintuple of another square.
In other words, if are of the form or , then . Euler later extended this to the conjecture that
Both Fermat's assertion and Euler's conjecture were established by Joseph-Louis Lagrange. This more complicated formulation relies on the fact that is not a principal ideal domain, unlike and
Algorithm
There is a trivial algorithm for decomposing a prime $p$ of the form $4k + 1$ into a sum of two squares: for every $x$ with $1 \le x \le \sqrt{p}$, test whether the square root of $p - x^2$ is an integer. If this is the case, one has got the decomposition.
However the input size of the algorithm is the number of digits of (up to a constant factor that depends on the numeral base). The number of needed tests is of the order of and thus exponential in the input size. So the computational complexity of this algorithm is exponential.
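A minimal Python sketch of this trivial algorithm (exponential in the number of digits of $p$, as just noted); math.isqrt is used to test whether $p - x^2$ is a perfect square:

```python
import math

def two_squares_trivial(p):
    """Return (x, y) with x*x + y*y == p by brute force, or None."""
    for x in range(1, math.isqrt(p) + 1):
        y = math.isqrt(p - x * x)
        if y * y == p - x * x:
            return x, y
    return None

print(two_squares_trivial(97))  # e.g. (4, 9): 16 + 81 = 97
```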
A Las Vegas algorithm with a probabilistically polynomial complexity has been described by
Stan Wagon in 1990, based on work by Serret and Hermite (1848), and Cornacchia (1908).
The probabilistic part consists in finding a quadratic non-residue, which can be done with success probability and then iterated if not successful. Conditionally this can also be done in deterministic polynomial time if the generalized Riemann hypothesis holds as explained for the Tonelli–Shanks algorithm.
Description
Given an odd prime $p$ in the form $4k + 1$, first find $x$ such that $x^2 \equiv -1 \pmod p$.
This can be done by finding a quadratic non-residue modulo $p$, say $q$, and letting
$x = q^{(p-1)/4} \bmod p .$
Such an $x$ will satisfy the condition since quadratic non-residues satisfy $q^{(p-1)/2} \equiv -1 \pmod p$.
Once $x$ is determined, one can apply the Euclidean algorithm with $p$ and $x$. Denote the first two remainders that are less than the square root of $p$ as $a$ and $b$. Then it will be the case that $a^2 + b^2 = p$.
In the Euclidean algorithm, we have a sequence of remainders that end with the
greatest common divisor .
We compute these recursively with initial values :
We can define another sequence by the same recurrence, but with initial values , :
It turns out that the sequence is just the reverse of the sequence , up to signs.
Moreover, one can see using the recurrence that for all .
Square this equation and use to get .
From there we just need to find the and that are the right size so that .
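A hedged Python sketch of the procedure described above (the non-residue is found by trial, $x$ is computed as $q^{(p-1)/4} \bmod p$, and the Euclidean algorithm is run until the remainders drop below $\sqrt{p}$); the function name is illustrative:

```python
import math

def sum_of_two_squares(p):
    """Decompose a prime p ≡ 1 (mod 4) as a^2 + b^2, following the
    Serret/Hermite/Cornacchia-style procedure described above."""
    assert p % 4 == 1
    # Find a quadratic non-residue q modulo p by trial (the probabilistic step).
    q = 2
    while pow(q, (p - 1) // 2, p) != p - 1:
        q += 1
    # x satisfies x^2 ≡ -1 (mod p).
    x = pow(q, (p - 1) // 4, p)
    # Euclidean algorithm on p and x; stop at the first remainder below sqrt(p).
    a, b = p, x
    limit = math.isqrt(p)
    while b > limit:
        a, b = b, a % b
    return b, a % b   # the two remainders below sqrt(p)

print(sum_of_two_squares(97))  # (9, 4): 81 + 16 = 97
```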
Example
Take $p = 97$. A possible quadratic non-residue for 97 is 13, since $13^{(97-1)/2} \equiv -1 \pmod{97}$, so we let $x = 13^{(97-1)/4} \bmod 97 = 22$.
The Euclidean algorithm applied to 97 and 22 yields:
$97 = 4 \cdot 22 + 9, \quad 22 = 2 \cdot 9 + 4, \quad 9 = 2 \cdot 4 + 1, \quad 4 = 4 \cdot 1.$
The first two remainders smaller than the square root of 97 are 9 and 4; and indeed we have $97 = 9^2 + 4^2$, as expected.
Proofs
Fermat usually did not write down proofs of his claims, and he did not provide a proof of this statement. The first proof was found by Euler after much effort and is based on infinite descent. He announced it in two letters to Goldbach, on May 6, 1747 and on April 12, 1749; he published the detailed proof in two articles (between 1752 and 1755). Lagrange gave a proof in 1775 that was based on his study of quadratic forms. This proof was simplified by Gauss in his Disquisitiones Arithmeticae (art. 182). Dedekind gave at least two proofs based on the arithmetic of the Gaussian integers. There is an elegant proof using Minkowski's theorem about convex sets. Simplifying an earlier short proof due to Heath-Brown (who was inspired by Liouville's idea), Zagier presented a non-constructive one-sentence proof in 1990.
And more recently Christopher gave a partition-theoretic proof.
Euler's proof by infinite descent
Euler succeeded in proving Fermat's theorem on sums of two squares in 1749, when he was forty-two years old. He communicated this in a letter to Goldbach dated 12 April 1749. The proof relies on infinite descent, and is only briefly sketched in the letter. The full proof consists in five steps and is published in two papers. The first four steps are Propositions 1 to 4 of the first paper and do not correspond exactly to the four steps below. The fifth step below is from the second paper.
For the avoidance of ambiguity, zero will always be a valid possible constituent of "sums of two squares", so for example every square of an integer is trivially expressible as the sum of two squares by setting one of them to be zero.
1. The product of two numbers, each of which is a sum of two squares, is itself a sum of two squares.
This is a well-known property, based on the identity
$(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2$
due to Diophantus.
2. If a number which is a sum of two squares is divisible by a prime which is a sum of two squares, then the quotient is a sum of two squares.
(This is Euler's first Proposition).
Indeed, suppose for example that is divisible by and that this latter is a prime. Then divides
Since is a prime, it divides one of the two factors. Suppose that it divides . Since
(Diophantus's identity) it follows that must divide . So the equation can be divided by the square of . Dividing the expression by yields:
and thus expresses the quotient as a sum of two squares, as claimed.
On the other hand, if it divides the other factor, a similar argument holds by using the following variant of Diophantus's identity: $(a^2 + b^2)(c^2 + d^2) = (ac + bd)^2 + (ad - bc)^2 .$
3. If a number which can be written as a sum of two squares is divisible by a number which is not a sum of two squares, then the quotient has a factor which is not a sum of two squares. (This is Euler's second Proposition).
Suppose is a number not expressible as a sum of two squares, which divides . Write the quotient, factored into its (possibly repeated) prime factors, as so that . If all factors can be written as sums of two squares, then we can divide successively by , , etc., and applying step (2.) above we deduce that each successive, smaller, quotient is a sum of two squares. If we get all the way down to then itself would have to be equal to the sum of two squares, which is a contradiction. So at least one of the primes is not the sum of two squares.
4. If and are relatively prime positive integers then every factor of is a sum of two squares.
(This is the step that uses step (3.) to produce an 'infinite descent' and was Euler's Proposition 4. The proof sketched below also includes the proof of his Proposition 3).
Let be relatively prime positive integers: without loss of generality is not itself prime, otherwise there is nothing to prove. Let therefore be a proper factor of , not necessarily prime: we wish to show that is a sum of two squares. Again, we lose nothing by assuming since the case is obvious.
Let be non-negative integers such that are the closest multiples of (in absolute value) to respectively. Notice that the differences and are integers of absolute value strictly less than : indeed, when is even, gcd; otherwise since gcd, we would also have gcd.
Multiplying out we obtain
uniquely defining a non-negative integer . Since divides both ends of this equation sequence it follows that must also be divisible by : say . Let be the gcd of and which by the co-primeness of is relatively prime to . Thus divides , so writing , and , we obtain the expression for relatively prime and , and with , since
Now finally, the descent step: if is not the sum of two squares, then by step (3.) there must be a factor say of which is not the sum of two squares. But and so repeating these steps (initially with in place of , and so on ad infinitum) we shall be able to find a strictly decreasing infinite sequence of positive integers which are not themselves the sums of two squares but which divide into a sum of two relatively prime squares. Since such an infinite descent is impossible, we conclude that must be expressible as a sum of two squares, as claimed.
5. Every prime of the form is a sum of two squares.
(This is the main result of Euler's second paper).
If , then by Fermat's Little Theorem each of the numbers is congruent to one modulo . The differences are therefore all divisible by . Each of these differences can be factored as
Since is prime, it must divide one of the two factors. If in any of the cases it divides the first factor, then by the previous step we conclude that is itself a sum of two squares (since and differ by , they are relatively prime). So it is enough to show that cannot always divide the second factor. If it divides all differences , then it would divide all differences of successive terms, all differences of the differences, and so forth. Since the th differences of the sequence are all equal to (Finite difference), the th differences would all be constant and equal to , which is certainly not divisible by . Therefore, cannot divide all the second factors which proves that is indeed the sum of two squares.
Lagrange's proof through quadratic forms
Lagrange completed a proof in 1775 based on his general theory of integral quadratic forms. The following presentation incorporates a slight simplification of his argument, due to Gauss, which appears in article 182 of the Disquisitiones Arithmeticae.
An (integral binary) quadratic form is an expression of the form with integers. A number is said to be represented by the form if there exist integers such that . Fermat's theorem on sums of two squares is then equivalent to the statement that a prime is represented by the form (i.e., , ) exactly when is congruent to modulo .
The discriminant of the quadratic form is defined to be . The discriminant of is then equal to .
Two forms and are equivalent if and only if there exist substitutions with integer coefficients
with such that, when substituted into the first form, yield the second. Equivalent forms are readily seen to have the same discriminant, and hence also the same parity for the middle coefficient , which coincides with the parity of the discriminant. Moreover, it is clear that equivalent forms will represent exactly the same integers, because these kind of substitutions can be reversed by substitutions of the same kind.
Lagrange proved that all positive definite forms of discriminant −4 are equivalent. Thus, to prove Fermat's theorem it is enough to find any positive definite form of discriminant −4 that represents . For example, one can use a form
where the first coefficient a = was chosen so that the form represents by setting x = 1, and y = 0, the coefficient b = 2m is an arbitrary even number (as it must be, to get an even discriminant), and finally is chosen so that the discriminant is equal to −4, which guarantees that the form is indeed equivalent to . Of course, the coefficient must be an integer, so the problem is reduced to finding some integer m such that divides : or in other words, a 'square root of -1 modulo ' .
We claim such a square root of is given by . Firstly it follows from Euclid's Fundamental Theorem of Arithmetic that . Consequently, : that is, are their own inverses modulo and this property is unique to them. It then follows from the validity of Euclidean division in the integers, and the fact that is prime, that for every the gcd of and may be expressed via the Euclidean algorithm yielding a unique and distinct inverse of modulo . In particular therefore the product of all non-zero residues modulo is . Let : from what has just been observed, . But by definition, since each term in may be paired with its negative in , , which since is odd shows that , as required.
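Under the usual reading of this Wilson's-theorem argument, the claimed square root of −1 is $K = ((p-1)/2)!$ reduced modulo $p$ (that reading is an assumption here); it can be checked numerically with a minimal sketch:

```python
from math import factorial

def wilson_sqrt_minus_one(p):
    """K = ((p-1)/2)! satisfies K^2 ≡ -1 (mod p) for primes p ≡ 1 (mod 4)."""
    k = factorial((p - 1) // 2) % p
    return k, (k * k) % p == p - 1

for p in (5, 13, 17, 29, 97):
    print(p, wilson_sqrt_minus_one(p))   # each check prints True
```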
Dedekind's two proofs using Gaussian integers
Richard Dedekind gave at least two proofs of Fermat's theorem on sums of two squares, both using the arithmetical properties of the Gaussian integers, which are numbers of the form , where a and b are integers, and i is the square root of −1. One appears in section 27 of his exposition of ideals published in 1877; the second appeared in Supplement XI to Peter Gustav Lejeune Dirichlet's Vorlesungen über Zahlentheorie, and was published in 1894.
1. First proof. If is an odd prime number, then we have in the Gaussian integers. Consequently, writing a Gaussian integer with and applying the Frobenius automorphism in Z[i]/(p), one finds
since the automorphism fixes the elements of Z/(p). In the current case, for some integer n, and so in the above expression for ωp, the exponent of −1 is even. Hence the right hand side equals ω, so in this case the Frobenius endomorphism of Z[i]/(p) is the identity.
Kummer had already established that if is the order of the Frobenius automorphism of Z[i]/(p), then the ideal in Z[i] would be a product of 2/f distinct prime ideals. (In fact, Kummer had established a much more general result for any extension of Z obtained by adjoining a primitive m-th root of unity, where m was any positive integer; this is the case of that result.) Therefore, the ideal (p) is the product of two different prime ideals in Z[i]. Since the Gaussian integers are a Euclidean domain for the norm function , every ideal is principal and generated by a nonzero element of the ideal of minimal norm. Since the norm is multiplicative, the norm of a generator of one of the ideal factors of (p) must be a strict divisor of , so that we must have , which gives Fermat's theorem.
2. Second proof. This proof builds on Lagrange's result that if is a prime number, then there must be an integer m such that is divisible by p (we can also see this by Euler's criterion); it also uses the fact that the Gaussian integers are a unique factorization domain (because they are a Euclidean domain). Since does not divide either of the Gaussian integers and (as it does not divide their imaginary parts), but it does divide their product , it follows that cannot be a prime element in the Gaussian integers. We must therefore have a nontrivial factorization of p in the Gaussian integers, which in view of the norm can have only two factors (since the norm is multiplicative, and , there can only be up to two factors of p), so it must be of the form for some integers and . This immediately yields that .
Proof by Minkowski's Theorem
For congruent to mod a prime, is a quadratic residue mod by Euler's criterion. Therefore, there exists an integer such that divides . Let be the standard basis elements for the vector space and set and . Consider the lattice . If then . Thus divides for any .
The area of the fundamental parallelogram of the lattice is . The area of the open disk, , of radius centered around the origin is . Furthermore, is convex and symmetrical about the origin. Therefore, by Minkowski's theorem there exists a nonzero vector such that . Both and so . Hence is the sum of the squares of the components of .
Zagier's "one-sentence proof"
Let $p \equiv 1 \pmod 4$ be prime, let $\mathbb{N}$ denote the natural numbers (with or without zero), and consider the finite set $S = \{(x, y, z) \in \mathbb{N}^3 : x^2 + 4yz = p\}$ of triples of numbers.
Then $S$ has two involutions: an obvious one $(x, y, z) \mapsto (x, z, y)$, whose fixed points $(x, y, y)$ correspond to representations of $p$ as a sum of two squares, and a more complicated one,
$(x, y, z) \mapsto \begin{cases} (x + 2z,\ z,\ y - x - z) & \text{if } x < y - z, \\ (2y - x,\ y,\ x - y + z) & \text{if } y - z < x < 2y, \\ (x - 2y,\ x - y + z,\ y) & \text{if } x > 2y, \end{cases}$
which has exactly one fixed point, $(1, 1, (p-1)/4)$. This proves that the cardinality of $S$ is odd. Hence, $S$ has also a fixed point with respect to the obvious involution.
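For small primes the set $S$ can be enumerated directly and a fixed point of the obvious involution located, which yields a representation $p = x^2 + (2y)^2$; a brute-force Python sketch (the search strategy and function name are illustrative, not part of Zagier's proof):

```python
import math

def two_squares_via_fixed_point(p):
    """Find (x, 2y) with x^2 + (2y)^2 == p for a prime p ≡ 1 (mod 4), by
    searching S = {(x, y, z) in N^3 : x^2 + 4yz = p} for a fixed point
    (y == z) of the obvious involution (x, y, z) -> (x, z, y)."""
    for x in range(1, math.isqrt(p) + 1):
        r = p - x * x
        if r <= 0 or r % 4:
            continue
        m = r // 4                      # need y*z == m with y == z
        for y in range(1, m + 1):
            if m % y == 0 and m // y == y:
                return x, 2 * y         # fixed point: p = x^2 + (2y)^2
    return None

print(two_squares_via_fixed_point(13))   # (3, 2): 9 + 4 = 13
print(two_squares_via_fixed_point(97))   # (9, 4): 81 + 16 = 97
```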
This proof, due to Zagier, is a simplification of an earlier proof by Heath-Brown, which in turn was inspired by a proof of Liouville. The technique of the proof is a combinatorial analogue of the topological principle that the Euler characteristics of a topological space with an involution and of its fixed-point set have the same parity and is reminiscent of the use of sign-reversing involutions in the proofs of combinatorial bijections.
This proof is equivalent to a geometric or "visual" proof using "windmill" figures, given by Alexander Spivak in 2006 and described in this MathOverflow post by Moritz Firsching and this YouTube video by Mathologer.
Proof with partition theory
In 2016, A. David Christopher gave a partition-theoretic proof by considering partitions of the odd prime having exactly two sizes , each occurring exactly times, and by showing that at least one such partition exists if is congruent to 1 modulo 4.
See also
Legendre's three-square theorem
Lagrange's four-square theorem
Landau–Ramanujan constant
Thue's lemma
Friedlander–Iwaniec theorem
References
*Richard Dedekind, The theory of algebraic integers.
L. E. Dickson. History of the Theory of Numbers Vol. 2. Chelsea Publishing Co., New York 1920
Harold M. Edwards, Fermat's Last Theorem. A genetic introduction to algebraic number theory. Graduate Texts in Mathematics no. 50, Springer-Verlag, NY, 1977.
C. F. Gauss, Disquisitiones Arithmeticae (English Edition). Transl. by Arthur A. Clarke. Springer-Verlag, 1986.
D. R. Heath-Brown, Fermat's two squares theorem. Invariant, 11 (1984) pp. 3–5.
John Stillwell, Introduction to Theory of Algebraic Integers by Richard Dedekind. Cambridge Mathematical Library, Cambridge University Press, 1996.
Don Zagier, A one-sentence proof that every prime p ≡ 1 mod 4 is a sum of two squares. Amer. Math. Monthly 97 (1990), no. 2, 144,
Notes
External links
Two more proofs at PlanetMath.org
Fermat's two squares theorem, D. R. Heath-Brown, 1984.
Polster, Burkard (2019) "Fermat's Christmas theorem: Visualising the hidden circle in π/4 = 1 − 1/3 + 1/5 − 1/7 + ..." (Video). Mathologer.
Additive number theory
Squares in number theory
Theorems in number theory | Fermat's theorem on sums of two squares | [
"Mathematics"
] | 4,801 | [
"Theorems in number theory",
"Mathematical problems",
"Mathematical theorems",
"Squares in number theory",
"Number theory"
] |
253,251 | https://en.wikipedia.org/wiki/Dimension%20of%20an%20algebraic%20variety | In mathematics and specifically in algebraic geometry, the dimension of an algebraic variety may be defined in various equivalent ways.
Some of these definitions are of geometric nature, while some other are purely algebraic and rely on commutative algebra. Some are restricted to algebraic varieties while others apply also to any algebraic set. Some are intrinsic, as independent of any embedding of the variety into an affine or projective space, while other are related to such an embedding.
Dimension of an affine algebraic set
Let be a field, and be an algebraically closed extension.
An affine algebraic set is the set of the common zeros in of the elements of an ideal in a polynomial ring Let be the K-algebra of the polynomial functions over . The dimension of is any of the following integers. It does not change if is enlarged, if is replaced by another algebraically closed extension of and if is replaced by another ideal having the same zeros (that is having the same radical). The dimension is also independent of the choice of coordinates; in other words it does not change if the are replaced by linearly independent linear combinations of them.
The dimension of is
The maximal length of the chains of distinct nonempty (irreducible) subvarieties of .
This definition generalizes a property of the dimension of a Euclidean space or a vector space. It is thus probably the definition that gives the easiest intuitive description of the notion.
The Krull dimension of the coordinate ring .
This is the transcription of the preceding definition in the language of commutative algebra, the Krull dimension being the maximal length of the chains of prime ideals of .
The maximal Krull dimension of the local rings at the points of .
This definition shows that the dimension is a local property if is irreducible. If is irreducible, it turns out that all the local rings at points of have the same Krull dimension (see ); thus:
If is a variety, the Krull dimension of the local ring at any point of
This rephrases the previous definition into a more geometric language.
The maximal dimension of the tangent vector spaces at the non singular points of .
This relates the dimension of a variety to that of a differentiable manifold. More precisely, if the variety is defined over the reals, then the set of its real regular points, if it is not empty, is a differentiable manifold that has the same dimension as a variety and as a manifold.
If is a variety, the dimension of the tangent vector space at any non singular point of .
This is the algebraic analogue to the fact that a connected manifold has a constant dimension. This can also be deduced from the result stated below the third definition, and the fact that the dimension of the tangent space is equal to the Krull dimension at any non-singular point (see Zariski tangent space).
The number of hyperplanes or hypersurfaces in general position which are needed to have an intersection with which is reduced to a nonzero finite number of points.
This definition is not intrinsic, as it applies only to algebraic sets that are explicitly embedded in an affine or projective space.
The maximal length of a regular sequence in the coordinate ring .
This is the algebraic translation of the preceding definition.
The difference between and the maximal length of the regular sequences contained in .
This is the algebraic translation of the fact that the intersection of general hypersurfaces is an algebraic set of dimension .
The degree of the Hilbert polynomial of .
The degree of the denominator of the Hilbert series of .
This allows, through a Gröbner basis computation to compute the dimension of the algebraic set defined by a given system of polynomial equations. Moreover, the dimension is not changed if the polynomials of the Gröbner basis are replaced with their leading monomials, and if these leading monomials are replaced with their radical (monomials obtained by removing exponents). So:
The Krull dimension of the Stanley–Reisner ring where is the radical of the initial ideal of for any admissible monomial ordering (the initial ideal of is the set of all leading monomials of elements of ).
The dimension of the simplicial complex defined by this Stanley–Reisner ring.
If is a prime ideal (i.e. is an algebraic variety), the transcendence degree over of the field of fractions of .
This makes it easy to prove that the dimension is invariant under birational equivalence.
Dimension of a projective algebraic set
Let be a projective algebraic set defined as the set of the common zeros of a homogeneous ideal in a polynomial ring over a field , and let be the graded algebra of the polynomials over V.
All the definitions of the previous section apply, with the change that, when or appear explicitly in the definition, the value of the dimension must be reduced by one. For example, the dimension of is one less than the Krull dimension of .
Computation of the dimension
Given a system of polynomial equations over an algebraically closed field , it may be difficult to compute the dimension of the algebraic set that it defines.
Without further information on the system, there is only one practical method, which consists of computing a Gröbner basis and deducing the degree of the denominator of the Hilbert series of the ideal generated by the equations.
The second step, which is usually the fastest, may be accelerated in the following way: Firstly, the Gröbner basis is replaced by the list of its leading monomials (this is already done for the computation of the Hilbert series). Then each monomial like is replaced by the product of the variables in it: Then the dimension is the maximal size of a subset S of the variables, such that none of these products of variables depends only on the variables in S.
This algorithm is implemented in several computer algebra systems. For example in Maple, this is the function Groebner[HilbertDimension], and in Macaulay2, this is the function dim.
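A hedged Python sketch of this combinatorial step, taking as input the variables and the supports (sets of variables) of the radicalized leading monomials of a Gröbner basis; the brute-force subset search and all names are illustrative:

```python
from itertools import combinations

def dimension_from_leading_terms(variables, supports):
    """Dimension of the algebraic set, given the radicalized leading monomials
    of a Groebner basis as sets of variables (supports): the maximal size of a
    subset S of the variables that contains no support entirely."""
    for size in range(len(variables), -1, -1):
        for subset in combinations(variables, size):
            S = set(subset)
            if not any(supp <= S for supp in supports):
                return size
    return -1   # 1 is a leading monomial, i.e. the algebraic set is empty

# Illustrative example: leading monomials x^2*y and y*z^3 in variables x, y, z
# radicalize to supports {x, y} and {y, z}; the dimension is 2 (take S = {x, z}).
print(dimension_from_leading_terms(['x', 'y', 'z'],
                                   [{'x', 'y'}, {'y', 'z'}]))  # 2
```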
Real dimension
The real dimension of a set of real points, typically a semialgebraic set, is the dimension of its Zariski closure. For a semialgebraic set , the real dimension is one of the following equal integers:
The real dimension of is the dimension of its Zariski closure.
The real dimension of is the maximal integer such that there is a homeomorphism of in .
The real dimension of is the maximal integer such that there is a projection of over a -dimensional subspace with a non-empty interior.
For an algebraic set defined over the reals (that is, defined by polynomials with real coefficients), it may occur that the real dimension of the set of its real points is smaller than its dimension as a semialgebraic set. For example, the algebraic surface of equation $x^2 + y^2 + z^2 = 0$ is an algebraic variety of dimension two, which has only one real point (0, 0, 0), and thus has real dimension zero.
The real dimension is more difficult to compute than the algebraic dimension.
For the case of a real hypersurface (that is the set of real solutions of a single polynomial equation), there exists a probabilistic algorithm to compute its real dimension.
See also
Dimension (vector space)
Dimension theory (algebra)
Dimension of a scheme
References
Algebraic varieties
Dimension
Computer algebra | Dimension of an algebraic variety | [
"Physics",
"Mathematics",
"Technology"
] | 1,497 | [
"Geometric measurement",
"Physical quantities",
"Computer algebra",
"Computational mathematics",
"Computer science",
"Theory of relativity",
"Dimension",
"Algebra"
] |
253,272 | https://en.wikipedia.org/wiki/Michaelis%E2%80%93Menten%20kinetics | In biochemistry, Michaelis–Menten kinetics, named after Leonor Michaelis and Maud Menten, is the simplest case of enzyme kinetics, applied to enzyme-catalysed reactions of one substrate and one product. It takes the form of a differential equation describing the reaction rate (rate of formation of product P, with concentration ) to , the concentration of the substrate A (using the symbols recommended by the IUBMB). Its formula is given by the Michaelis–Menten equation:
$v = \frac{V a}{K_\mathrm{m} + a}.$
$V$, which is often written as $V_\mathrm{max}$, represents the limiting rate approached by the system at saturating substrate concentration for a given enzyme concentration. The Michaelis constant $K_\mathrm{m}$ is defined as the concentration of substrate at which the reaction rate is half of $V$. Biochemical reactions involving a single substrate are often assumed to follow Michaelis–Menten kinetics, without regard to the model's underlying assumptions. Only a small proportion of enzyme-catalysed reactions have just one substrate, but the equation still often applies if only one substrate concentration is varied.
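A minimal numerical sketch of the equation just given (the parameter values are illustrative only):

```python
def michaelis_menten_rate(a, V, K_m):
    """Rate v = V*a / (K_m + a) for substrate concentration a."""
    return V * a / (K_m + a)

V, K_m = 10.0, 0.5   # illustrative values, e.g. µmol/s and mM
for a in (0.05, 0.5, 5.0, 50.0):
    print(f"a = {a:5.2f} mM  ->  v = {michaelis_menten_rate(a, V, K_m):5.2f}")
# At a = K_m the rate is exactly V/2; at large a it approaches V.
```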
"Michaelis–Menten plot"
The plot of $v$ against $a$ has often been called a "Michaelis–Menten plot", even recently, but this is misleading, because Michaelis and Menten did not use such a plot. Instead, they plotted $v$ against $\log a$, which has some advantages over the usual ways of plotting Michaelis–Menten data. It has $v$ as the dependent variable, and thus does not distort the experimental errors in $v$. Michaelis and Menten did not attempt to estimate $V$ directly from the limit approached at high $a$, something difficult to do accurately with data obtained with modern techniques, and almost impossible with their data. Instead they took advantage of the fact that the curve is almost straight in the middle range and has a maximum slope of $0.576\,V$, i.e. $\ln(10)\,V/4$. With an accurate value of $V$ it was easy to determine $K_\mathrm{m}$ from the point on the curve corresponding to $v = V/2$.
This plot is virtually never used today for estimating and , but it remains of major interest because it has another valuable property: it allows the properties of isoenzymes catalysing the same reaction, but active in very different ranges of substrate concentration, to be compared on a single plot. For example, the four mammalian isoenzymes of hexokinase are half-saturated by glucose at concentrations ranging from about 0.02 mM for hexokinase A (brain hexokinase) to about 50 mM for hexokinase D ("glucokinase", liver hexokinase), more than a 2000-fold range. It would be impossible to show a kinetic comparison between the four isoenzymes on one of the usual plots, but it is easily done on a semi-logarithmic plot.
Model
A decade before Michaelis and Menten, Victor Henri found that enzyme reactions could be explained by assuming a binding interaction between the enzyme and the substrate. His work was taken up by Michaelis and Menten, who investigated the kinetics of invertase, an enzyme that catalyzes the hydrolysis of sucrose into glucose and fructose. In 1913 they proposed a mathematical model of the reaction. It involves an enzyme E binding to a substrate A to form a complex EA that releases a product P regenerating the original form of the enzyme. This may be represented schematically as
$\mathrm{E} + \mathrm{A} \;\underset{k_{-1}}{\overset{k_{+1}}{\rightleftharpoons}}\; \mathrm{EA} \;\xrightarrow{k_\mathrm{cat}}\; \mathrm{E} + \mathrm{P}$
where (forward rate constant), (reverse rate constant), and (catalytic rate constant) denote the rate constants, the double arrows between A (substrate) and EA (enzyme-substrate complex) represent the fact that enzyme-substrate binding is a reversible process, and the single forward arrow represents the formation of P (product).
Under certain assumptions – such as the enzyme concentration being much less than the substrate concentration – the rate of product formation is given by
$v = \frac{\mathrm{d}p}{\mathrm{d}t} = \frac{k_\mathrm{cat}\, e_0\, a}{K_\mathrm{m} + a},$
in which $e_0$ is the initial enzyme concentration. The reaction order depends on the relative size of the two terms in the denominator. At low substrate concentration ($a \ll K_\mathrm{m}$), $v \approx (k_\mathrm{cat} e_0 / K_\mathrm{m})\, a$, so that the rate varies linearly with substrate concentration (first-order kinetics in $a$). However at higher $a$, with $a \gg K_\mathrm{m}$, the reaction approaches independence of $a$ (zero-order kinetics in $a$), asymptotically approaching the limiting rate $V = k_\mathrm{cat}\, e_0$. This rate, which is never attained, refers to the hypothetical case in which all enzyme molecules are bound to substrate. $k_\mathrm{cat}$, known as the turnover number or catalytic constant, normally expressed in s$^{-1}$, is the limiting number of substrate molecules converted to product per enzyme molecule per unit of time. Further addition of substrate would not increase the rate, and the enzyme is said to be saturated.
The Michaelis constant is not affected by the concentration or purity of an enzyme. Its value depends both on the identity of the enzyme and that of the substrate, as well as conditions such as temperature and pH.
The model is used in a variety of biochemical situations other than enzyme-substrate interaction, including antigen–antibody binding, DNA–DNA hybridization, and protein–protein interaction. It can be used to characterize a generic biochemical reaction, in the same way that the Langmuir equation can be used to model generic adsorption of biomolecular species. When an empirical equation of this form is applied to microbial growth, it is sometimes called a Monod equation.
Michaelis–Menten kinetics have also been applied to a variety of topics outside of biochemical reactions, including alveolar clearance of dusts, the richness of species pools, clearance of blood alcohol, the photosynthesis-irradiance relationship, and bacterial phage infection.
The equation can also be used to describe the relationship between ion channel conductivity and ligand concentration, and also, for example, the relationship between limiting nutrients and phytoplankton growth in the global ocean.
Specificity
The specificity constant (also known as the catalytic efficiency) is a measure of how efficiently an enzyme converts a substrate into product. Although it is the ratio of and , it is a parameter in its own right, more fundamental than . Diffusion-limited enzymes, such as fumarase, work at the theoretical upper limit of , limited by diffusion of substrate into the active site.
If we symbolize the specificity constant for a particular substrate A as the Michaelis–Menten equation can be written in terms of and as follows:
At small values of the substrate concentration this approximates to a first-order dependence of the rate on the substrate concentration:
Conversely it approaches a zero-order dependence on when the substrate concentration is high:
The capacity of an enzyme to distinguish between two competing substrates that both follow Michaelis–Menten kinetics depends only on the specificity constant, and not on either or alone. Putting for substrate and for a competing substrate , then the two rates when both are present simultaneously are as follows:
Although both denominators contain the Michaelis constants, they are identical, and thus cancel when one equation is divided by the other:
and so the ratio of rates depends only on the concentrations of the two substrates and their specificity constants.
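A minimal sketch of the algebra behind this conclusion, writing k_A = k_cat,A/K_m,A and k_B = k_cat,B/K_m,B for the two specificity constants and a, b for the substrate concentrations (notation assumed, since the original symbols were stripped):

    % Competition between two substrates obeying Michaelis-Menten kinetics
    % (sketch; symbol names are assumptions)
    v_A = \frac{k_A e_0 a}{1 + a/K_{\mathrm{m}A} + b/K_{\mathrm{m}B}}, \qquad
    v_B = \frac{k_B e_0 b}{1 + a/K_{\mathrm{m}A} + b/K_{\mathrm{m}B}}
    \quad\Longrightarrow\quad
    \frac{v_A}{v_B} = \frac{k_A a}{k_B b}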
Nomenclature
As the equation originated with Henri, not with Michaelis and Menten, it is more accurate to call it the Henri–Michaelis–Menten equation, though it was Michaelis and Menten who realized that analysing reactions in terms of initial rates would be simpler, and as a result more productive, than analysing the time course of reaction, as Henri had attempted. Although Henri derived the equation he made no attempt to apply it. In addition, Michaelis and Menten understood the need for buffers to control the pH, but Henri did not.
Applications
Parameter values vary widely between enzymes.
Derivation
Equilibrium approximation
In their analysis, Michaelis and Menten (and also Henri) assumed that the substrate is in instantaneous chemical equilibrium with the complex, which implies
in which e is the concentration of free enzyme (not the total concentration) and x is the concentration of enzyme-substrate complex EA.
Conservation of enzyme requires that
where is now the total enzyme concentration. After combining the two expressions some straightforward algebra leads to the following expression for the concentration of the enzyme-substrate complex:
where is the dissociation constant of the enzyme-substrate complex. Hence the rate equation is the Michaelis–Menten equation,
where corresponds to the catalytic constant and the limiting rate is . Likewise with the assumption of equilibrium the Michaelis constant .
Irreversible first step
When studying urease at about the same time as Michaelis and Menten were studying invertase, Donald Van Slyke and G. E. Cullen made essentially the opposite assumption, treating the first step not as an equilibrium but as an irreversible second-order reaction with rate constant . As their approach is never used today it is sufficient to give their final rate equation:
and to note that it is functionally indistinguishable from the Henri–Michaelis–Menten equation. One cannot tell from inspection of the kinetic behaviour whether is equal to or to or to something else.
Steady-state approximation
G. E. Briggs and J. B. S. Haldane undertook an analysis that harmonized the approaches of Michaelis and Menten and of Van Slyke and Cullen, and is taken as the basic approach to enzyme kinetics today. They assumed that the concentration of the intermediate complex does not change on the time scale over which product formation is measured. This assumption means that . The resulting rate equation is as follows:
where
This is the generalized definition of the Michaelis constant.
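A brief sketch of the Briggs–Haldane algebra, using x for the concentration of the EA complex, e_0 for the total enzyme concentration and a for the substrate concentration (notation assumed):

    % Steady-state (Briggs-Haldane) derivation; a sketch with assumed symbols
    \frac{dx}{dt} = k_{+1}(e_0 - x)\,a - (k_{-1} + k_\mathrm{cat})\,x = 0
    \;\Longrightarrow\;
    x = \frac{e_0 a}{K_\mathrm{m} + a},
    \qquad
    K_\mathrm{m} = \frac{k_{-1} + k_\mathrm{cat}}{k_{+1}},
    \qquad
    v = k_\mathrm{cat}\,x = \frac{k_\mathrm{cat} e_0 a}{K_\mathrm{m} + a}

Setting k_cat much smaller than k_−1 in this expression recovers the equilibrium (Michaelis–Menten) case, while neglecting k_−1 instead corresponds to the Van Slyke–Cullen case.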
Assumptions and limitations
All of the derivations given treat the initial binding step in terms of the law of mass action, which assumes free diffusion through the solution. However, in the environment of a living cell, where there is a high concentration of proteins, the cytoplasm often behaves more like a viscous gel than a free-flowing liquid, limiting molecular movement by diffusion and altering reaction rates. Note, however, that although this gel-like structure severely restricts large molecules like proteins, its effect on small molecules, like many of the metabolites that participate in central metabolism, is very much smaller. In practice, therefore, treating the movement of substrates in terms of diffusion is not likely to produce major errors. Nonetheless, Schnell and Turner consider that it is more appropriate to model the cytoplasm as a fractal, in order to capture its limited-mobility kinetics.
Estimation of Michaelis–Menten parameters
Graphical methods
Determining the parameters of the Michaelis–Menten equation typically involves running a series of enzyme assays at varying substrate concentrations , and measuring the initial reaction rates , i.e. the reaction rates are measured after a time period short enough for it to be assumed that the enzyme-substrate complex has formed, but that the substrate concentration remains almost constant, and so the equilibrium or quasi-steady-state approximation remains valid. By plotting reaction rate against concentration, and using nonlinear regression of the Michaelis–Menten equation with correct weighting based on known error distribution properties of the rates, the parameters may be obtained.
Before computing facilities to perform nonlinear regression became available, graphical methods involving linearisation of the equation were used. A number of these were proposed, including the Eadie–Hofstee plot of against , the Hanes plot of against , and the Lineweaver–Burk plot (also known as the double-reciprocal plot) of against . Of these, the Hanes plot is the most accurate when is subject to errors with uniform standard deviation. From the point of view of visualizing the data, the Eadie–Hofstee plot has an important property: the entire possible range of values from to occupies a finite range of ordinate scale, making it impossible to choose axes that conceal a poor experimental design.
However, while useful for visualization, all three linear plots distort the error structure of the data and provide less precise estimates of and than correctly weighted non-linear regression. Assuming an error on , an inverse representation leads to an error of on (by propagation of uncertainty), implying that linear regression of the double-reciprocal plot should include weights of . This was well understood by Lineweaver and Burk, who had consulted the eminent statistician W. Edwards Deming before analysing their data. Unlike nearly all workers since, Burk made an experimental study of the error distribution, finding it consistent with a uniform standard error in , before deciding on the appropriate weights. This aspect of the work of Lineweaver and Burk received virtually no attention at the time, and was subsequently forgotten.
The direct linear plot is a graphical method in which the observations are represented by straight lines in parameter space, with axes and : each line is drawn with an intercept of on the axis and on the axis. The point of intersection of the lines for different observations yields the values of and .
Weighting
Many authors, for example Greco and Hakala, have claimed that non-linear regression is always superior to regression of the linear forms of the Michaelis–Menten equation. However, that is correct only if the appropriate weighting scheme is used, preferably on the basis of experimental investigation, something that is almost never done. As noted above, Burk carried out the appropriate investigation, and found that the error structure of his data was consistent with a uniform standard deviation in . More recent studies found that a uniform coefficient of variation (standard deviation expressed as a percentage) was closer to the truth with the techniques in use in the 1970s. However, this truth may be more complicated than any dependence on alone can represent.
Uniform standard deviation of . If the rates are considered to have a uniform standard deviation the appropriate weight for every value for non-linear regression is 1. If the double-reciprocal plot is used each value of should have a weight of , whereas if the Hanes plot is used each value of should have a weight of .
Uniform coefficient of variation of . If the rates are considered to have a uniform coefficient of variation the appropriate weight for every value for non-linear regression is . If the double-reciprocal plot is used each value of should have a weight of , whereas if the Hanes plot is used each value of should have a weight of .
Ideally the in each of these cases should be the true value, but that is always unknown. However, after a preliminary estimation one can use the calculated values for refining the estimation. In practice the error structure of enzyme kinetic data is very rarely investigated experimentally, and is therefore almost never known, but simply assumed. It is, however, possible to form an impression of the error structure from internal evidence in the data. This is tedious to do by hand, but can readily be done by computer.
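These weighting schemes map directly onto the sigma argument of a standard non-linear least-squares routine. The sketch below is illustrative only: the data values are invented, and the use of SciPy's curve_fit is an assumption on our part, not something prescribed by the text.

    import numpy as np
    from scipy.optimize import curve_fit

    def mm(a, V, Km):
        """Michaelis-Menten rate law v = V*a/(Km + a)."""
        return V * a / (Km + a)

    # Hypothetical initial-rate data (substrate concentrations and rates)
    a = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
    v = np.array([0.9, 1.6, 2.5, 3.6, 4.2, 4.6])

    # Uniform coefficient of variation: standard deviation proportional to v,
    # so sigma is taken proportional to the observed rates.
    popt, pcov = curve_fit(mm, a, v, p0=(5.0, 2.0), sigma=v)
    V_fit, Km_fit = popt
    print(f"V = {V_fit:.2f}, Km = {Km_fit:.2f}")

For a uniform standard deviation in v one would simply omit sigma (equal weights); as noted above, after a preliminary fit the calculated rates can be substituted for the observed ones when refining the weights.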
Closed form equation
Santiago Schnell and Claudio Mendoza suggested a closed-form solution for the time-course kinetic analysis of Michaelis–Menten kinetics, based on the Lambert W function.
Namely,
where W is the Lambert W function and
The above equation, known nowadays as the Schnell-Mendoza equation, has been used to estimate and from time course data.
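A commonly quoted form of this closed-form solution, written for the substrate concentration a(t) with initial value a_0, limiting rate V and Michaelis constant K_m, is sketched below; this rendering is taken from the standard literature on the Lambert-W solution rather than reconstructed from the text, so the notation is an assumption.

    % Schnell-Mendoza closed-form time course (sketch; notation assumed)
    \frac{a(t)}{K_\mathrm{m}} = W\!\left( \frac{a_0}{K_\mathrm{m}}
      \exp\!\left( \frac{a_0 - V t}{K_\mathrm{m}} \right) \right)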
Reactions with more than one substrate
Only a small minority of enzyme-catalysed reactions have just one substrate, and even if the number is increased by treating two-substrate reactions in which one substrate is water as one-substrate reactions, the number is still small. One might accordingly suppose that the Michaelis–Menten equation, normally written with just one substrate, is of limited usefulness. This supposition is misleading, however. One of the common equations for a two-substrate reaction can be written as follows to express in terms of two substrate concentrations and :
the other symbols represent kinetic constants. Suppose now that is varied with held constant. Then it is convenient to reorganize the equation as follows:
This has exactly the form of the Michaelis–Menten equation
with apparent values and defined as follows:
Linear inhibition
The linear (simple) types of inhibition can be classified in terms of the general equation for mixed inhibition at an inhibitor concentration :
in which is the competitive inhibition constant and is the uncompetitive inhibition constant. This equation includes the other types of inhibition as special cases:
If the second parenthesis in the denominator approaches and the resulting behaviour is competitive inhibition.
If the first parenthesis in the denominator approaches and the resulting behaviour is uncompetitive inhibition.
If both and are finite the behaviour is mixed inhibition.
If the resulting special case is pure non-competitive inhibition.
Pure non-competitive inhibition is very rare, being mainly confined to effects of protons and some metal ions. Cleland recognized this, and he redefined noncompetitive to mean mixed. Some authors have followed him in this respect, but not all, so when reading any publication one needs to check what definition the authors are using.
In all cases the kinetic equations have the form of the Michaelis–Menten equation with apparent constants, as can be seen by writing the equation above as follows:
with apparent values and defined as follows:
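A sketch of the general mixed-inhibition rate law and the apparent parameters it implies, using i for the inhibitor concentration and K_ic, K_iu for the competitive and uncompetitive inhibition constants (the equations themselves were lost from the text, so this rendering and its symbols are assumptions consistent with the description above):

    % Mixed (linear) inhibition written in Michaelis-Menten form with apparent
    % constants (sketch; symbols are assumptions)
    v = \frac{V a}{K_\mathrm{m}\left(1 + i/K_\mathrm{ic}\right) + a\left(1 + i/K_\mathrm{iu}\right)}
      = \frac{V^\mathrm{app} a}{K_\mathrm{m}^\mathrm{app} + a},
    \qquad
    V^\mathrm{app} = \frac{V}{1 + i/K_\mathrm{iu}},
    \qquad
    K_\mathrm{m}^\mathrm{app} = K_\mathrm{m}\,\frac{1 + i/K_\mathrm{ic}}{1 + i/K_\mathrm{iu}}

The special cases listed above follow directly: letting K_iu become very large leaves only the competitive factor, letting K_ic become very large leaves only the uncompetitive factor, and setting K_ic equal to K_iu gives pure non-competitive inhibition.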
See also
Direct linear plot
Eadie–Hofstee plot
Enzyme kinetics
Functional response (ecology)
Gompertz function
Hanes plot
Hill equation
Hill contribution to Langmuir equation
Langmuir adsorption model (equation with the same mathematical form)
Lineweaver–Burk plot
Monod equation (equation with the same mathematical form)
Reaction progress kinetic analysis
Reversible Michaelis–Menten kinetics
Steady state
Victor Henri, who first wrote the general equation form in 1901
Von Bertalanffy function
References
External links
Online Vmax calculator (ic50.tk/kmvmax.html) based on the C programming language and the non-linear least-squares Levenberg–Marquardt algorithm of gnuplot
Alternative online calculator (ic50.org/kmvmax.html) based on Python, NumPy, Matplotlib and the non-linear least-squares Levenberg–Marquardt algorithm of SciPy
Further reading
Enzyme kinetics
Chemical kinetics
Ordinary differential equations
Catalysis | Michaelis–Menten kinetics | [
"Chemistry"
] | 3,730 | [
"Catalysis",
"Chemical kinetics",
"Chemical reaction engineering",
"Enzyme kinetics"
] |
253,418 | https://en.wikipedia.org/wiki/DNA%20computing | DNA computing is an emerging branch of unconventional computing which uses DNA, biochemistry, and molecular biology hardware, instead of the traditional electronic computing. Research and development in this area concerns theory, experiments, and applications of DNA computing. Although the field originally started with the demonstration of a computing application by Len Adleman in 1994, it has now been expanded to several other avenues such as the development of storage technologies, nanoscale imaging modalities, synthetic controllers and reaction networks, etc.
History
Leonard Adleman of the University of Southern California initially developed this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have occurred and various Turing machines have been proven to be constructible.
Since then the field has expanded into several avenues. In 1995, the idea for DNA-based memory was proposed by Eric Baum who conjectured that a vast amount of data can be stored in a tiny amount of DNA due to its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology although the in vitro demonstrations were made after almost a decade.
The field of DNA computing can be categorized as a sub-field of the broader DNA nanoscience field started by Ned Seeman about a decade before Len Adleman's demonstration. Seeman's original idea in the 1980s was to build arbitrary structures using bottom-up DNA self-assembly for applications in crystallography. However, it morphed into the field of structural DNA self-assembly, which as of 2020 is extremely sophisticated. Self-assembled structures ranging from a few nanometers up to several tens of micrometers in size had been demonstrated by 2018.
In 1994, Prof. Seeman's group demonstrated early DNA lattice structures using a small set of DNA components. While the demonstration by Adleman showed the possibility of DNA-based computers, the DNA design was trivial because, as the number of nodes in a graph grows, the number of DNA components required in Adleman's implementation would grow exponentially. Therefore, computer scientists and biochemists started exploring tile assembly, where the goal was to use a small set of DNA strands as tiles to perform arbitrary computations upon growth. Other avenues that were theoretically explored in the late 1990s include DNA-based security and cryptography, the computational capacity of DNA systems, DNA memories and disks, and DNA-based robotics.
Before 2002, Lila Kari showed that the DNA operations performed by genetic recombination in some organisms are Turing complete.
In 2003, John Reif's group first demonstrated the idea of a DNA-based walker that traversed along a track similar to a line follower robot. They used molecular biology as a source of energy for the walker. Since this first demonstration, a wide variety of DNA-based walkers have been demonstrated.
Applications, examples, and recent developments
In 1994 Leonard Adleman presented the first prototype of a DNA computer. The TT-100 was a test tube filled with 100 microliters of a DNA solution. He managed to solve an instance of the directed Hamiltonian path problem. In Adleman's experiment, the Hamiltonian path problem was implemented notationally as the "travelling salesman problem". For this purpose, different DNA fragments were created, each one of them representing a city that had to be visited. Every one of these fragments is capable of linkage with the other fragments created. These DNA fragments were produced and mixed in a test tube. Within seconds, the small fragments formed bigger ones, representing the different travel routes. Through a chemical reaction, the DNA fragments representing the longer routes were eliminated. What remained was the solution to the problem, although overall the experiment lasted a week. However, current technical limitations prevent the evaluation of the results, so the experiment is not suitable for practical application; it is nevertheless a proof of concept.
Combinatorial problems
First results to these problems were obtained by Leonard Adleman.
In 1994: Solving a Hamiltonian path in a graph with seven summits.
In 2002: Solving an NP-complete problem as well as a 3-SAT problem with 20 variables.
Tic-tac-toe game
In 2002, J. Macdonald, D. Stefanović and M. Stojanović created a DNA computer able to play tic-tac-toe against a human player. The calculator consists of nine bins corresponding to the nine squares of the game. Each bin contains a substrate and various combinations of DNA enzymes. The substrate itself is composed of a DNA strand with a fluorescent chemical group grafted onto one end and a repressor group onto the other. Fluorescence is only active if the molecules of the substrate are cut in half. The DNA enzymes simulate logical functions. For example, such a DNA enzyme will unfold if two specific types of DNA strand are introduced, reproducing the logic function AND.
By default, the computer is considered to have played first in the central square. The human player starts with eight different types of DNA strands corresponding to the eight remaining boxes that may be played. To play box number i, the human player pours into all bins the strands corresponding to input #i. These strands bind to certain DNA enzymes present in the bins, resulting, in one of these bins, in the deformation of the DNA enzymes which binds to the substrate and cuts it. The corresponding bin becomes fluorescent, indicating which box is being played by the DNA computer. The DNA enzymes are divided among the bins in such a way as to ensure that the best the human player can achieve is a draw, as in real tic-tac-toe.
Neural network based computing
Kevin Cherry and Lulu Qian at Caltech developed a DNA-based artificial neural network that can recognize 100-bit hand-written digits. They achieved this by determining the appropriate set of weights on a conventional computer in advance; these weights are represented by varying concentrations of weight molecules, which are later added to the test tube that holds the input DNA strands.
Improved speed with Localized (cache-like) Computing
One of the challenges of DNA computing is its slow speed. While DNA is a biologically compatible substrate, i.e., it can be used in places where silicon technology cannot, its computational speed is still very slow. For example, the square-root circuit used as a benchmark in the field takes over 100 hours to complete. While newer approaches using external enzyme sources report faster and more compact circuits, Chatterjee et al. demonstrated an interesting idea in the field to speed up computation through localized DNA circuits, a concept being further explored by other groups. This idea, while originally proposed in the field of computer architecture, has been adopted in this field as well. In computer architecture, it is very well known that if instructions are executed in sequence, having them loaded in the cache will inevitably lead to fast performance, a consequence of the principle of locality. This is because with instructions in fast cache memory, there is no need to swap them in and out of main memory, which can be slow. Similarly, in localized DNA computing, the DNA strands responsible for computation are fixed on a breadboard-like substrate, ensuring physical proximity of the computing gates. Such localized DNA computing techniques have been shown to potentially reduce the computation time by orders of magnitude.
Renewable (or reversible) DNA computing
Subsequent research on DNA computing has produced reversible DNA computing, bringing the technology one step closer to the silicon-based computing used in (for example) PCs. In particular, John Reif and his group at Duke University have proposed two different techniques to reuse the computing DNA complexes. The first design uses dsDNA gates, while the second design uses DNA hairpin complexes.
While both designs face some issues (such as reaction leaks), this appears to represent a significant breakthrough in the field of DNA computing. Some other groups have also attempted to address the gate reusability problem.
Using strand displacement reactions (SDRs), reversible proposals are presented in the "Synthesis Strategy of Reversible Circuits on DNA Computers" paper for implementing reversible gates and circuits on DNA computers by combining DNA computing and reversible computing techniques. This paper also proposes a universal reversible gate library (URGL) for synthesizing n-bit reversible circuits on DNA computers, with an average length and cost of the constructed circuits better than those of previous methods.
Methods
There are multiple methods for building a computing device based on DNA, each with its own advantages and disadvantages. Most of these build the basic logic gates (AND, OR, NOT) associated with digital logic from a DNA basis. Some of the different bases include DNAzymes, deoxyoligonucleotides, enzymes, and toehold exchange.
Strand displacement mechanisms
The most fundamental operation in DNA computing and molecular programming is the strand displacement mechanism. Currently, there are two ways to perform strand displacement:
Toehold mediated strand displacement (TMSD)
Polymerase-based strand displacement (PSD)
Toehold exchange
Besides simple strand displacement schemes, DNA computers have also been constructed using the concept of toehold exchange. In this system, an input DNA strand binds to a sticky end, or toehold, on another DNA molecule, which allows it to displace another strand segment from the molecule. This allows the creation of modular logic components such as AND, OR, and NOT gates and signal amplifiers, which can be linked into arbitrarily large computers. This class of DNA computers does not require enzymes or any chemical capability of the DNA.
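As a purely abstract illustration of how such modular gates compose, the sketch below models a two-input AND gate in which one input must first expose a toehold before the second input can release the output strand. This is a toy stoichiometric model of the logic only; real toehold-exchange gates are analog and concentration-dependent, and none of the names below come from the text.

    # Toy, count-based sketch of an AND gate built from strand displacement:
    # a gate complex must capture one copy of each input strand before it
    # releases one output strand (abstract model, not the real chemistry).
    def and_gate(n_input1: int, n_input2: int, n_gates: int) -> int:
        """Number of output strands released by the gate population."""
        return min(n_input1, n_input2, n_gates)

    print(and_gate(50, 0, 100))    # 0  -> logical FALSE (one input missing)
    print(and_gate(50, 80, 100))   # 50 -> logical TRUE (limited by scarcer input)

Because the released output is itself a DNA strand, it can serve as an input to a downstream gate, which is what allows these components to be linked into larger circuits.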
Chemical reaction networks (CRNs)
The full stack for DNA computing looks very similar to a traditional computer architecture. At the highest level, a C-like general purpose programming language is expressed using a set of chemical reaction networks (CRNs). This intermediate representation gets translated to a domain-level DNA design and then implemented using a set of DNA strands. In 2010, Erik Winfree's group showed that DNA can be used as a substrate to implement arbitrary chemical reactions. This opened the way to the design and synthesis of biochemical controllers, since the expressive power of CRNs is equivalent to that of a Turing machine. Such controllers can potentially be used in vivo for applications such as preventing hormonal imbalance.
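To give a concrete sense of what a CRN-level specification looks like before it is compiled into DNA strand displacement reactions, the sketch below integrates the mass-action ODEs of a toy network (the particular reactions, rate constants and use of SciPy are illustrative assumptions, not a published DNA implementation):

    from scipy.integrate import solve_ivp

    # Toy CRN: A + B -> C with rate k1, C -> A with rate k2, mass-action kinetics.
    k1, k2 = 1.0, 0.2

    def crn(t, y):
        A, B, C = y
        r1 = k1 * A * B   # flux of A + B -> C
        r2 = k2 * C       # flux of C -> A
        return [-r1 + r2, -r1, r1 - r2]

    sol = solve_ivp(crn, (0.0, 50.0), [1.0, 1.0, 0.0])
    print("final concentrations (A, B, C):", sol.y[:, -1])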
DNAzymes
Catalytic DNA (deoxyribozyme or DNAzyme) catalyzes a reaction when interacting with the appropriate input, such as a matching oligonucleotide. These DNAzymes are used to build logic gates analogous to digital logic in silicon; however, DNAzymes are limited to one-, two-, and three-input gates with no current implementation for evaluating statements in series.
The DNAzyme logic gate changes its structure when it binds to a matching oligonucleotide and the fluorogenic substrate it is bonded to is cleaved free. While other materials can be used, most models use a fluorescence-based substrate because it is very easy to detect, even at the single molecule limit. The amount of fluorescence can then be measured to tell whether or not a reaction took place. The DNAzyme that changes is then "used", and cannot initiate any more reactions. Because of this, these reactions take place in a device such as a continuous stirred-tank reactor, where old product is removed and new molecules added.
Two commonly used DNAzymes are named E6 and 8-17. These are popular because they allow cleaving of a substrate in any arbitrary location. Stojanovic and MacDonald have used the E6 DNAzymes to build the MAYA I and MAYA II machines, respectively; Stojanovic has also demonstrated logic gates using the 8-17 DNAzyme. While these DNAzymes have been demonstrated to be useful for constructing logic gates, they are limited by the need of a metal cofactor to function, such as Zn2+ or Mn2+, and thus are not useful in vivo.
A design called a stem loop, consisting of a single strand of DNA which has a loop at an end, is a dynamic structure that opens and closes when a piece of DNA bonds to the loop part. This effect has been exploited to create several logic gates. These logic gates have been used to create the computers MAYA I and MAYA II which can play tic-tac-toe to some extent.
Enzymes
Enzyme-based DNA computers are usually of the form of a simple Turing machine; there is analogous hardware, in the form of an enzyme, and software, in the form of DNA.
Benenson, Shapiro and colleagues have demonstrated a DNA computer using the FokI enzyme and expanded on their work by going on to show automata that diagnose and react to prostate cancer: under-expression of the genes PPAP2B and GSTP1 and over-expression of PIM1 and HPN. Their automata evaluated the expression of each gene, one gene at a time, and on positive diagnosis then released a single-stranded DNA molecule (ssDNA) that is an antisense for MDM2. MDM2 is a repressor of the protein p53, which itself is a tumor suppressor. On negative diagnosis it was decided to release a suppressor of the positive-diagnosis drug instead of doing nothing. A limitation of this implementation is that two separate automata are required, one to administer each drug. The entire process of evaluation until drug release took around an hour to complete. This method also requires transition molecules as well as the FokI enzyme to be present. The requirement for the FokI enzyme limits application in vivo, at least for use in "cells of higher organisms". It should also be pointed out that the 'software' molecules can be reused in this case.
Algorithmic self-assembly
DNA nanotechnology has been applied to the related field of DNA computing. DNA tiles can be designed to contain multiple sticky ends with sequences chosen so that they act as Wang tiles. A DX array has been demonstrated whose assembly encodes an XOR operation; this allows the DNA array to implement a cellular automaton which generates a fractal called the Sierpinski gasket. This shows that computation can be incorporated into the assembly of DNA arrays, increasing its scope beyond simple periodic arrays.
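The XOR rule mentioned here is easy to reproduce in ordinary software. The short sketch below (plain Python, illustrating only the logic and not the DNA tile chemistry) grows the Sierpinski pattern by setting each cell to the XOR of its two neighbours in the previous row, which is the computation the DX tile array performs as it assembles:

    # Each new cell is the XOR of its two neighbours in the previous row;
    # the 1-cells trace out the Sierpinski gasket.
    rows, width = 16, 33
    row = [0] * width
    row[width // 2] = 1                      # single seed cell

    for _ in range(rows):
        print("".join("#" if c else "." for c in row))
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]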
Capabilities
DNA computing is a form of parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once. For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. Furthermore, particular mathematical computations have been demonstrated to work on a DNA computer.
DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable using different models of computation.
For example,
if the space required for the solution of a problem grows exponentially with the size of the problem (EXPSPACE problems) on von Neumann machines, it still grows exponentially with the size of the problem on DNA machines.
For very large EXPSPACE problems, the amount of DNA required is too large to be practical.
Alternative technologies
A partnership between IBM and Caltech was established in 2009 aiming at "DNA chips" production. A Caltech group is working on the manufacturing of these nucleic-acid-based integrated circuits. One of these chips can compute the square roots of small whole numbers. A compiler has been written in Perl.
Pros and cons
The slow processing speed of a DNA computer (the response time is measured in minutes, hours or days, rather than milliseconds) is compensated by its potential to make a high amount of multiple parallel computations. This allows the system to take a similar amount of time for a complex calculation as for a simple one. This is achieved by the fact that millions or billions of molecules interact with each other simultaneously. However, it is much harder to analyze the answers given by a DNA computer than by a digital one.
See also
Biocomputer
Chemical computer
Computational gene
DNA code construction
DNA digital data storage
DNA sequencing
Membrane computing
Molecular electronics
Peptide computing
Parallel computing
Quantum computing
Transcriptor
Wetware computer
Molecular logic gate
References
Further reading
— The first general text to cover the whole field.
— The book starts with an introduction to DNA-related matters, the basics of biochemistry and language and computation theory, and progresses to the advanced mathematical theory of DNA computing.
— A new general text to cover the whole field.
External links
DNA modeled computing
How Stuff Works explanation
Dirk de Pol: DNS – Ein neuer Supercomputer? [DNA – a new supercomputer?]. In: Die Neue Gesellschaft / Frankfurter Hefte, issue 2/96, February 1996, pp. 170–172
'DNA computer' cracks code, Physics Web
Ars Technica
The New York Times: DNA computer for detecting cancer
Bringing DNA computers to life, in Scientific American
Japanese Researchers store information in bacteria DNA
International Meeting on DNA Computing and Molecular Programming
LiveScience.com-How DNA Could Power Computers
Classes of computers
Models of computation
Molecular biology
DNA
DNA nanotechnology
American inventions | DNA computing | [
"Chemistry",
"Materials_science",
"Technology",
"Biology"
] | 3,473 | [
"DNA nanotechnology",
"Computer systems",
"Molecular biology",
"Biochemistry",
"Nanotechnology",
"Computers",
"Classes of computers"
] |
253,849 | https://en.wikipedia.org/wiki/R-value%20%28insulation%29 | The R-value (in K⋅m2/W) is a measure of how well a two-dimensional barrier, such as a layer of insulation, a window or a complete wall or ceiling, resists the conductive flow of heat, in the context of construction. R-value is the temperature difference per unit of heat flux needed to sustain one unit of heat flux between the warmer surface and colder surface of a barrier under steady-state conditions. The measure is therefore equally relevant for lowering energy bills for heating in the winter, for cooling in the summer, and for general comfort.
The R-value is the building industry term for thermal resistance "per unit area." It is sometimes denoted RSI-value if the SI units are used. An R-value can be given for a material (e.g., for polyethylene foam), or for an assembly of materials (e.g., a wall or a window). In the case of materials, it is often expressed in terms of R-value per metre. R-values are additive for layers of materials, and the higher the R-value the better the performance.
The U-factor or U-value (in W/(m2⋅K)) is the overall heat transfer coefficient and can be found by taking the inverse of the R-value. It is a property that describes how well building elements conduct heat per unit area across a temperature gradient. The elements are commonly assemblies of many layers of materials, such as those that make up the building envelope. It is expressed in watts per square metre kelvin. The higher the U-value, the lower the ability of the building envelope to resist heat transfer. A low U-value, or conversely a high R-value usually indicates high levels of insulation. They are useful as it is a way of predicting the composite behaviour of an entire building element rather than relying on the properties of individual materials.
R-value definition
This relates to the technical/constructional value.
where:
(K⋅m2/W) is the R-value,
(K) is the temperature difference between the warmer surface and colder surface of a barrier,
(W/m2) is the heat flux through the barrier.
The R-value per unit of a barrier's exposed surface area measures the absolute thermal resistance of the barrier.
where:
is the R-value (m2⋅K⋅W−1)
is the barrier's exposed surface area (m2)
is the absolute thermal resistance (K⋅W−1)
Absolute thermal resistance, , quantifies the temperature difference per unit of heat flow rate needed to sustain one unit of heat flow rate. Confusion sometimes arises because some publications use the term thermal resistance for the temperature difference per unit of heat flux, but other publications use the term thermal resistance for the temperature difference per unit of heat flow rate. Further confusion arises because some publications use the character R to denote the temperature difference per unit of heat flux, but other publications use the character R to denote the temperature difference per unit of heat flow rate. This article uses the term absolute thermal resistance for the temperature difference per unit of heat flow rate and uses the term R-value for the temperature difference per unit of heat flux.
In any event, the greater the R-value, the greater the resistance, and so the better the thermal insulating properties of the barrier. R-values are used in describing the effectiveness of insulating material and in analysis of heat flow across assemblies (such as walls, roofs, and windows) under steady-state conditions. Heat flow through a barrier is driven by temperature difference between two sides of the barrier, and the R-value quantifies how effectively the object resists this drive: The temperature difference divided by the R-value and then multiplied by the exposed surface area of the barrier gives the total rate of heat flow through the barrier, as measured in watts or in BTUs per hour.
where:
is the R-value (K⋅m2/W),
is the temperature difference (K) between the warmer surface and colder surface of the barrier,
is the exposed surface area (m2) of the barrier,
is the heat flow rate (W) through the barrier.
As long as the materials involved are dense solids in direct mutual contact, R-values are additive; for example, the total R-value of a barrier composed of several layers of material is the sum of the R-values of the individual layers.
For example, in winter it might be 2 °C outside and 20 °C inside, making a temperature difference of 18 °C or 18 K. If the material has an R-value of 4, it will lose 0.25 W per square metre for each kelvin of temperature difference. With that 18 K difference and an area of 100 m2, the heat energy being lost is 0.25 × 18 × 100 = 450 W. There will be other losses through the floor, windows, ventilation slots, etc. But for that material alone, 450 W is going out, and can be replaced with a 450 W heater inside, to maintain the inside temperature.
Usage, units
Note that the R-value is the building industry term for what is in other contexts called "thermal resistance" for a unit area. It is sometimes denoted RSI-value if the SI (metric) units are used.
An R-value can be given for a material (e.g., for polyethylene foam), or for an assembly of materials (e.g., a wall or a window). In the case of materials, it is often expressed in terms of R-value per unit length (e.g., per inch of thickness). The latter can be misleading in the case of low-density building thermal insulations, for which R-values are not additive: their R-value per inch is not constant as the material gets thicker, but rather usually decreases.
The units of an R-value (see below) are usually not explicitly stated, and so it is important to determine from context which units are being used: an R-value expressed in I-P (inch-pound) units is about 5.68 times larger than when expressed in SI units, so that, for example, a window that is R-2 in I-P units has an RSI of 0.35 (since 2/5.68 = 0.35). For R-values there is no difference between US customary units and imperial units.
All of the following mean the same thing: "this is an R-2 window"; "this is an R2 window"; "this window has an R-value of 2"; "this is a window with R = 2" (and similarly with RSI-values, which also include the possibility "this window provides RSI 0.35 of resistance to heat flow").
Apparent R-value
The more a material is intrinsically able to conduct heat, as given by its thermal conductivity, the lower its R-value. On the other hand, the thicker the material, the higher its R-value. Sometimes heat transfer processes other than conduction (namely, convection and radiation) significantly contribute to heat transfer within the material. In such cases, it is useful to introduce an "apparent thermal conductivity", which captures the effects of all three kinds of processes, and to define the R-value more generally as the thickness of a sample divided by its apparent thermal conductivity. Some equations relating this generalized R-value, also known as the apparent R-value, to other quantities are given below, after the symbol definitions:
where:
is the apparent R-value (K⋅m2/W) across the thickness of the sample,
is the thickness (m) of the sample (measured on a path parallel to the heat flow),
is the apparent thermal conductivity of the material (W/(K⋅m)),
is the thermal transmittance or U-value of the material (W/(K⋅m2)),
is the apparent thermal resistivity of the material (K⋅m/W).
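The relations referred to above, written out as a sketch reconstructed from the symbol definitions (the letters R, L, k, U and r for the apparent R-value, thickness, apparent conductivity, U-value and apparent resistivity are assumed, since the original symbols were stripped):

    % Relations between apparent R-value, thickness, apparent conductivity,
    % U-value and apparent resistivity (sketch; symbols assumed)
    R = \frac{L}{k} = \frac{1}{U} = r\,L,
    \qquad r = \frac{1}{k}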
An apparent R-value quantifies the physical quantity called thermal insulance.
However, this generalization comes at a price because R-values that include non-conductive processes may no longer be additive and may have significant temperature dependence. In particular, for a loose or porous material, the R-value per inch generally depends on the thickness, almost always so that it decreases with increasing thickness (polyisocyanurate (colloquially, polyiso) being an exception; its R-value/inch increases with thickness). For similar reasons, the R-value per inch also depends on the temperature of the material, usually increasing with decreasing temperature (polyisocyanurate again being an exception); a nominally R-13 fiberglass batt may be R-14 at and R-12 at . Nevertheless, in construction it is common to treat R-values as independent of temperature. Note that an R-value may not account for radiative or convective processes at the material's surface, which may be an important factor for some applications.
The R-value is the reciprocal of the thermal transmittance (U-factor) of a material or assembly. The U.S. construction industry prefers to use R-values, however, because they are additive and because bigger values mean better insulation, neither of which is true for U-factors.
U-factor/U-value
The U-factor or U-value is the overall heat transfer coefficient that describes how well a building element conducts heat or the rate of transfer of heat (in watts) through one square metre of a structure divided by the difference in temperature across the structure. The elements are commonly assemblies of many layers of components such as those that make up walls/floors/roofs etc. It is expressed in watts per meter squared kelvin W/(m2⋅K). This means that the higher the U-value the worse the thermal performance of the building envelope. A low U-value usually indicates high levels of insulation. They are useful as it is a way of predicting the composite behavior of an entire building element rather than relying on the properties of individual materials.
In most countries the properties of specific materials (such as insulation) are indicated by the thermal conductivity, sometimes called a k-value or lambda-value (lowercase λ). The thermal conductivity (k-value) is the ability of a material to conduct heat; hence, the lower the k-value, the better the material is for insulation. Expanded polystyrene (EPS) has a k-value of around 0.033 W/(m⋅K). For comparison, phenolic foam insulation has a k-value of around 0.018 W/(m⋅K), while wood varies anywhere from 0.15 to 0.75 W/(m⋅K), and steel has a k-value of approximately 50.0 W/(m⋅K). These figures vary from product to product, so the UK and EU have established a 90/90 standard which means that 90% of the product will conform to the stated k-value with a 90% confidence level so long as the figure quoted is stated as the 90/90 lambda-value.
U is the inverse of R with SI units of W/(m2⋅K) and U.S. units of BTU/(h⋅°F⋅ft2)
where is the heat flux, is the temperature difference across the material, k is the material's coefficient of thermal conductivity and L is its thickness. In some contexts, U is referred to as unit surface conductance.
The term U-factor is usually used in the U.S. and Canada to express the heat flow through entire assemblies (such as roofs, walls, and windows). For example, energy codes such as ASHRAE 90.1 and the IECC prescribe U-values. However, R-value is widely used in practice to describe the thermal resistance of insulation products, layers, and most other parts of the building enclosure (walls, floors, roofs). Other areas of the world more commonly use U-value/U-factor for elements of the entire building enclosure including windows, doors, walls, roof, and ground slabs.
Units: metric (SI) vs. inch-pound (I-P)
The SI (metric) unit of R-value is
kelvin square-metre per watt (K⋅m2/W or, equally, °C⋅m2/W),
whereas the I-P (inch-pound) unit is
degree Fahrenheit square-foot hour per British thermal unit (°F⋅ft2⋅h/BTU).
For R-values there is no difference between U.S. and Imperial units, so the same I-P unit is used in both.
Some sources use "RSI" when referring to R-values in SI units.
R-values expressed in I-P units are approximately 5.68 times as large as R-values expressed in SI units. For example, a window that is R-2 in the I-P system is about RSI 0.35, since 2/5.68 ≈ 0.35.
In countries where the SI system is generally in use, the R-values will also normally be given in SI units. This includes the United Kingdom, Australia, and New Zealand.
I-P values are commonly given in the United States and Canada, though in Canada normally both I-P and RSI values are listed.
Because the units are usually not explicitly stated, one must decide from context which units are being used. In this regard, it helps to keep in mind that I-P R-values are 5.68 times larger than the corresponding SI R-values.
More precisely,
R-value (in I-P) ≈ RSI-value (in SI) × 5.678263
RSI-value (in SI) ≈ R-value (in I-P) × 0.1761102
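A trivial pair of helper functions capturing these conversion factors (a sketch; the function names are ours, not from any standard library):

    # Conversion between SI (RSI) and inch-pound (I-P) R-values.
    IP_PER_SI = 5.678263   # (degF*ft2*h/BTU) per (K*m2/W)

    def rsi_to_ip(rsi: float) -> float:
        return rsi * IP_PER_SI

    def ip_to_rsi(r_ip: float) -> float:
        return r_ip / IP_PER_SI

    print(round(ip_to_rsi(2.0), 2))   # an R-2 window is about RSI 0.35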
Different insulation types
The Australian Government explains that the required total R-values for the building fabric vary depending on climate zone. "Such materials include aerated concrete blocks, hollow expanded polystyrene blocks, straw bales and rendered extruded polystyrene sheets."
In Germany, under the Energieeinsparverordnung (EnEV), the energy-saving ordinance introduced on 10 October 2009, all new buildings must demonstrate an ability to remain within certain boundaries of the U-value for each particular building material. Further, the EnEV specifies the maximum coefficient for each new material if parts are replaced or added to existing structures.
The U.S. Department of Energy has recommended R-values for given areas of the USA based on the general local energy costs for heating and cooling, as well as the climate of an area. There are four types of insulation: rolls and batts, loose-fill, rigid foam, and foam-in-place. Rolls and batts are typically flexible insulators that come in fibers, like fiberglass. Loose-fill insulation comes in loose fibers or pellets and should be blown into a space. Rigid foam is more expensive than fiber, but generally has a higher R-value per unit of thickness. Foam-in-place insulation can be blown into small areas to control air leaks, like those around windows, or can be used to insulate an entire house.
Thickness
Increasing the thickness of an insulating layer increases the thermal resistance. For example, doubling the thickness of fiberglass batting will double its R-value, perhaps from 2.0 m2⋅K/W for 110 mm of thickness, up to 4.0 m2⋅K/W for 220 mm of thickness. Heat transfer through an insulating layer is analogous to adding resistance to a series circuit with a fixed voltage. However, this holds only approximately because the effective thermal conductivity of some insulating materials depends on thickness. The addition of materials to enclose the insulation such as drywall and siding provides additional but typically much smaller R-value.
Factors
There are many factors that come into play when using R-values to compute heat loss for a particular wall. Manufacturer R-values apply only to properly installed insulation. Squashing two layers of batting into the thickness intended for one layer will increase but not double the R-value. (In other words, compressing a fiberglass batt decreases the R-value of the batt but increases the R-value per inch.) Another important factor to consider is that studs and windows provide a parallel heat conduction path that is unaffected by the insulation's R-value. The practical implication of this is that one could double the R-value of insulation installed between framing members and realize substantially less than a 50 percent reduction in heat loss. When installed between wall studs, even perfect wall insulation only eliminates conduction through the insulation but leaves unaffected the conductive heat loss through such materials as glass windows and studs. Insulation installed between the studs may reduce, but usually does not eliminate, heat losses due to air leakage through the building envelope. Installing a continuous layer of rigid foam insulation on the exterior side of the wall sheathing will interrupt thermal bridging through the studs while also reducing the rate of air leakage.
Primary role
The R-value is a measure of an insulation sample's ability to reduce the rate of heat flow under specified test conditions. The primary mode of heat transfer impeded by insulation is conduction, but insulation also reduces heat loss by all three heat transfer modes: conduction, convection, and radiation. The primary heat loss across an uninsulated air-filled space is natural convection, which occurs because of changes in air density with temperature. Insulation greatly retards natural convection, making conduction the primary mode of heat transfer. Porous insulations accomplish this by trapping air so that significant convective heat loss is eliminated, leaving only conduction and minor radiation transfer. The primary role of such insulation is to make the thermal conductivity of the insulation approach that of trapped, stagnant air. However, this cannot be realized fully because the glass wool or foam needed to prevent convection increases the heat conduction compared to that of still air. The minor remaining radiative heat transfer is reduced by having many surfaces interrupting a "clear view" between the inner and outer surfaces of the insulation, just as visible light is blocked from passing through porous materials. Such multiple surfaces are abundant in batting and porous foam. Radiation is also minimized by low-emissivity (highly reflective) exterior surfaces such as aluminum foil. Lower thermal conductivity, or higher R-values, can be achieved by replacing air with argon when practical, such as within special closed-pore foam insulation, because argon has a lower thermal conductivity than air.
General
Heat transfer through an insulating layer is analogous to electrical resistance. The heat transfers can be worked out by thinking of resistance in series with a fixed potential, except the resistances are thermal resistances and the potential is the difference in temperature from one side of the material to the other. The resistance of each material to heat transfer depends on the specific thermal resistance [R-value]/[unit thickness], which is a property of the material, and the thickness of that layer. A thermal barrier that is composed of several layers will have several thermal resistors in series, in the analogy with electrical circuits. Analogous to a set of resistors in parallel, a well insulated wall with a poorly insulated window will allow proportionally more of the heat to go through the (low-R) window, and additional insulation in the wall will only minimally improve the overall R-value. As such, the least well insulated section of a wall will play the largest role in heat transfer relative to its size, similar to the way most current flows through the lowest-resistance resistor in a parallel array. Hence ensuring that windows, service breaks (around wires/pipes), doors, and other breaks in a wall are well sealed and insulated is often the most cost-effective way to improve the insulation of a structure, once the walls are sufficiently insulated.
Like resistance in electrical circuits, increasing the physical length (for insulation, thickness) of a resistive element, such as graphite for example, increases the resistance linearly; double the thickness of a layer means double the R-value and half the heat transfer; quadruple, quarters; etc. In practice, this linear relationship does not always hold for compressible materials such as glass wool and cotton batting whose thermal properties change when compressed. So, for example, if one layer of fiberglass insulation in an attic provides R-20 thermal resistance, adding on a second layer will not necessarily double the thermal resistance because the first layer will be compressed by the weight of the second.
Calculating heat loss
To find the average heat loss per unit area, simply divide the temperature difference by the R-value for the layer.
If the interior of a home is at 20 °C and the roof cavity is at 10 °C then the temperature difference is 10 °C (or 10 K). Assuming a ceiling insulated to RSI 2.0 (R = 2 m2⋅K/W), energy will be lost at a rate of 10 K / (2 K⋅m2/W) = 5 watts for every square meter (W/m2) of ceiling. The RSI-value used here is for the actual insulating layer (and not per unit thickness of insulation).
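The same arithmetic as a short script, using the numbers from the example above (illustrative only):

    delta_T = 20.0 - 10.0         # K, interior minus roof-cavity temperature
    rsi = 2.0                     # K*m2/W, ceiling insulation
    print(delta_T / rsi)          # 5.0 W lost per square metre of ceiling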
Relationships
Thickness
R-value should not be confused with the intrinsic property of thermal resistivity and its inverse, thermal conductivity. The SI unit of thermal resistivity is K⋅m/W. Thermal conductivity assumes that the heat transfer of the material is linearly related to its thickness.
Multiple layers
In calculating the R-value of a multi-layered installation, the R-values of the individual layers are added:
R-value(outside air film) + R-value(brick) + R-value(sheathing) + R-value(insulation) + R-value(plasterboard) + R-value(inside air film) = R-value(total).
To account for other components in a wall such as framing, first calculate the U-value (= 1/R-value) of each component, then the area-weighted average U-value. An average R-value is 1/(average U-value). For example, if 10% of the area is 4 inches of softwood (R-value 5.6) and 90% is 2 inches of silica aerogel (R-value 20), the area-weighted U-value is 0.1/5.6 + 0.9/20 ≈ 0.0629 and the weighted R-value is 1/0.0629 ≈ 15.9.
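Both rules are easy to check numerically. In the sketch below the individual layer R-values are invented for illustration; the area-weighted part reuses the softwood/aerogel numbers from the example above.

    # Series addition of layer R-values (SI units, K*m2/W); layer values invented.
    layers = {"outside air film": 0.03, "brick": 0.08, "sheathing": 0.10,
              "insulation": 3.50, "plasterboard": 0.06, "inside air film": 0.12}
    print("total wall R-value:", sum(layers.values()))

    # Parallel paths: area-weighted average U-value, then the average R-value.
    u_avg = 0.10 / 5.6 + 0.90 / 20.0      # 10% softwood (R 5.6), 90% aerogel (R 20)
    print("average R-value:", round(1.0 / u_avg, 1))   # about 15.9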
Controversy
Thermal conductivity versus apparent thermal conductivity
Thermal conductivity is conventionally defined as the rate of thermal conduction through a material per unit area per unit thickness per unit temperature differential (ΔT). The inverse of conductivity is resistivity (or R per unit thickness). Thermal conductance is the rate of heat flux through a unit area at the installed thickness and any given ΔT.
Experimentally, thermal conduction is measured by placing the material in contact between two conducting plates and measuring the energy flux required to maintain a certain temperature gradient.
For the most part, testing the R-value of insulation is done at a steady temperature, usually about , with no surrounding air movement. Since these are ideal conditions, the listed R-value for insulation will almost certainly be higher than it would be in actual use, because most situations with insulation are under different conditions.
A definition of R-value based on apparent thermal conductivity has been proposed in document C168 published by the American Society for Testing and Materials. This describes heat being transferred by all three mechanisms—conduction, radiation, and convection.
Debate remains among representatives from different segments of the U.S. insulation industry during revision of the U.S. FTC's regulations about advertising R-values, illustrating the complexity of the issues.
Surface temperature in relationship to mode of heat transfer
There are weaknesses to using a single laboratory model to simultaneously assess the properties of a material to resist conducted, radiated, and convective heating. Surface temperature varies depending on the mode of heat transfer.
If we assume idealized heat transfer between the air on each side and the surface of the insulation, the surface temperature of the insulator would equal the air temperature on each side.
In response to thermal radiation, surface temperature depends on the thermal emissivity of the material. Low-emissivity surfaces such as shiny metal foil will reduce heat transfer by radiation.
Convection will alter the rate of heat transfer between the air and the surface of the insulator, depending on the flow characteristics of the air (or other fluid) in contact with it.
With multiple modes of heat transfer, the final surface temperature (and hence the observed energy flux and calculated R-value) will be dependent on the relative contributions of radiation, conduction, and convection, even though the total energy contribution remains the same.
This is an important consideration in building construction because heat energy arrives in different forms and proportions. The contribution of radiative and conductive heat sources also varies throughout the year and both are important contributors to thermal comfort
In the hot season, solar radiation predominates as the source of heat gain. According to the Stefan–Boltzmann law, radiative heat transfer is related to the fourth power of the absolute temperature (measured in kelvins: T [K] = T [°C] + 273.15). Therefore, such transfer is at its most significant when the objective is to cool (i.e. when solar radiation has produced very warm surfaces). On the other hand, the conductive and convective heat loss modes play a more significant role during the cooler months. At such lower ambient temperatures the traditional fibrous, plastic and cellulose insulations play by far the major role: the radiative heat transfer component is of far less importance, and the main contribution of the radiant barrier is its superior air-tightness.
In summary: claims for radiant barrier insulation are justifiable at high temperatures, typically when minimizing summer heat transfer; but these claims are not justifiable in traditional winter (keeping-warm) conditions.
The limitations of R-values in evaluating radiant barriers
Unlike bulk insulators, radiant barriers resist conducted heat poorly. Materials such as reflective foil have a high thermal conductivity and would function poorly as a conductive insulator. Radiant barriers retard heat transfer by two means: by reflecting radiant energy away from its irradiated surface and by reducing the emission of radiation from its opposite side.
The question of how to quantify performance of other systems such as radiant barriers has resulted in controversy and confusion in the building industry with the use of R-values or 'equivalent R-values' for products which have entirely different systems of inhibiting heat transfer. (In the U.S., the federal government's R-value rule establishes a legal definition for the R-value of a building material; the term 'equivalent R-value' has no legal definition and is therefore meaningless.) According to current standards, R-values are most reliably stated for bulk insulation materials. All of the products quoted at the end are examples of these.
Calculating the performance of radiant barriers is more complex. With a good radiant barrier in place, most heat flow is by convection, which depends on many factors other than the radiant barrier itself. Although radiant barriers have high reflectivity (and low emissivity) over a range of electromagnetic spectra (including visible and UV light), their thermal advantages are mainly related to their emissivity in the infra-red range. Emissivity values are the appropriate metric for radiant barriers. Their effectiveness when employed to resist heat gain in limited applications is established, even though R-value does not adequately describe them.
Deterioration
Insulation aging
While research is lacking on the long-term degradation of R-value in insulation, recent research indicates that the R-values of products may deteriorate over time. For instance, the compaction of loose-fill cellulose creates voids that reduce overall performance; this may be avoided by densely packing at initial installation. Some types of foam insulation, such as polyurethane and polyisocyanurate, are blown into form with heavy gases such as chlorofluorocarbons (CFCs) or hydrochlorofluorocarbons (HCFCs). However, over time these gases diffuse out of the foam and are replaced by air, thus reducing the effective R-value of the product. There are other foams which do not change significantly with aging because they are blown with water or are open-cell and contain no trapped CFCs or HCFCs (e.g., half-pound low-density foams). On certain brands, twenty-year tests have shown no shrinkage or reduction in insulating value.
This has led to controversy as to how to rate the insulation of these products. Many manufacturers will rate the R-value at the time of manufacture; critics argue that a fairer assessment would be its settled value. The foam industry adopted the long-term thermal resistance (LTTR) method, which rates the R-value based on a 15-year weighted average. However, the LTTR effectively provides only an eight-year aged R-value, short on the scale of a building that may have a lifespan of 50 to 100 years.
Research has been conducted by the U.S. Army Engineer Research and Development Center on the long-term degradation of insulating materials. Values on the degradation were obtained from short-term laboratory testing on materials exposed to various temperature and humidity conditions. Results indicate that moisture absorption and loss of blowing agent (in closed-cell spray polyurethane foam) were major causes of R-value loss. Fiberglass and extruded polystyrene retained over 97% of their initial R-values, while aerogels and closed-cell polyurethane saw reductions of 15% and 27.5%, respectively. Results suggest that the R-values of closed-cell polyurethanes and aerogel blankets decay exponentially over time.
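As a purely hypothetical illustration of such exponential-decay aging behaviour, an aged R-value can be modelled as decaying from an initial value toward a long-term asymptote; the parameters below are invented, not measured data.

```python
import math

# Hypothetical exponential-decay aging model for R-value, of the general form
# suggested by the research above. r_initial, r_aged (long-term asymptote) and
# the time constant tau are assumed example values, not measured data.

def aged_r_value(years, r_initial, r_aged, tau_years):
    """R-value after `years`, decaying exponentially from r_initial toward r_aged."""
    return r_aged + (r_initial - r_aged) * math.exp(-years / tau_years)

for t in (0, 5, 15, 50):
    r = aged_r_value(t, r_initial=6.5, r_aged=5.5, tau_years=10)
    print(f"{t:2d} years: R-{r:.2f} per inch")
```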
Infiltration
Correct attention to air sealing measures and consideration of vapor transfer mechanisms are important for the optimal function of bulk insulators. Air infiltration can allow convective heat transfer or condensation formation, both of which may degrade the performance of an insulation.
One of the primary values of spray-foam insulation is its ability to create an airtight (and in some cases, watertight) seal directly against the substrate to reduce the undesirable effects of air leakage. Other construction technologies are also used to reduce or eliminate infiltration such as air sealing techniques.
R-value in-situ measurements
The deterioration of R-values is especially a problem when defining the energy efficiency of an existing building. Especially in older or historic buildings, the R-values defined before construction might be very different from the actual values. This greatly affects energy efficiency analysis. To obtain reliable data, R-values are therefore often determined via U-value measurements at the specific location (in situ). There are several potential methods for this, each with its specific trade-offs: thermography, multiple temperature measurements, and the heat flux method.
Thermography
Thermography is applied in the building sector to assess the quality of the thermal insulation of a room or building. By means of a thermographic camera, thermal bridges and inhomogeneous insulation parts can be identified. However, it does not produce any quantitative data; this method can only be used to approximate the U-value (or its reciprocal, the R-value).
Multiple temperature measurements
This approach is based on three or more temperature measurements inside and outside of a building element. By synchronizing these measurements and making some basic assumptions, it is possible to calculate the heat flux indirectly and thus to derive the U-value of a building element (a minimal sketch of one simplified variant follows the list of requirements below). The following requirements have to be fulfilled for reliable results:
A difference between inside and outside temperature, ideally > 15 K
Constant conditions
No solar radiation
No radiant heat sources near the measurement location
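A minimal sketch of one simplified "air-surface-air" variant follows: the heat flux is estimated from the drop between interior air and interior surface temperatures via an assumed internal surface heat transfer coefficient. Both the coefficient and the readings are illustrative assumptions, not prescribed values.

```python
# Minimal sketch of a simplified "air-surface-air" variant of the multiple
# temperature method: the heat flux is estimated from the interior air-to-surface
# temperature drop via an assumed internal surface heat transfer coefficient.
# The coefficient and the readings are illustrative assumptions only.

H_INTERNAL = 7.7  # W m^-2 K^-1, assumed internal surface coefficient (example)

def u_value_from_temperatures(t_inside_air, t_inside_surface, t_outside_air):
    """Estimate the U-value (W m^-2 K^-1) from three temperature readings in degrees C."""
    heat_flux = H_INTERNAL * (t_inside_air - t_inside_surface)  # W m^-2
    return heat_flux / (t_inside_air - t_outside_air)

# Example readings: 20.0 C room air, 18.2 C interior wall surface, 0.0 C outside air.
u = u_value_from_temperatures(20.0, 18.2, 0.0)
print(f"U = {u:.2f} W/(m^2*K), R = {1 / u:.2f} m^2*K/W")
```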
Heat flux method
The R-value of a building element can be determined by using a heat flux sensor in combination with two temperature sensors. By measuring the heat that is flowing through a building element and combining this with the inside and outside temperature, it is possible to determine the R-value precisely. A measurement that lasts at least 72 hours with a temperature difference of at least 5 °C is required for a reliable result according to ISO 9869 norms, but shorter measurement durations give a reliable indication of the R-value as well. The progress of the measurement can be viewed on a laptop via corresponding software, and the obtained data can be used for further calculations. Measuring devices for such heat flux measurements are offered by companies like FluxTeq, Ahlborn, greenTEG and Hukseflux.
Placing the heat flux sensor on either the inside or outside surface of the building element allows one to determine the heat flux through the heat flux sensor as a representative value for the heat flux through the building element. The heat flux through the heat flux sensor is the rate of heat flow through the heat flux sensor divided by the surface area of the heat flux sensor. Placing the temperature sensors on the inside and outside surfaces of the building element allows one to determine the inside surface temperature, outside surface temperature, and the temperature difference between them. In some cases the heat flux sensor itself can serve as one of the temperature sensors. The R-value for the building element is the temperature difference between the two temperature sensors divided by the heat flux through the heat flux sensor. The mathematical formula is:
R = \frac{\Delta T}{\Phi_q} = \frac{T_\text{in} - T_\text{out}}{\Phi_q}, \qquad \Phi_q = \frac{\dot{Q}}{A}
where:
R is the R-value (K⋅W−1⋅m2),
Φq is the heat flux (W⋅m−2),
A is the surface area of the heat flux sensor (m2),
Q̇ is the rate of heat flow (W),
Tin is the inside surface temperature (K),
Tout is the outside surface temperature (K), and
ΔT = Tin − Tout is the temperature difference (K) between the inside and outside surfaces.
The U-value can be calculated as well by taking the reciprocal of the R-value. That is,
U = \frac{1}{R}
where U is the U-value (W⋅m−2⋅K−1).
The derived R-value and U-value may be accurate to the extent that the heat flux through the heat flux sensor equals the heat flux through the building element. Recording all of the available data allows one to study the dependence of the R-value and U-value on factors like the inside temperature, outside temperature, or position of the heat flux sensor. To the extent that all heat transfer processes (conduction, convection, and radiation) contribute to the measurements, the derived R-value represents an apparent R-value.
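The averaging approach commonly used to evaluate logged heat-flux-method data can be sketched in a few lines; the logged values below are invented for illustration, and a real measurement would span at least 72 hours as noted above.

```python
# Minimal sketch of the averaging approach used to evaluate logged heat-flux
# data: the R-value estimate is the accumulated temperature difference divided
# by the accumulated heat flux. The logged values are invented for illustration;
# a real measurement would span at least 72 hours, as noted above.

def r_value_from_logs(t_inside, t_outside, heat_flux):
    """R-value (m^2*K/W) from parallel lists of surface temperatures and heat flux (W/m^2)."""
    if not (len(t_inside) == len(t_outside) == len(heat_flux)):
        raise ValueError("log lengths must match")
    delta_t_sum = sum(ti - to for ti, to in zip(t_inside, t_outside))
    return delta_t_sum / sum(heat_flux)

t_in = [19.8, 20.1, 20.0, 19.9]   # inside surface temperatures, C
t_out = [4.9, 5.2, 5.0, 4.8]      # outside surface temperatures, C
q = [10.2, 10.4, 10.1, 10.3]      # heat flux readings, W/m^2

r = r_value_from_logs(t_in, t_out, q)
print(f"R = {r:.2f} m^2*K/W, U = {1 / r:.2f} W/(m^2*K)")
```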
Sample values
Vacuum insulated panels have the highest R-value, approximately R-45 (in U.S. units) per inch; aerogel has the next highest R-value (about R-10 to R-30 per inch), followed by polyurethane (PUR) and phenolic foam insulations with R-7 per inch. They are followed closely by polyisocyanurate (PIR) at R-5.8, graphite impregnated expanded polystyrene at R-5, and expanded polystyrene (EPS) at R-4 per inch. Loose cellulose, fibreglass (both blown and in batts), and rock wool (both blown and in batts) all possess an R-value of roughly R-2.5 to R-4 per inch.
Straw bales perform at about R-2.38 to 2.68 per inch, depending on orientation of the bales. However, typical straw bale houses have very thick walls and thus are well insulated. Snow is roughly R-1 per inch. Brick has a very poor insulating ability at a mere R-0.2 per inch; however, it does have a relatively good thermal mass.
Note that the above examples all use the U.S. (non-SI) definition for R-value.
Typical R-values
Typical R-values for surfaces
Non-reflective surface R-values for air films
When determining the overall thermal resistance of a building assembly such as a wall or roof, the insulating effect of the surface air film is added to the thermal resistance of the other materials.
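Because resistances in series add, the overall assembly value can be tallied directly; the layer values in the sketch below are hypothetical round numbers in SI units, not figures from this article's tables.

```python
# Illustrative tally of an assembly: R-values of layers and surface air films
# in series simply add, and the assembly U-value is the reciprocal of the total.
# All layer values are hypothetical round numbers in SI units (m^2*K/W).

layers = {
    "inside air film": 0.12,
    "plasterboard": 0.06,
    "insulation batt": 2.50,
    "brick veneer": 0.08,
    "outside air film": 0.03,
}

r_total = sum(layers.values())
print(f"Total R = {r_total:.2f} m^2*K/W")
print(f"Assembly U = {1 / r_total:.2f} W/(m^2*K)")
```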
In practice the above surface values are used for floors, ceilings, and walls in a building, but are not accurate for enclosed air cavities, such as between panes of glass. The effective thermal resistance of an enclosed air cavity is strongly influenced by radiative heat transfer and distance between the two surfaces. See insulated glazing for a comparison of R-values for windows, with some effective R-values that include an air cavity.
Radiant barriers
R-value rule in the U.S.
The Federal Trade Commission (FTC) governs claims about R-values to protect consumers against deceptive and misleading advertising claims. It issued the R-value rule.
The primary purpose of the rule is to ensure that the home insulation marketplace provides this essential pre-purchase information to the consumer. The information gives consumers an opportunity to compare relative insulating efficiencies, to select the product with the greatest efficiency and potential for energy savings, to make a cost-effective purchase and to consider the main variables limiting insulation effectiveness and realization of claimed energy savings.
The rule mandates that specific R-value information for home insulation products be disclosed in certain ads and at the point of sale. The purpose of the R-value disclosure requirement for advertising is to prevent consumers from being misled by certain claims which have a bearing on insulating value. At the point of transaction, some consumers will be able to get the requisite R-value information from the label on the insulation package. However, since the evidence shows that packages are often unavailable for inspection prior to purchase, no labeled information would be available to consumers in many instances. As a result, the Rule requires that a fact sheet be available to consumers for inspection before they make their purchase.
Thickness
The R-value Rule specifies:
See also
Building insulation
Building insulation materials
Condensation
Cool roofs
Heat transfer
Passivhaus
Passive solar design
Sol-air temperature
Superinsulation
Thermal bridge
Thermal comfort
Thermal conductivity
Thermal mass
Thermal transmittance
Tog (unit)
References
External links
Table of Insulation R-values at InspectApedia includes original source citations
Information on the calculations, meanings, and inter-relationships of related heat transfer and resistance terms
American building material R-value table
Working with R-values
Insulation R-value Explained
Understanding R-value
Building engineering
Insulators
Thermal protection
Heat transfer
Customary units of measurement in the United States
de:Wärmedurchgangskoeffizient | R-value (insulation) | [
"Physics",
"Chemistry",
"Engineering"
] | 7,974 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Building engineering",
"Civil engineering",
"Thermodynamics",
"Architecture"
] |
254,293 | https://en.wikipedia.org/wiki/Ultramicroscope | An ultramicroscope is a microscope with a system that lights the object in a way that allows viewing of tiny particles via light scattering, and not light reflection or absorption. When the diameter of a particle is below or near the wavelength of visible light (around 500 nanometers), the particle cannot be seen in a light microscope with the usual methods of illumination. The ultra- in ultramicroscope refers to the ability to see objects whose diameter is shorter than the wavelength of visible light, on the model of the ultra- in ultraviolet.
Synopsis
In the system, the particles to be observed are dispersed in a liquid or gas colloid (or less often in a coarser suspension). The colloid is placed in a light-absorbing, dark enclosure, and illuminated with a convergent beam of intense light entering from one side. Light hitting the colloid particles will be scattered. In discussions about light scattering, the converging beam is called a "Tyndall cone". The scene is viewed through an ordinary microscope placed at right angles to the direction of the lightbeam. Under the microscope, the individual particles will appear as small fuzzy spots of light moving irregularly. The spots are inherently fuzzy because light scattering produces fuzzier images than light reflection. The particles are in Brownian motion in most kinds of liquid and gas colloids, which causes the movement of the spots. The ultramicroscope system can also be used to observe tiny nontransparent particles dispersed in a transparent solid or gel.
Ultramicroscopes have been used for general observation of aerosols and colloids, in studying Brownian motion, in observing ionization tracks in cloud chambers, and in studying biological ultrastructure.
History
In 1902, the ultramicroscope was developed by Richard Adolf Zsigmondy (1865–1929) and Henry Siedentopf (1872–1940), working for Carl Zeiss AG. Applying bright sunlight for illumination, they were able to determine the size of nanoparticles as small as 4 nm in cranberry glass. Zsigmondy further improved the ultramicroscope and presented the immersion ultramicroscope in 1912, allowing the observation of suspended nanoparticles in defined fluidic volumes. In 1925, he was awarded the Nobel Prize in Chemistry for his research on colloids and the ultramicroscope.
Later the development of electron microscopes provided additional ways to see objects too small for light microscopy.
See also
Dark-field microscopy, a different technique that leverages light scattering against a dark background
Light sheet fluorescence microscopy
References
Microscopes
Optical microscopy techniques
Scattering, absorption and radiative transfer (optics)
Hungarian inventions | Ultramicroscope | [
"Chemistry",
"Technology",
"Engineering"
] | 550 | [
" absorption and radiative transfer (optics)",
"Measuring instruments",
"Scattering",
"Microscopes",
"Microscopy"
] |
254,510 | https://en.wikipedia.org/wiki/Galvanic%20cell | A galvanic cell or voltaic cell, named after the scientists Luigi Galvani and Alessandro Volta, respectively, is an electrochemical cell in which an electric current is generated from spontaneous oxidation–reduction reactions. An example of a galvanic cell consists of two different metals, each immersed in separate beakers containing their respective metal ions in solution that are connected by a salt bridge or separated by a porous membrane.
Volta was the inventor of the voltaic pile, the first electrical battery. Common usage of the word battery has evolved to include a single Galvanic cell, but the first batteries had many Galvanic cells.
History
In 1780, Luigi Galvani discovered that when two different metals (e.g., copper and zinc) are in contact and then both are touched at the same time to two different parts of a muscle of a frog leg, to close the circuit, the frog's leg contracts. He called this "animal electricity". The frog's leg, as well as being a detector of electrical current, was also the electrolyte (to use the language of modern chemistry).
A year after Galvani published his work (1790), Alessandro Volta showed that the frog was not necessary, using instead a force-based detector and brine-soaked paper (as electrolyte). (Earlier Volta had established the law of capacitance with force-based detectors). In 1799 Volta invented the voltaic pile, which is a stack of galvanic cells each consisting of a metal disk, an electrolyte layer, and a disk of a different metal. He built it entirely out of non-biological material to challenge Galvani's (and the later experimenter Leopoldo Nobili)'s animal electricity theory in favor of his own metal-metal contact electricity theory. Carlo Matteucci in his turn constructed a battery entirely out of biological material in answer to Volta. Volta's contact electricity view characterized each electrode with a number that we would now call the work function of the electrode. This view ignored the chemical reactions at the electrode-electrolyte interfaces, which include H2 formation on the more noble metal in Volta's pile.
Although Volta did not understand the operation of the battery or the galvanic cell, these discoveries paved the way for electrical batteries; Volta's cell was named an IEEE Milestone in 1999.
Some forty years later, Faraday (see Faraday's laws of electrolysis) showed that the galvanic cell—now often called a voltaic cell—was chemical in nature. Faraday introduced new terminology to the language of chemistry: electrode (cathode and anode), electrolyte, and ion (cation and anion). Thus Galvani incorrectly thought the source of electricity (or source of electromotive force (emf), or seat of emf) was in the animal, Volta incorrectly thought it was in the physical properties of the isolated electrodes, but Faraday correctly identified the source of emf as the chemical reactions at the two electrode-electrolyte interfaces. The authoritative work on the intellectual history of the voltaic cell remains that by Ostwald.
It was suggested by Wilhelm König in 1940 that the object known as the Baghdad battery might represent galvanic cell technology from ancient Parthia. Replicas filled with citric acid or grape juice have been shown to produce a voltage. However, it is far from certain that this was its purpose—other scholars have pointed out that it is very similar to vessels known to have been used for storing parchment scrolls.
Principles
Galvanic cells are extensions of spontaneous redox reactions, designed to harness the energy produced by such a reaction. For example, when one immerses a strip of zinc metal (Zn) in an aqueous solution of copper sulfate (CuSO4), dark-colored solid deposits will collect on the surface of the zinc metal and the blue color characteristic of the Cu++ ion disappears from the solution. The depositions on the surface of the zinc metal consist of copper metal, and the solution now contains zinc ions. This reaction is represented by
Zn(s) + Cu++(aq) → Zn++(aq) + Cu(s)
In this redox reaction, Zn is oxidized to Zn++ and Cu++ is reduced to Cu. When electrons are transferred directly from Zn to Cu++, the enthalpy of reaction is lost to the surroundings as heat. However, the same reaction can be carried out in a galvanic cell, allowing some of the chemical energy released to be converted into electrical energy. In its simplest form, a half-cell consists of a solid metal (called an electrode) that is submerged in a solution; the solution contains cations (+) of the electrode metal and anions (−) to balance the charge of the cations. The full cell consists of two half-cells, usually connected by a semi-permeable membrane or by a salt bridge that prevents the ions of the more noble metal from plating out at the other electrode.
A specific example is the Daniell cell (see figure), with a zinc (Zn) half-cell containing a solution of ZnSO4 (zinc sulfate) and a copper (Cu) half-cell containing a solution of CuSO4 (copper sulfate). A salt bridge is used here to complete the electric circuit.
If an external electrical conductor connects the copper and zinc electrodes, zinc from the zinc electrode dissolves into the solution as Zn++ ions (oxidation), releasing electrons that enter the external conductor. To compensate for the increased zinc ion concentration, zinc ions (cations) leave the zinc half-cell and sulfate ions (anions) enter it via the salt bridge. In the copper half-cell, the copper ions plate onto the copper electrode (reduction), taking up electrons that leave the external conductor. Since the Cu++ ions (cations) plate onto the copper electrode, the latter is called the cathode. Correspondingly the zinc electrode is the anode. The electrochemical reaction is
Zn_\mathsf{(s)}\ +\ Cu^{++}_\mathsf{(aq)}\ ->\ Zn^{++}_\mathsf{(aq)}\ +\ Cu_\mathsf{(s)}
This is the same reaction as given in the previous example. In addition, electrons flow through the external conductor, which is the primary application of the galvanic cell.
As discussed under cell voltage, the electromotive force of the cell is the difference of the half-cell potentials, a measure of the relative ease of dissolution of the two electrodes into the electrolyte. The emf depends on both the electrodes and on the electrolyte, an indication that the emf is chemical in nature.
Half reactions and conventions
A half-cell contains a metal in two oxidation states. Inside an isolated half-cell, there is an oxidation-reduction (redox) reaction that is in chemical equilibrium, a condition written symbolically as follows (here, "M" represents a metal cation, an atom that has a charge imbalance due to the loss of electrons):
M^n+ + n e− ⇌ M
A galvanic cell consists of two half-cells, such that the electrode of one half-cell is composed of metal A, and the electrode of the other half-cell is composed of metal B; the redox reactions for the two separate half-cells are thus:
A^n+ + n e− ⇌ A
B^m+ + m e− ⇌ B
The overall balanced reaction is:
m A + n B^m+ → m A^n+ + n B
In other words, the metal atoms of one half-cell are oxidized while the metal cations of the other half-cell are reduced. By separating the metals in two half-cells, their reaction can be controlled in a way that forces transfer of electrons through the external circuit where they can do useful work.
The electrodes are connected with a metal wire in order to conduct the electrons that participate in the reaction.
In one half-cell, dissolved metal B cations combine with the free electrons that are available at the interface between the solution and the metal B electrode; these cations are thereby neutralized, causing them to precipitate from solution as deposits on the metal B electrode, a process known as plating.
This reduction reaction causes the free electrons throughout the metal B electrode, the wire, and the metal A electrode to be pulled into the metal B electrode. Consequently, electrons are wrested away from some of the atoms of the metal A electrode, as though the metal B cations were reacting directly with them; those metal A atoms become cations that dissolve into the surrounding solution.
As this reaction continues, the half-cell with the metal A electrode develops a positively charged solution (because the metal A cations dissolve into it), while the other half-cell develops a negatively charged solution (because the metal B cations precipitate out of it, leaving behind the anions); unabated, this imbalance in charge would stop the reaction. The solutions of the half-cells are connected by a salt bridge or a porous plate that allows ions to pass from one solution to the other, which balances the charges of the solutions and allows the reaction to continue.
By definition:
The anode is the electrode where oxidation (loss of electrons) takes place (metal A electrode); in a galvanic cell, it is the negative electrode, because when oxidation occurs, electrons are left behind on the electrode. These electrons then flow through the external circuit to the cathode (positive electrode) (while in electrolysis, an electric current drives electron flow in the opposite direction and the anode is the positive electrode).
The cathode is the electrode where reduction (gain of electrons) takes place (metal B electrode); in a galvanic cell, it is the positive electrode, as ions get reduced by taking up electrons from the electrode and plate out (while in electrolysis, the cathode is the negative terminal and attracts positive ions from the solution). In both cases, the statement 'the cathode attracts cations' is true.
By their nature, galvanic cells produce direct current.
The Weston cell has an anode composed of cadmium mercury amalgam, and a cathode composed of pure mercury. The electrolyte is a (saturated) solution of cadmium sulfate. The depolarizer is a paste of mercurous sulfate. When the electrolyte solution is saturated, the voltage of the cell is very reproducible; hence, in 1911, it was adopted as an international standard for voltage.
In the strictest sense, a battery is a set of two or more galvanic cells that are connected in series to form a single source of voltage.
For instance, a typical 12 V lead–acid battery has six galvanic cells connected in series, with the anodes composed of lead and cathodes composed of lead dioxide, both immersed in sulfuric acid.
Large central office battery rooms – in a telephone exchange to provide power for subscribers' land-line telephones, for instance – may have many cells, connected both in series and parallel: individual cells are connected in series as a battery of cells with some standard voltage, and banks of such serial batteries are themselves connected in parallel to provide adequate amperage to supply a typical peak demand for telephone connections.
Cell voltage
The voltage (electromotive force E°cell) produced by a galvanic cell can be estimated from the standard Gibbs free energy change ΔG° in the electrochemical reaction according to:
E^\circ_\text{cell} = -\frac{\Delta G^\circ}{zF}
where z is the number of electrons transferred in the balanced half reactions, and F is Faraday's constant. However, it can be determined more conveniently by the use of a standard potential table for the two half cells involved. The first step is to identify the two metals and their ions reacting in the cell. Then one looks up the standard electrode potential, E°, in volts, for each of the two half reactions. The standard potential of the cell is equal to the more positive E° value minus the more negative E° value.
For example, in the figure above the solutions are CuSO4 and ZnSO4. Each solution has a corresponding metal strip in it, and a salt bridge or porous disk connecting the two solutions and allowing ions to flow freely between the copper and zinc solutions. To calculate the standard potential one looks up copper and zinc's half reactions and finds:
Cu++ + 2 e− ⇌ Cu : E° = +0.34 V
Zn++ + 2 e− ⇌ Zn : E° = −0.76 V
Thus the overall reaction is:
Cu++ + Zn → Cu + Zn++
The standard potential for the reaction is then +0.34 V − (−0.76 V) = 1.10 V. The polarity of the cell is determined as follows. Zinc metal is more strongly reducing than copper metal because the standard (reduction) potential for zinc is more negative than that of copper. Thus, zinc metal will lose electrons to copper ions and develop a positive electrical charge. The equilibrium constant, K, for the cell is given by:
\ln K = \frac{zFE^\circ_\text{cell}}{RT}
where
F is the Faraday constant,
R is the gas constant, and
T is the absolute temperature in kelvins.
For the Daniell cell, with z = 2 and E°cell = 1.10 V, this gives ln K ≈ 85.6, i.e. K on the order of 10^37. Thus, at equilibrium, a few electrons are transferred, enough to cause the electrodes to be charged.
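A short numerical check of these relationships for the Daniell cell, using only the constants and standard potentials quoted above, might look as follows.

```python
import math

# Numerical check of the relationships above for the Daniell cell, using only
# the constants and standard potentials quoted in the text.

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol*K)
T = 298.15    # 25 C in kelvins
z = 2         # electrons transferred

e_copper = +0.34  # V, Cu++ + 2 e- -> Cu
e_zinc = -0.76    # V, Zn++ + 2 e- -> Zn

e_cell = e_copper - e_zinc       # more positive minus more negative
delta_g = -z * F * e_cell        # standard Gibbs free energy change, J/mol
ln_k = z * F * e_cell / (R * T)

print(f"E cell standard = {e_cell:.2f} V")
print(f"Delta G standard = {delta_g / 1000:.1f} kJ/mol")
print(f"K is roughly 10^{ln_k / math.log(10):.1f}")
```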
Actual half-cell potentials must be calculated by using the Nernst equation, as the solutes are unlikely to be in their standard states:
E_\text{half-cell} = E^\circ - \frac{RT}{zF}\ln Q
where Q is the reaction quotient. When the charges of the ions in the reaction are equal, this simplifies to:
E_\text{half-cell} = E^\circ + \frac{RT}{zF}\ln [\text{M}^{z+}]
where [M^z+] is the activity of the metal ion in solution. In practice, concentration is used in place of activity. The metal electrode is in its standard state so by definition has unit activity. The potential of the whole cell is obtained as the difference between the potentials for the two half-cells, so it depends on the concentrations of both dissolved metal ions. If the concentrations are the same, the Nernst equation is not needed and E_cell = E°_cell under the conditions assumed here.
The value of (RT/F) ln 10 is about 0.0592 V at 25 °C (298.15 K), so the half-cell potential will change by only 0.0592/z V if the concentration of a metal ion is increased or decreased by a factor of ten.
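The concentration dependence can be sketched numerically for the zinc half-cell; the activities below are assumed illustrative values, and the shift per factor of ten reproduces the 0.0592/z V figure just quoted.

```python
import math

# Sketch of the Nernst correction above for the zinc half-cell: how the
# potential shifts with metal-ion activity. The activities are assumed
# illustrative values; the shift per factor of ten matches 0.0592/z V.

F = 96485.0
R = 8.314
T = 298.15

def half_cell_potential(e_standard, z, ion_activity):
    """Nernst equation for M^z+ + z e- -> M with the metal at unit activity."""
    return e_standard + (R * T / (z * F)) * math.log(ion_activity)

e_zn_standard = -0.76  # V
for activity in (1.0, 0.1, 0.01):
    e = half_cell_potential(e_zn_standard, z=2, ion_activity=activity)
    print(f"[Zn++] = {activity}: E = {e:.4f} V")
```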
These calculations are based on the assumption that all chemical reactions are in equilibrium. When a current flows in the circuit, equilibrium conditions are not achieved and the cell voltage will usually be reduced by various mechanisms, such as the development of overpotentials. Also, since chemical reactions occur when the cell is producing power, the electrolyte concentrations change and the cell voltage is reduced. A consequence of the temperature dependency of standard potentials is that the voltage produced by a galvanic cell is also temperature dependent.
Galvanic corrosion
Galvanic corrosion is the electrochemical erosion of metals. Corrosion occurs when two dissimilar metals are in contact with each other in the presence of an electrolyte, such as salt water. This forms a galvanic cell, with hydrogen gas forming on the more noble (less active) metal. The resulting electrochemical potential then develops an electric current that electrolytically dissolves the less noble material. A concentration cell can be formed if the same metal is exposed to two different concentrations of electrolyte.
Types
Concentration cell
Electrolytic cell
Electrochemical cell
Lemon battery
Thermogalvanic cell
See also
Bioelectrochemical reactor
Resting potential
Bio-nano generator
Cell notation
Desulfation
Electrochemical engineering
Electrode potential
Electrohydrogenesis
Electrosynthesis
Enzymatic biofuel cell
Galvanic series
Isotope electrochemistry
List of battery types
Sacrificial anode
References
External links
How to build a galvanic cell battery from MiniScience.com
Galvanic Cell, an animation
Interactive animation of Galvanic Cell. Chemical Education Research Group, Iowa State University.
Electron transfer reactions and redox potentials in GALVANIc cells - what happens to the ions at the phase boundary (NERNST, FARADAY) (Video by SciFox on TIB AV-Portal)
Electrochemical concepts
Corrosion | Galvanic cell | [
"Chemistry",
"Materials_science"
] | 3,407 | [
"Metallurgy",
"Corrosion",
"Electrochemical concepts",
"Electrochemistry",
"Materials degradation"
] |
254,533 | https://en.wikipedia.org/wiki/Cathodic%20protection | Cathodic protection (CP; ) is a technique used to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. A simple method of protection connects the metal to be protected to a more easily corroded "sacrificial metal" to act as the anode. The sacrificial metal then corrodes instead of the protected metal. For structures such as long pipelines, where passive galvanic cathodic protection is not adequate, an external DC electrical power source is used to provide sufficient current.
Cathodic protection systems protect a wide range of metallic structures in various environments. Common applications are: steel water or fuel pipelines and steel storage tanks such as home water heaters; steel pier piles; ship and boat hulls; offshore oil platforms and onshore oil well casings; offshore wind farm foundations and metal reinforcement bars in concrete buildings and structures. Another common application is in galvanized steel, in which a sacrificial coating of zinc on steel parts protects them from rust.
Cathodic protection can, in some cases, prevent stress corrosion cracking.
History
Cathodic protection was first described by Sir Humphry Davy in a series of papers presented to the Royal Society in London in 1824. The first application, also in 1824, was to a Royal Navy vessel with a copper-sheathed hull. Sacrificial anodes made from iron attached to the copper sheath of the hull below the waterline dramatically reduced the corrosion rate of the copper. However, a side effect of cathodic protection was the increase in marine growth. Usually, copper when corroding releases copper ions which have an anti-fouling effect. Since excess marine growth affected the performance of the ship, the Royal Navy decided that it was better to allow the copper to corrode and have the benefit of reduced marine growth, so cathodic protection was not used further.
Davy was assisted in his experiments by his pupil Michael Faraday, who continued his research after Davy's death. In 1834, Faraday discovered the quantitative connection between corrosion weight loss and electric current and thus laid the foundation for the future application of cathodic protection.
Thomas Edison experimented with impressed current cathodic protection on ships in 1890, but was unsuccessful due to the lack of a suitable current source and anode materials. It would be 100 years after Davy's experiment before cathodic protection was used widely on oil pipelines in the United States; cathodic protection was applied to steel gas pipelines beginning in 1928 and more widely in the 1930s.
Types
Galvanic
In the application of passive cathodic protection, a galvanic anode, a piece of a more electrochemically "active" metal (more negative electrode potential), is attached to the vulnerable metal surface where it is exposed to an electrolyte. Galvanic anodes are selected because they have a more "active" voltage than the metal of the target structure (typically steel).
Concrete has a pH around 13. In this environment the steel reinforcement has a passive protective layer and remains largely stable. Galvanic systems are "constant potential" systems that aim to restore the concrete's natural protective environment by providing a high initial current to restore passivity. It then reverts to a lower sacrificial current, while harmful negative chloride ions migrate away from the steel and towards the positive anode. The anodes remain reactive through their lifetime (10–20 years typically), increasing current when the resistivity decreases due to corrosion hazards such as rainfall, temperature increases, or flooding. The reactive nature of these anodes makes them an efficient choice.
Unlike impressed current cathodic protection (ICCP) systems, constant polarization of the steel is not the goal; rather, the goal is restoration of the environment. Polarization of the target structure is caused by the electron flow from the anode to the cathode, so the two metals must have a good electrically conductive contact. The driving force for the cathodic protection current is the difference in electrode potential between the anode and the cathode. During the initial phase of high current, the potential of the steel surface is polarized (pushed) more negative, protecting the steel, while hydroxide ion generation at the steel surface and ionic migration restore the concrete environment.
Over time the galvanic anode continues to corrode, consuming the anode material until eventually it must be replaced.
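Anode consumption can be estimated roughly from Faraday's law. In the sketch below, the anode mass, protection current, and utilization factor are assumed example values; real designs also account for alloy composition and anode efficiency.

```python
# Rough, illustrative estimate of galvanic anode consumption via Faraday's law.
# The anode mass, protection current, and utilization factor are assumed example
# values; real designs also account for alloy composition and anode efficiency.

F = 96485.0            # Faraday constant, C/mol
MOLAR_MASS_ZN = 65.38  # g/mol
Z_ZN = 2               # electrons per zinc atom oxidized

def zinc_anode_life_years(anode_mass_kg, current_amps, utilization=0.85):
    """Approximate service life of a zinc anode delivering a steady current."""
    charge_per_gram = Z_ZN * F / MOLAR_MASS_ZN                            # C/g
    usable_charge = anode_mass_kg * 1000 * utilization * charge_per_gram  # C
    return usable_charge / current_amps / (3600 * 24 * 365.25)            # years

print(f"{zinc_anode_life_years(anode_mass_kg=5.0, current_amps=0.05):.1f} years")
```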
Galvanic or sacrificial anodes are made in various shapes and sizes using alloys of zinc, magnesium, and aluminum. ASTM International publishes standards on the composition and manufacturing of galvanic anodes.
In order for galvanic cathodic protection to work, the anode must possess a lower (that is, more negative) electrode potential than that of the cathode (the target structure to be protected). The table below shows a simplified galvanic series which is used to select the anode metal. The anode must be chosen from a material that is lower on the list than the material to be protected.
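The selection rule can be expressed as a simple filter over a galvanic series. The potentials in this sketch are rough, assumed values versus a copper/copper-sulfate reference, included only to illustrate the comparison; an actual design would use the table referred to above and the relevant standard.

```python
# Illustrative selection rule: a candidate galvanic anode must be more
# electronegative (more "active") than the structure it protects. The
# potentials are rough, assumed values versus a Cu/CuSO4 reference,
# included only to show the comparison.

GALVANIC_SERIES_V = {
    "magnesium alloy": -1.60,
    "zinc": -1.10,
    "aluminum alloy": -1.05,
    "carbon steel": -0.60,
}

def suitable_anodes(structure_metal, series=GALVANIC_SERIES_V):
    """Return candidate anode metals more negative than the structure metal."""
    structure_potential = series[structure_metal]
    return [metal for metal, potential in series.items()
            if potential < structure_potential]

print(suitable_anodes("carbon steel"))
# expected: ['magnesium alloy', 'zinc', 'aluminum alloy']
```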
Impressed current cathodic protection (ICCP)
In some cases, impressed current cathodic protection (ICCP) systems are used. These consist of anodes connected to a DC power source, often a transformer-rectifier connected to AC power. In the absence of an AC supply, alternative power sources may be used, such as solar panels, wind power or gas powered thermoelectric generators.
Anodes for ICCP systems are available in a variety of shapes and sizes. Common anodes are tubular and solid rod shapes or continuous ribbons of various materials. These include high silicon, cast iron, graphite, mixed metal oxide (MMO), platinum and niobium coated wire and other materials.
For pipelines, anodes are arranged in groundbeds either distributed or in a deep vertical hole depending on several design and field condition factors including current distribution requirements.
Cathodic protection transformer-rectifier units are often custom manufactured and equipped with a variety of features, including remote monitoring and control, integral current interrupters and various type of electrical enclosures. The output DC negative terminal is connected to the structure to be protected by the cathodic protection system. The rectifier output DC positive cable is connected to the anodes. The AC power cable is connected to the rectifier input terminals.
The output of the ICCP system should be optimized to provide enough current to provide protection to the target structure. Some cathodic protection transformer-rectifier units are designed with taps on the transformer windings and jumper terminals to select the voltage output of the ICCP system. Cathodic protection transformer-rectifier units for water tanks and used in other applications are made with solid state circuits to automatically adjust the operating voltage to maintain the optimum current output or structure-to-electrolyte potential. Analog or digital meters are often installed to show the operating voltages (DC and sometimes AC) and current output. For shore structures and other large complex target structures, ICCP systems are often designed with multiple independent zones of anodes with separate cathodic protection transformer-rectifier circuits.
Hybrid systems
Hybrid systems use a combination of the aforementioned systems to achieve some of the benefits of both, utilizing the restorative capabilities of ICCP systems but maintaining the reactive, lower cost, and easier-to-maintain nature of a galvanic anode.
The system is made up of wired galvanic anodes installed in regularly spaced arrays, which are then initially powered for a short period to restore the concrete and to power ionic migration. The power supply is then taken away and the anodes are simply attached to the steel as a galvanic system. More powered phases can be administered if needed. Like galvanic systems, corrosion rate monitoring from polarization tests and half-cell potential mapping can be used to measure corrosion. Polarization is not the goal for the life of the system.
Applications
Hot water tank / Water heater
This technology is also used to protect water heaters. The electrons supplied by the impressed current anode (composed of titanium and covered with MMO) prevent the inside of the tank from rusting.
In order to be recognized as effective, these anodes must comply with certain standards: a cathodic protection system is considered efficient when its potential reaches or exceeds the limits established by the cathodic protection criteria. The criteria used come from the standard NACE SP0388-2007 (formerly RP0388-2001) of NACE, the National Association of Corrosion Engineers.
Pipelines
Hazardous product pipelines are routinely protected by a coating supplemented with cathodic protection. An impressed current cathodic protection system (ICCP) for a pipeline consists of a DC power source, often an AC powered transformer rectifier and an anode, or array of anodes buried in the ground (the anode groundbed).
The DC power source would typically have a DC output of up to 50 amperes and 50 volts, but this depends on several factors, such as the size of the pipeline and coating quality. The positive DC output terminal would be connected via cables to the anode array, while another cable would connect the negative terminal of the rectifier to the pipeline, preferably through junction boxes to allow measurements to be taken.
Anodes can be installed in a groundbed consisting of a vertical hole backfilled with conductive coke (a material that improves the performance and life of the anodes) or laid in a prepared trench, surrounded by conductive coke and backfilled. The choice of groundbed type and size depends on the application, location and soil resistivity.
The DC cathodic protection current is then adjusted to the optimum level after conducting various tests including measurements of pipe-to-soil potentials or electrode potential.
It is sometimes more economically viable to protect a pipeline using galvanic (sacrificial) anodes. This is often the case on smaller diameter pipelines of limited length. Galvanic anodes rely on the galvanic series potentials of the metals to drive cathodic protection current from the anode to the structure being protected.
Water pipelines of various pipe materials are also provided with cathodic protection where owners determine the cost is reasonable for the expected pipeline service life extension attributed to the application of cathodic protection.
Ships and boats
Cathodic protection on ships is often implemented by galvanic anodes attached to the hull and ICCP for larger vessels. Since ships are regularly removed from the water for inspections and maintenance, it is a simple task to replace the galvanic anodes.
Galvanic anodes are generally shaped to reduced drag in the water and fitted flush to the hull to also try to minimize drag.
Smaller vessels, with non-metallic hulls, such as yachts, are equipped with galvanic anodes to protect areas such as outboard motors. As with all galvanic cathodic protection, this application relies on a solid electrical connection between the anode and the item to be protected.
For ICCP on ships, the anodes are usually constructed of a relatively inert material such as platinized titanium. A DC power supply is provided within the ship and the anodes mounted on the outside of the hull. The anode cables are introduced into the ship via a compression seal fitting and routed to the DC power source. The negative cable from the power supply is simply attached to the hull to complete the circuit. Ship ICCP anodes are flush-mounted, minimizing the effects of drag on the ship, and located a minimum 5 ft below the light load line in an area to avoid mechanical damage. The current density required for protection is a function of velocity and considered when selecting the current capacity and location of anode placement on the hull.
Some ships may require specialist treatment, for example aluminum hulls with steel fixtures will create an electrochemical cell where the aluminum hull can act as a galvanic anode and corrosion is enhanced. In cases like this, aluminum or zinc galvanic anodes can be used to offset the potential difference between the aluminum hull and the steel fixture. If the steel fixtures are large, several galvanic anodes may be required, or even a small ICCP system.
Marine
Marine cathodic protection covers many areas: jetties, harbors, and offshore structures. The variety of different types of structure leads to a variety of systems to provide protection. Galvanic anodes are favored, but ICCP can also often be used. Because of the wide variety of structure geometry, composition, and architecture, specialized firms are often required to engineer structure-specific cathodic protection systems. Sometimes marine structures require retroactive modification to be effectively protected.
Steel in concrete
The application to concrete reinforcement is slightly different in that the anodes and reference electrodes are usually embedded in the concrete at the time of construction when the concrete is being poured. The usual technique for concrete buildings, bridges and similar structures is to use ICCP, but there are systems available that use the principle of galvanic cathodic protection as well, although in the UK at least, the use of galvanic anodes for atmospherically exposed reinforced concrete structures is considered experimental.
For ICCP, the principle is the same as any other ICCP system. However, in a typical atmospherically exposed concrete structure such as a bridge, there will be many more anodes distributed through the structure as opposed to an array of anodes as used on a pipeline. This makes for a more complicated system and usually an automatically controlled DC power source is used, possibly with an option for remote monitoring and operation. For buried or submerged structures, the treatment is similar to that of any other buried or submerged structure.
Galvanic systems offer the advantage of being easier to retrofit and do not need any control systems as ICCP does.
For pipelines constructed from pre-stressed concrete cylinder pipe (PCCP), the techniques used for cathodic protection are generally as for steel pipelines except that the applied potential must be limited to prevent damage to the prestressing wire.
The steel wire in a PCCP pipeline is stressed to the point that any corrosion of the wire can result in failure. An additional problem is that any excess hydrogen generated as a result of an excessively negative potential can cause hydrogen embrittlement of the wire, also resulting in failure. The failure of too many wires will result in catastrophic failure of the PCCP. To implement ICCP therefore requires very careful control to ensure satisfactory protection. A simpler option is to use galvanic anodes, which are self-limiting and need no control.
Internal cathodic protection
Vessels, pipelines and tanks (including ballast tanks) which are used to store or transport liquids can also be protected from corrosion on their internal surfaces by the use of cathodic protection. ICCP and galvanic systems can be used. A common application of internal cathodic protection is water storage tanks and power plant shell and tube heat exchangers.
Galvanized steel
Galvanizing generally refers to hot-dip galvanizing, which is a way of coating steel with a layer of metallic zinc. Lead or antimony is often added to the molten zinc bath, and other metals have also been studied. Galvanized coatings are quite durable in most environments because they combine the barrier properties of a coating with some of the benefits of cathodic protection. If the zinc coating is scratched or otherwise locally damaged and steel is exposed, the surrounding areas of zinc coating form a galvanic cell with the exposed steel and protect it from corrosion. This is a form of localized cathodic protection - the zinc acts as a sacrificial anode.
Galvanizing, while using the electrochemical principle of cathodic protection, is not actually cathodic but sacrificial protection. In the case of galvanizing, only areas very close to the zinc are protected. Hence, a larger area of bare steel would only be protected around the edges.
Automobiles
Several companies market electronic devices claiming to mitigate corrosion for automobiles and trucks. Corrosion control professionals find they do not work. There is no peer reviewed scientific testing and validation supporting the use of the devices. In 1996 the FTC ordered David McCready, a person that sold devices claiming to protect cars from corrosion, to pay restitution and banned the names "Rust Buster" and "Rust Evader."
Under section 74.01(1) (b) of the Competition Act Canada, no performance claims about a product or its effectiveness can be done unless it can be proven that they are based on adequate and proper tests. The Competition Bureau Canada proceeded to investigate several companies selling electronic corrosion devices in Canada. Some were forced to withdraw their product from the market as they could not support their claims scientifically. However, at least two companies under investigation were able to satisfy the Competition Bureau that their claims of protecting vehicles against corrosion were based on adequate and proper testing under section 74.01(1) (b) of the Competition Act.
In response to the Competition Bureau's investigation into its distribution of the Impressed Current Cathodic Protection module in the Canadian market, the Auto Saver Systems, Inc. submitted its module to laboratory testing in an ISO-certified lab. The test methodology consisted of the ASTM B117 Standard Practice for Operating Salt Spray (Fog) Apparatus which a corrosion expert, retained by the Competition Bureau, adapted in order to replicate the operational environment of an automobile. The test differed from the ASTM B117 insofar as the galvanized automotive steel panels were not entirely exposed to the salt spray. Instead, only the bare steel exposed by a 12-inch scratch at one end of the panel was exposed to the salt spray while the remainder of the panel was kept in a completely dry condition.
The test results, as reported to and validated by the Competition Bureau, demonstrated that the Auto Saver module being tested was able to cause a shift, in the negative direction, in the electrochemical corrosion potential of the iron in the steel panels, proving the attainment of cathodic protection and the resulting slowdown of the oxidation process of the iron (rust formation). A visual inspection of both galvanized and non-galvanized test panels showed a significant reduction in the appearance of rust compared to the control panels (not connected to the protection module), consistent with the observed cathodic shift in the electrochemical potential measurements obtained on the panels during the tests.
A second company, Canadian Auto Preservation Inc., was also able to satisfy the Competition Bureau proving that the testing of its Electromagnetically Induced Corrosion Control Technology (EICCT) was adequate and proper. The testing of that module, which relied on a methodology very similar to that used by Auto Saver, also produced a shift, in the negative direction, in the electrochemical corrosion potential of the iron galvanized automotive steel panels, consistent with the attainment of cathodic protection. A peer review article alluding to the efficacy of the Final Coat technology in inhibiting corrosion on automobiles was published in 2017.
The results achieved by both these electronic corrosion inhibitor devices point to the need for further research and testing in order to better understand how these devices are able to generate a shift in the potential of the metal panels, i.e., a cathodic effect, in the absence of a continuous electrolytic path required to close the electrical circuit between the positive and the negative poles, in accordance with accepted principles of cathodic protection.
Testing
Electrode potential is measured with reference electrodes. Copper-copper sulphate electrodes are used for structures in contact with soil or fresh water. Silver/silver chloride/seawater electrodes or pure zinc electrodes are used for seawater applications. The methods are described in EN 13509:2003 and NACE TM0497 along with the sources of error in the voltage that appears on the display of the meter. Interpretation of electrode potential measurements to determine the potential at the interface between the anode of the corrosion cell and the electrolyte requires training and cannot be expected to match the accuracy of measurements done in laboratory work.
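As an illustration of how such readings might be screened, the sketch below compares measured potentials against the commonly cited -850 mV (vs. Cu/CuSO4) pipeline criterion. The threshold, the test-point names, and the readings are assumptions; the applicable criterion and any IR-drop corrections must come from the governing standard.

```python
# Minimal sketch of screening structure-to-electrolyte potential readings against
# a protection criterion. The -850 mV (vs. Cu/CuSO4) threshold is a commonly cited
# pipeline criterion, used here as an assumption; the applicable criterion and any
# IR-drop correction must come from the governing standard, not from this sketch.

CRITERION_V = -0.850  # volts vs. Cu/CuSO4 reference electrode (assumed criterion)

def screen_potentials(readings_v, criterion=CRITERION_V):
    """Return (location, potential) pairs that are less negative than the criterion."""
    return [(loc, v) for loc, v in readings_v if v > criterion]

readings = [("TP-01", -0.92), ("TP-02", -0.87), ("TP-03", -0.78), ("TP-04", -0.83)]
for location, potential in screen_potentials(readings):
    print(f"{location}: {potential:.3f} V does not meet {CRITERION_V} V")
```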
Problems
Production of hydrogen
A side effect of improperly applied cathodic protection is the production of atomic hydrogen, leading to its absorption in the protected metal and subsequent hydrogen embrittlement of welds and materials with high hardness. Under normal conditions, the atomic hydrogen will combine at the metal surface to create hydrogen gas, which cannot penetrate the metal. Hydrogen atoms, however, are small enough to pass through the crystalline steel structure, and can lead in some cases to hydrogen embrittlement.
Cathodic disbonding
This is a process of disbondment of protective coatings from the protected structure (cathode) due to the formation of hydrogen ions over the surface of the protected material (cathode). Disbonding can be exacerbated by an increase in alkali ions and an increase in cathodic polarization. The degree of disbonding is also reliant on the type of coating, with some coatings affected more than others. Cathodic protection systems should be operated so that the structure does not become excessively polarized, since this also promotes disbonding due to excessively negative potentials. Cathodic disbonding occurs rapidly in pipelines that contain hot fluids because the process is accelerated by heat flow.
Cathodic shielding
Effectiveness of cathodic protection (CP) systems on steel pipelines can be impaired by the use of solid film backed dielectric coatings such as polyethylene tapes, shrinkable pipeline sleeves, and factory applied single or multiple solid film coatings. This phenomenon occurs because of the high electrical resistivity of these film backings. Protective electric current from the cathodic protection system is blocked or shielded from reaching the underlying metal by the highly resistive film backing. Cathodic shielding was first defined in the 1980s as being a problem, and technical papers on the subject have been regularly published since then.
A 1999 report concerning a spill from a Saskatchewan crude oil line contains an excellent definition of the cathodic shielding problem:
"The triple situation of disbondment of the (corrosion) coating, the dielectric nature of the coating and the unique electrochemical environment established under the exterior coating, which acts as a shield to the electrical CP current, is referred to as CP shielding. The combination of tenting and disbondment permits a corrosive environment around the outside of the pipe to enter into the void between the exterior coating and the pipe surface. With the development of this CP shielding phenomenon, impressed current from the CP system cannot access exposed metal under the exterior coating to protect the pipe surface from the consequences of an aggressive corrosive environment. The CP shielding phenomenon induces changes in the potential gradient of the CP system across the exterior coating, which are further pronounced in areas of insufficient or sub-standard CP current emanating from the pipeline's CP system. This produces an area on the pipeline of insufficient CP defense against metal loss aggravated by an exterior corrosive environment."
Cathodic shielding is referenced in a number of the standards listed below. Newly issued USDOT regulation Title 49 CFR 192.112, in the section for Additional design requirements for steel pipe using alternative maximum allowable operating pressure requires that "The pipe must be protected against external corrosion by a non-shielding coating" (see coatings section on standard). Also, the NACE SP0169:2007 standard defines shielding in section 2, cautions against the use of materials that create electrical shielding in section 4.2.3, cautions against use of external coatings that create electrical shielding in section 5.1.2.3, and instructs readers to take 'appropriate action' when the effects of electrical shielding of cathodic protection current are detected on an operating pipeline in section 10.9.
Certification
In many countries, having a related CP certificate is recommended or, in some cases, mandatory for doing a CP job, from field test to design. There are different certification bodies and evaluation methods, but two of them are more common: AMPP certification and ISO 15257.
AMPP cathodic protection certification has four levels: Tester, Technician, Technologist, and Specialist.
ISO 15257 has five levels: Four levels close to the AMPP definition, plus another level for those who have made a scientific contribution.
Countries
France
The main center for cathodic protection certification in France and some French language countries is CEFRACOR.
Germany
fkks cert GmbH (owned by fkks: Fachverband Kathodischer Korrosionsschutz e.V., trans. German Professional Association for Cathodic Protection specialists) is an accredited certification scheme in the field of cathodic corrosion protection.
Italy
Three different bodies will provide cathodic protection certificates based on ISO 15257: APCERT, CICPND, and RINA.
UK
Institute of Corrosion (ICorr), Corrosion Prevention Association (CPA), and TWI offer a Cathodic Protection, Training, Assessment, and Certification Scheme that evaluates the competence levels of cathodic protection personnel.
US
AMPP is the main certification body. Moreover, AMPP is highly active and known in the Middle East.
Standards
49 CFR 192.451 - Requirements for Corrosion Control - Transportation of natural and other gas by pipeline: US minimum federal safety standards
49 CFR 195.551 - Requirements for Corrosion Control - Transportation of hazardous liquids by pipelines: US minimum federal safety standards
AS 2832 - Australian Standards for Cathodic Protection
ASME B31Q 0001-0191
ASTM G 8, G 42 - Evaluating Cathodic Disbondment resistance of coatings
DNV-RP-B401 - Cathodic Protection Design - Det Norske Veritas
EN 12068:1999 - Cathodic protection. External organic coatings for the corrosion protection of buried or immersed steel pipelines used in conjunction with cathodic protection. Tapes and shrinkable materials
EN 12473:2000 - General principles of cathodic protection in sea water
EN 12474:2001 - Cathodic protection for submarine pipelines
EN 12495:2000 - Cathodic protection for fixed steel offshore structures
EN 12499:2003 - Internal cathodic protection of metallic structures
EN 12696:2012 - Cathodic protection of steel in concrete
EN 12954:2001 - Cathodic protection of buried or immersed metallic structures. General principles and application for pipelines
EN 13173:2001 - Cathodic protection for steel offshore floating structures
EN 13174:2001 - Cathodic protection for "Harbour Installations".
EN 13509:2003 - Cathodic protection measurement techniques
EN 13636:2004 - Cathodic protection of buried metallic tanks and related piping
EN 14505:2005 - Cathodic protection of complex structures
EN 15112:2006 - External cathodic protection of well casing
EN 15280-2013 - Evaluation of a.c. corrosion likelihood of buried pipelines
EN 50162:2004 - Protection against corrosion by stray current from direct current systems
BS 7361-1:1991 - Cathodic Protection
NACE SP0169:2013 - Control of External Corrosion on Underground or Submerged Metallic Piping Systems
NACE TM 0497 - Measurement Techniques Related to Criteria for Cathodic Protection on Underground or Submerged Metallic Piping Systems
See also
Anodic protection
Cathodic modification
Corrosion engineering
Redox
Wetting voltage
References
Publications and further reading
A.W. Peabody, Peabody's Control of Pipeline Corrosion, 2nd Ed., 2001, NACE International.
Davy, H., Phil. Trans. Roy. Soc., 114,151,242 and 328 (1824)
Ashworth V., Corrosion Vol. 2, 3rd Ed., 1994,
Baeckmann, Schwenck & Prinz, Handbook of Cathodic Corrosion Protection, 3rd Edition 1997.
Scherer, L. F., Oil and Gas Journal, (1939)
ASTM B843 - 07 Standard Specification for Magnesium Alloy Anodes for Cathodic Protection
ASTM B418 - 09 Standard Specification for Cast and Wrought Galvanic Zinc Anodes
Roberge, Pierre R, Handbook of Corrosion Engineering 1999
NACE International Paper 09043 Coatings Used in Conjunction with Cathodic Protection - Shielding vs Non-shielding Coatings
NACE International TM0497-2002, Measurement Techniques Related to Criteria for Cathodic Protection on Underground or Submerged Metallic Piping Systems
Transportation Safety Board of Canada, Report Number P99H0021, 1999
Covino, Bernard S, et al., Performance of Zinc Anodes for Cathodic Protection of Reinforced Concrete Bridges, Oregon Dept of Transport & Federal Highway Administration, March 2002
UK Highways Agency BA 83/02; Design Manual for Roads and Bridges, Vol.3, Sect.3, Part 3, Cathodic Protection For Use In Reinforced Concrete Highway Structures. (Retrieved 2011-01-04)
Daily, Steven F, Using Cathodic Protection to Control Corrosion of Reinforced Concrete Structures in Marine Environments (published in Port Technology International)
Gummow, RA, Corrosion Control of Municipal Infrastructure Using Cathodic Protection. NACE Conference Oct 1999, NACE Materials Performance Feb 2000
EN 12473:2000 - General principles of cathodic protection in sea water
EN 12499:2003 - Internal cathodic protection of metallic structures
NACE RP0100-2000 Cathodic Protection of Prestressed Concrete Cylinder Pipelines
BS 7361-1:1991 - Cathodic Protection
SAE International Paper No. 912270 Robert Baboian, State of the Art in Automobile Cathodic Protection, Proceedings of the 5th Automotive Corrosion and Prevention Conference, P-250, Warrendale, PA, USA, August 1991
US Army Corps of Engineers, Engineering manual 1110-2-2704, 12 July 2004
External links
NACE International (formerly the National Association of Corrosion Engineers) - Introduction to Cathodic Protection
Institute of Corrosion - A technical society based in the UK
Glossary - A comprehensive glossary of cathodic protection and corrosion terms
Cathodic Protection 101 - Cathodic Protection 101, a beginner's guide
National Physics Laboratory - Short introductory paper on cathodic protection
USDOT CFR 192.112 - USDOT regulation CFR 192.112 requiring the use of non-shielding corrosion coating systems on steel pipe using alternative maximum allowable operating pressure.
Chemical processes
Corrosion prevention
Hydrogen technologies | Cathodic protection | [
"Chemistry"
] | 6,292 | [
"Corrosion prevention",
"Corrosion",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
254,777 | https://en.wikipedia.org/wiki/Isometry | In mathematics, an isometry (or congruence, or congruent transformation) is a distance-preserving transformation between metric spaces, usually assumed to be bijective. The word isometry is derived from the Ancient Greek: ἴσος isos meaning "equal", and μέτρον metron meaning "measure". If the transformation is from a metric space to itself, it is a kind of geometric transformation known as a motion.
Introduction
Given a metric space (loosely, a set and a scheme for assigning distances between elements of the set), an isometry is a transformation which maps elements to the same or another metric space such that the distance between the image elements in the new metric space is equal to the distance between the elements in the original metric space.
In a two-dimensional or three-dimensional Euclidean space, two geometric figures are congruent if they are related by an isometry;
the isometry that relates them is either a rigid motion (translation or rotation), or a composition of a rigid motion and a reflection.
Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space $M$ involves an isometry from $M$ into a quotient set of the space of Cauchy sequences on $M.$
The original space is thus isometrically isomorphic to a subspace of a complete metric space, and it is usually identified with this subspace.
Other embedding constructions show that every metric space is isometrically isomorphic to a closed subset of some normed vector space and that every complete metric space is isometrically isomorphic to a closed subset of some Banach space.
An isometric surjective linear operator on a Hilbert space is called a unitary operator.
Definition
Let $X$ and $Y$ be metric spaces with metrics (e.g., distances) $d_X$ and $d_Y.$ A map $f\colon X\to Y$ is called an isometry or distance-preserving map if for any $a,b\in X$,
$$d_Y\bigl(f(a),f(b)\bigr)=d_X(a,b).$$
An isometry is automatically injective; otherwise two distinct points, $a$ and $b$, could be mapped to the same point, thereby contradicting the coincidence axiom of the metric $d$, i.e., $d(a,b)=0$ if and only if $a=b$. This proof is similar to the proof that an order embedding between partially ordered sets is injective. Clearly, every isometry between metric spaces is a topological embedding.
A global isometry, isometric isomorphism or congruence mapping is a bijective isometry. Like any other bijection, a global isometry has a function inverse.
The inverse of a global isometry is also a global isometry.
Two metric spaces X and Y are called isometric if there is a bijective isometry from X to Y.
The set of bijective isometries from a metric space to itself forms a group with respect to function composition, called the isometry group.
There is also the weaker notion of path isometry or arcwise isometry:
A path isometry or arcwise isometry is a map which preserves the lengths of curves; such a map is not necessarily an isometry in the distance preserving sense, and it need not necessarily be bijective, or even injective. This term is often abridged to simply isometry, so one should take care to determine from context which type is intended.
Examples
Any reflection, translation and rotation is a global isometry on Euclidean spaces. See also Euclidean group.
The map $x\mapsto |x|$ in $\mathbb{R}$ is a path isometry but not a (general) isometry. Note that unlike an isometry, this path isometry does not need to be injective.
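A minimal numerical sketch of this distinction (an illustrative addition, not part of the original article; the sample points and tolerance are arbitrary): a plane rotation preserves every pairwise Euclidean distance, while the absolute-value map collapses the distance between $x$ and $-x$.

```python
import numpy as np

def is_distance_preserving(f, points, tol=1e-9):
    """Check whether f preserves Euclidean distances between all pairs of points."""
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d_before = np.linalg.norm(points[i] - points[j])
            d_after = np.linalg.norm(f(points[i]) - f(points[j]))
            if abs(d_before - d_after) > tol:
                return False
    return True

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation of the plane

pts_2d = [np.array(p, dtype=float) for p in [(0, 0), (1, 2), (-3, 1), (2, -2)]]
print(is_distance_preserving(lambda x: R @ x, pts_2d))   # True: rotation is an isometry

pts_1d = [np.array([x]) for x in (-2.0, -1.0, 1.0, 2.0)]
print(is_distance_preserving(np.abs, pts_1d))            # False: |x| sends -1 and 1 to the same point
```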
Isometries between normed spaces
The following theorem is due to Mazur and Ulam.
Definition: The midpoint of two elements $x$ and $y$ in a vector space is the vector $\tfrac{1}{2}(x+y)$.
Linear isometry
Given two normed vector spaces $V$ and $W,$ a linear isometry is a linear map $A\colon V\to W$ that preserves the norms:
$$\|Av\| = \|v\|$$
for all $v\in V.$
Linear isometries are distance-preserving maps in the above sense.
They are global isometries if and only if they are surjective.
In an inner product space, the above definition reduces to
$$\langle v,v\rangle = \langle Av,Av\rangle$$
for all $v,$ which is equivalent to saying that $A^{\dagger}A = \operatorname{Id}_V.$ This also implies that isometries preserve inner products, as
$$\langle Au, Av\rangle = \langle u, A^{\dagger}Av\rangle = \langle u, v\rangle.$$
Linear isometries are not always unitary operators, though, as those require additionally that $V = W$ and $AA^{\dagger} = \operatorname{Id}_V$ (i.e. the domain and codomain coincide and $A$ defines a coisometry).
By the Mazur–Ulam theorem, any isometry of normed vector spaces over $\mathbb{R}$ is affine.
A linear isometry also necessarily preserves angles, therefore a linear isometry transformation is a conformal linear transformation.
Examples
A linear map from $\mathbb{C}^n$ to itself is an isometry (for the dot product) if and only if its matrix is unitary.
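A short numerical check of this statement (an illustrative sketch, not from the article; the dimension and random seed are arbitrary): a unitary matrix obtained from a QR decomposition preserves the norm of every vector, while a generic matrix does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a unitary matrix as the Q factor of a random complex matrix.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)            # Q satisfies Q.conj().T @ Q == I

v = rng.normal(size=3) + 1j * rng.normal(size=3)
print(np.allclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))   # True: norm preserved
print(np.allclose(np.linalg.norm(A @ v), np.linalg.norm(v)))   # False in general
```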
Manifold
An isometry of a manifold is any (smooth) mapping of that manifold into itself, or into another manifold that preserves the notion of distance between points.
The definition of an isometry requires the notion of a metric on the manifold; a manifold with a (positive-definite) metric is a Riemannian manifold, one with an indefinite metric is a pseudo-Riemannian manifold. Thus, isometries are studied in Riemannian geometry.
A local isometry from one (pseudo-)Riemannian manifold to another is a map which pulls back the metric tensor on the second manifold to the metric tensor on the first. When such a map is also a diffeomorphism, such a map is called an isometry (or isometric isomorphism), and provides a notion of isomorphism ("sameness") in the category Rm of Riemannian manifolds.
Definition
Let $(M,g)$ and $(N,h)$ be two (pseudo-)Riemannian manifolds, and let $f\colon M\to N$ be a diffeomorphism. Then $f$ is called an isometry (or isometric isomorphism) if
$$g = f^{*}h,$$
where $f^{*}h$ denotes the pullback of the rank (0, 2) metric tensor $h$ by $f$.
Equivalently, in terms of the pushforward $f_{*},$ we have that for any two vector fields $v,w$ on $M$ (i.e. sections of the tangent bundle $TM$),
$$h\bigl(f_{*}v, f_{*}w\bigr) = g(v,w).$$
If $f$ is a local diffeomorphism such that $g = f^{*}h,$ then $f$ is called a local isometry.
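For concreteness (an added worked form, not taken from the article), the pullback condition $g = f^{*}h$ can be written in local coordinates as

$$g_{ij}(x) \;=\; \frac{\partial f^{a}}{\partial x^{i}}\,\frac{\partial f^{b}}{\partial x^{j}}\; h_{ab}\bigl(f(x)\bigr),$$

i.e., the components of $g$ at a point $x$ are obtained by contracting the components of $h$ at $f(x)$ with the Jacobian of $f$.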
Properties
A collection of isometries typically form a group, the isometry group. When the group is a continuous group, the infinitesimal generators of the group are the Killing vector fields.
The Myers–Steenrod theorem states that every isometry between two connected Riemannian manifolds is smooth (differentiable). A second form of this theorem states that the isometry group of a Riemannian manifold is a Lie group.
Symmetric spaces are important examples of Riemannian manifolds that have isometries defined at every point.
Generalizations
Given a positive real number ε, an ε-isometry or almost isometry (also called a Hausdorff approximation) is a map $f\colon X\to Y$ between metric spaces such that
for $x,x'\in X$ one has $\bigl|d_Y(f(x),f(x')) - d_X(x,x')\bigr| < \varepsilon,$ and
for any point $y\in Y$ there exists a point $x\in X$ with $d_Y(y,f(x)) < \varepsilon.$
That is, an ε-isometry preserves distances to within ε and leaves no element of the codomain further than ε away from the image of an element of the domain. Note that ε-isometries are not assumed to be continuous.
The restricted isometry property characterizes nearly isometric matrices for sparse vectors.
Quasi-isometry is yet another useful generalization.
One may also define an element $v$ in an abstract unital C*-algebra to be an isometry:
$v$ is an isometry if and only if $v^{*}v = 1.$
Note that, as mentioned in the introduction, this is not necessarily a unitary element, because one does not in general have that a left inverse is a right inverse.
On a pseudo-Euclidean space, the term isometry means a linear bijection preserving magnitude. See also Quadratic spaces.
See also
Beckman–Quarles theorem
The second dual of a Banach space as an isometric isomorphism
Euclidean plane isometry
Flat (geometry)
Homeomorphism group
Involution
Isometry group
Motion (geometry)
Myers–Steenrod theorem
3D isometries that leave the origin fixed
Partial isometry
Scaling (geometry)
Semidefinite embedding
Space group
Symmetry in mathematics
Footnotes
References
Bibliography
Equivalence (mathematics)
Functions and mappings
Metric geometry
Riemannian geometry
Symmetry | Isometry | [
"Physics",
"Mathematics"
] | 1,672 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Geometry",
"Symmetry"
] |
255,047 | https://en.wikipedia.org/wiki/Nondestructive%20testing | Nondestructive testing (NDT) is any of a wide group of analysis techniques used in science and technology industry to evaluate the properties of a material, component or system without causing damage.
The terms nondestructive examination (NDE), nondestructive inspection (NDI), and nondestructive evaluation (NDE) are also commonly used to describe this technology.
Because NDT does not permanently alter the article being inspected, it is a highly valuable technique that can save both money and time in product evaluation, troubleshooting, and research. The six most frequently used NDT methods are eddy-current, magnetic-particle, liquid penetrant, radiographic, ultrasonic, and visual testing. NDT is commonly used in forensic engineering, mechanical engineering, petroleum engineering, electrical engineering, civil engineering, systems engineering, aeronautical engineering, medicine, and art. Innovations in the field of nondestructive testing have had a profound impact on medical imaging, including on echocardiography, medical ultrasonography, and digital radiography.
Non-destructive testing techniques allow the investigator to carry out examinations without compromising the integrity of the engineering specimen under observation, while providing a detailed view of surface and structural discontinuities and obstructions. The personnel carrying out these methods require specialized NDT training, as the work involves handling delicate equipment and subjective interpretation of inspection results.
NDT methods rely upon use of electromagnetic radiation, sound and other signal conversions to examine a wide variety of articles (metallic and non-metallic, food-product, artifacts and antiquities, infrastructure) for integrity, composition, or condition with no alteration of the article undergoing examination. Visual inspection (VT), the most commonly applied NDT method, is quite often enhanced by the use of magnification, borescopes, cameras, or other optical arrangements for direct or remote viewing. The internal structure of a sample can be examined for a volumetric inspection with penetrating radiation (RT), such as X-rays, neutrons or gamma radiation. Sound waves are utilized in the case of ultrasonic testing (UT), another volumetric NDT method – the mechanical signal (sound) being reflected by conditions in the test article and evaluated for amplitude and distance from the search unit (transducer). Another commonly used NDT method used on ferrous materials involves the application of fine iron particles (either suspended in liquid or dry powder – fluorescent or colored) that are applied to a part while it is magnetized, either continually or residually. The particles will be attracted to leakage fields of magnetism on or in the test object, and form indications (particle collection) on the object's surface, which are evaluated visually. Contrast and probability of detection for a visual examination by the unaided eye is often enhanced by using liquids to penetrate the test article surface, allowing for visualization of flaws or other surface conditions. This method (liquid penetrant testing) (PT) involves using dyes, fluorescent or colored (typically red), suspended in fluids and is used for non-magnetic materials, usually metals.
Analyzing and documenting a nondestructive failure mode can also be accomplished using a high-speed camera recording continuously (movie-loop) until the failure is detected. Detecting the failure can be accomplished using a sound detector or stress gauge which produces a signal to trigger the high-speed camera. These high-speed cameras have advanced recording modes to capture some non-destructive failures. After the failure the high-speed camera will stop recording. The captured images can be played back in slow motion showing precisely what happened before, during and after the nondestructive event, image by image.
Applications
NDT is used in a variety of settings that covers a wide range of industrial activity, with new NDT methods and applications, being continuously developed. Nondestructive testing methods are routinely applied in industries where a failure of a component would cause significant hazard or economic loss, such as in transportation, pressure vessels, building structures, piping, and hoisting equipment.
Weld verification
In manufacturing, welds are commonly used to join two or more metal parts. Because these connections may encounter loads and fatigue during product lifetime, there is a chance that they may fail if not created to proper specification. For example, the base metal must reach a certain temperature during the welding process, must cool at a specific rate, and must be welded with compatible materials or the joint may not be strong enough to hold the parts together, or cracks may form in the weld causing it to fail. The typical welding defects (lack of fusion of the weld to the base metal, cracks or porosity inside the weld, and variations in weld density) could cause a structure to break or a pipeline to rupture.
Welds may be tested using NDT techniques such as industrial radiography or industrial CT scanning using X-rays or gamma rays, ultrasonic testing, liquid penetrant testing, magnetic particle inspection or via eddy current. In a proper weld, these tests would indicate a lack of cracks in the radiograph, show clear passage of sound through the weld and back, or indicate a clear surface without penetrant captured in cracks.
Welding techniques may also be actively monitored with acoustic emission techniques before production to design the best set of parameters to use to properly join two materials. In the case of high stress or safety critical welds, weld monitoring will be employed to confirm the specified welding parameters (arc current, arc voltage, travel speed, heat input etc.) are being adhered to those stated in the welding procedure. This verifies the weld as correct to procedure prior to nondestructive evaluation and metallurgy tests.
Structural mechanics
Structures can be complex systems that undergo different loads during their lifetime, e.g. lithium-ion batteries. Some complex structures, such as the turbomachinery in a liquid-fuel rocket, can also cost millions of dollars. Engineers commonly model these structures as coupled second-order systems, approximating dynamic structure components with springs, masses, and dampers. The resulting sets of differential equations are then used to derive a transfer function that models the behavior of the system.
In NDT, the structure undergoes a dynamic input, such as the tap of a hammer or a controlled impulse. Key properties, such as displacement or acceleration at different points of the structure, are measured as the corresponding output. This output is recorded and compared to the corresponding output given by the transfer function and the known input. Differences may indicate an inappropriate model (which may alert engineers to unpredicted instabilities or performance outside of tolerances), failed components, or an inadequate control system.
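A minimal single-degree-of-freedom sketch of this comparison (illustrative only; the mass, damping, stiffness, and tolerance values are assumptions, not from the article): the model's transfer function predicts the impulse response, and a measured response that deviates beyond tolerance flags a possible fault or an inadequate model.

```python
import numpy as np
from scipy.signal import TransferFunction, impulse

# Model: m*x'' + c*x' + k*x = F(t)  ->  H(s) = 1 / (m*s^2 + c*s + k)
m, c, k = 2.0, 0.8, 50.0                      # assumed mass, damping, stiffness
model = TransferFunction([1.0], [m, c, k])

t = np.linspace(0, 10, 2000)
_, x_model = impulse(model, T=t)

# "Measured" response of a degraded structure (here: stiffness reduced, e.g. by a crack).
degraded = TransferFunction([1.0], [m, c, 0.7 * k])
_, x_measured = impulse(degraded, T=t)

rms_error = np.sqrt(np.mean((x_measured - x_model) ** 2))
print(f"RMS deviation from model: {rms_error:.4f}")
if rms_error > 0.01:                          # tolerance chosen for illustration only
    print("Response deviates from the model: possible failed component or inadequate model.")
```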
Reference standards, which are structures that are intentionally flawed in order to be compared with components intended for use in the field, are often used in NDT. Reference standards can be used with many NDT techniques, such as UT, RT and VT.
Relation to medical procedures
Several NDT methods are related to clinical procedures, such as radiography, ultrasonic testing, and visual testing.
Technological improvements or upgrades in these NDT methods have migrated over from medical equipment advances, including digital radiography (DR), phased array ultrasonic testing (PAUT), and endoscopy (borescope or assisted visual inspection).
Notable events in academic and industrial NDT
1854 Hartford, Connecticut – A boiler at the Fales and Gray Car works explodes, killing 21 people and seriously injuring 50. Within a decade, the State of Connecticut passes a law requiring annual inspection (in this case visual) of boilers.
1880–1920 – The "Oil and Whiting" method of crack detection is used in the railroad industry to find cracks in heavy steel parts. (A part is soaked in thinned oil, then painted with a white coating that dries to a powder. Oil seeping out from cracks turns the white powder brown, allowing the cracks to be detected.) This was the precursor to modern liquid penetrant tests.
1895 – Wilhelm Conrad Röntgen discovers what are now known as X-rays. In his first paper he discusses the possibility of flaw detection.
1920 – Dr. H. H. Lester begins development of industrial radiography for metals.
1924 – Lester uses radiography to examine castings to be installed in a Boston Edison Company steam pressure power plant.
1926 – The first electromagnetic eddy current instrument is available to measure material thicknesses.
1927-1928 – Magnetic induction system to detect flaws in railroad track developed by Dr. Elmer Sperry and H.C. Drake.
1929 – Magnetic particle methods and equipment pioneered (A.V. DeForest and F.B. Doane.)
1930s – Robert F. Mehl demonstrates radiographic imaging using gamma radiation from Radium, which can examine thicker components than the low-energy X-ray machines available at the time.
1935–1940 – Liquid penetrant tests developed (Betz, Doane, and DeForest)
1935–1940s – Eddy current instruments developed (H.C. Knerr, C. Farrow, Theo Zuschlag, and Fr. F. Foerster).
1940–1944 – Ultrasonic test method developed in USA by Dr. Floyd Firestone, who applies for a U.S. invention patent for same on May 27, 1940 and is issued the U.S. patent as grant no. 2,280,226 on April 21, 1942. Extracts from the first two paragraphs of this seminal patent for a nondestructive testing method succinctly describe the basics of ultrasonic testing. "My invention pertains to a device for detecting the presence of inhomogeneities of density or elasticity in materials. For instance if a casting has a hole or a crack within it, my device allows the presence of the flaw to be detected and its position located, even though the flaw lies entirely within the casting and no portion of it extends out to the surface." Additionally, "The general principle of my device consists of sending high frequency vibrations into the part to be inspected, and the determination of the time intervals of arrival of the direct and reflected vibrations at one or more stations on the surface of the part." Medical echocardiography is an offshoot of this technology.
1946 – First neutron radiographs produced by Peters.
1950 – The Schmidt Hammer (also known as "Swiss Hammer") is invented. The instrument uses the world's first patented non-destructive testing method for concrete.
1950 – J. Kaiser introduces acoustic emission as an NDT method.
(Basic source for above: Hellier, 2001) Note the number of advancements made during the WWII era, a time when industrial quality control was growing in importance.
1955 – ICNDT founded. World organizing body for Nondestructive Testing.
1955 – First NDT World Conference takes place in Brussels, organized by ICNDT. NDT World Conference takes place every four years.
1963 – Frederick G. Weighart's and James F. McNulty (U.S. radio engineer)'s co-invention of digital radiography is an offshoot of the pair's development of nondestructive test equipment at Automation Industries, Inc., then in El Segundo, California. See James F. McNulty also at the article Ultrasonic testing.
1996 – Rolf Diederichs founded the first open-access NDT journal on the Internet, known today as the Open Access NDT Database NDT.net.
1998 – The European Federation for Non-Destructive Testing (EFNDT) was founded in May 1998 in Copenhagen at the 7th European Conference for Non-Destructive Testing (ECNDT). 27 national European NDT societies joined the powerful organization.
2008 – NDT in Aerospace Conference was established DGZfP and Fraunhofer IIS hosted the first international congress in Bavaria, Germany.
2008 – Academia NDT International has been officially founded and has its base office in Brescia (Italy) www.academia-ndt.org
2012 – ISO 9712:2012 ISO Qualification and Certification of NDT Personnel
2020 – Indian Society for Non-destructive Testing (ISNT) Accreditation Certification from NABCB for Qualification and Certification of NDT Personnel as per ISO 9712:2012
ISO 9712
ISO 9712 specifies requirements for principles for the qualification and certification of personnel who perform industrial non-destructive testing (NDT).
The system specified in this International Standard can also apply to other NDT methods or to new techniques within an established NDT method, provided a comprehensive scheme of certification exists and the method or technique is covered by International, regional or national standards or the new NDT method or technique has been demonstrated to be effective to the satisfaction of the certification body.
The certification covers proficiency in one or more of the following methods: a) acoustic emission testing; b) eddy current testing; c) infrared thermographic testing; d) leak testing (hydraulic pressure tests excluded); e) magnetic testing; f) penetrant testing; g) radiographic testing; h) strain gauge testing; i) ultrasonic testing; j) visual testing (direct unaided visual tests and visual tests carried out during the application of another NDT method are excluded).
Methods and techniques
NDT is divided into various methods of nondestructive testing, each based on a particular scientific principle. These methods may be further subdivided into various techniques. The various methods and techniques, due to their particular natures, may lend themselves especially well to certain applications and be of little or no value at all in other applications. Therefore, choosing the right method and technique is an important part of the performance of NDT.
Acoustic emission testing (AE or AT)
Acoustic microscopy
Blue etch anodize (BEA)
Dye penetrant inspection or liquid penetrant testing (PT or LPI)
Electromagnetic testing (ET) or electromagnetic inspection (commonly known as "EMI")
Alternating current field measurement (ACFM)
Alternating current potential drop measurement (ACPD)
Barkhausen testing
Direct current potential drop measurement (DCPD)
Eddy-current testing (ECT)
Magnetic flux leakage testing (MFL) for pipelines, tank floors, and wire rope
Magnetic-particle inspection (MT or MPI)
Magnetovision
Remote field testing (RFT)
Ellipsometry
Endoscope inspection
Guided wave testing (GWT)
Hardness testing
Impulse excitation technique (IET)
Microwave imaging
Terahertz nondestructive evaluation (THz)
Infrared and thermal testing (IR)
Thermographic inspection
Scanning thermal microscopy
Laser testing
Electronic speckle pattern interferometry
Holographic interferometry
Self-mixing laser interferometry
Low coherence interferometry
Optical coherence tomography (OCT)
Profilometry
Shearography
Leak testing (LT) or Leak detection
Hydrostatic test
Absolute pressure leak testing (pressure change)
Bubble testing
Halogen diode leak testing
Hydrogen leak testing
Mass spectrometer leak testing
Tracer-gas leak testing method for helium, hydrogen and refrigerant gases
Machine vision based automatic inspection
Magnetic resonance imaging (MRI) and NMR spectroscopy
Metallographic replicas
Spectroscopy
Near-infrared spectroscopy (NIRS)
Mid-infrared spectroscopy (MIR)
(Far-infrared =) Terahertz spectroscopy
Raman Spectroscopy
Optical microscopy
Positive material identification (PMI)
Radiographic testing (RT) (see also Industrial radiography and Radiography)
Computed radiography
Digital radiography (real-time)
Neutron imaging
SCAR (small controlled area radiography)
X-ray computed tomography (CT)
Resonant inspection
Resonant acoustic method (RAM)
Scanning electron microscopy
Surface temper etch (Nital Etch)
Ultrasonic testing (UT)
Acoustic resonance technology (ART)
Angle beam testing
Electromagnetic acoustic transducer (EMAT) (non-contact)
Laser ultrasonics (LUT)
Internal rotary inspection system (IRIS) ultrasonics for tubes
Phased array ultrasonics (PAUT)
Thickness measurement
Time of flight diffraction ultrasonics (TOFD)
Time-of-flight ultrasonic determination of 3D elastic constants (TOF)
Vibration analysis
Visual inspection (VT)
Pipeline video inspection
Weight and load testing of structures
Corroscan/C-scan
3D computed tomography
Industrial CT scanning
Heat Exchanger Life Assessment System
RTJ Flange Special Ultrasonic Testing
Personnel training, qualification and certification
Successful and consistent application of nondestructive testing techniques depends heavily on personnel training, experience and integrity. Personnel involved in application of industrial NDT methods and interpretation of results should be certified, and in some industrial sectors certification is enforced by law or by the applied codes and standards.
NDT professionals and managers who seek to further their growth, knowledge and experience to remain competitive in the rapidly advancing technology field of nondestructive testing should consider joining NDTMA, a member organization of NDT managers and executives that provides a forum for the open exchange of managerial, technical and regulatory information critical to the successful management of NDT personnel and activities. Its annual conference at the Golden Nugget in Las Vegas is popular for its informative and relevant programming and exhibition space.
Certification schemes
There are two approaches in personnel certification:
Employer Based Certification: Under this concept the employer compiles their own Written Practice. The written practice defines the responsibilities of each level of certification, as implemented by the company, and describes the training, experience and examination requirements for each level of certification. In industrial sectors the written practices are usually based on recommended practice SNT-TC-1A of the American Society for Nondestructive Testing. ANSI standard CP-189 outlines requirements for any written practice that conforms to the standard. For aviation, space, and defense (ASD) applications NAS 410 sets further requirements for NDT personnel, and is published by AIA – Aerospace Industries Association, which is made up of US aerospace airframe and powerplant manufacturers. This is the basis document for EN 4179 and other (USA) NIST-recognized aerospace standards for the Qualification and Certification (employer-based) of Nondestructive Testing personnel. NAS 410 also sets the requirements for "National NDT Boards", which allow and proscribe personal certification schemes. NAS 410 allows ASNT Certification as a portion of the qualifications needed for ASD certification.
Personal Central Certification: The concept of central certification is that an NDT operator can obtain certification from a central certification authority, that is recognized by most employers, third parties and/or government authorities. Industrial standards for central certification schemes include ISO 9712, and ANSI/ASNT CP-106 (used for the ASNT ACCP scheme). Certification under these standards involves training, work experience under supervision and passing a written and practical examination set up by the independent certification authority. EN 473 was another central certification scheme, very similar to ISO 9712, which was withdrawn when CEN replaced it with EN ISO 9712 in 2012.
In the United States employer based schemes are the norm, however central certification schemes exist as well. The most notable is ASNT Level III (established in 1976–1977), which is organized by the American Society for Nondestructive Testing for Level 3 NDT personnel. NAVSEA 250-1500 is another US central certification scheme, specifically developed for use in the naval nuclear program.
Central certification is more widely used in the European Union, where certifications are issued by accredited bodies (independent organizations conforming to ISO 17024 and accredited by a national accreditation authority like UKAS). The Pressure Equipment Directive (97/23/EC) actually enforces central personnel certification for the initial testing of steam boilers and some categories of pressure vessels and piping. European Standards harmonized with this directive specify personnel certification to EN 473. Certifications issued by a national NDT society which is a member of the European Federation of NDT (EFNDT) are mutually acceptable by the other member societies under a multilateral recognition agreement.
Canada also implements an ISO 9712 central certification scheme, which is administered by Natural Resources Canada, a government department.
The aerospace sector worldwide sticks to employer based schemes. In America it is based mostly on the Aerospace Industries Association's (AIA) AIA-NAS-410 and in the European Union on the equivalent and very similar standard EN 4179. However EN 4179:2009 includes an option for central qualification and certification by a National aerospace NDT board or NANDTB (paragraph 4.5.2).
Levels of certification
Most NDT personnel certification schemes listed above specify three "levels" of qualification and/or certification, usually designated as Level 1, Level 2 and Level 3 (although some codes specify Roman numerals, like Level II). The roles and responsibilities of personnel in each level are generally as follows (there are slight differences or variations between different codes and standards):
Level 1 are technicians qualified to perform only specific calibrations and tests under close supervision and direction by higher level personnel. They can only report test results. Normally they work following specific work instructions for testing procedures and rejection criteria.
Level 2 are engineers or experienced technicians who are able to set up and calibrate testing equipment, conduct the inspection according to codes and standards (instead of following work instructions) and compile work instructions for Level 1 technicians. They are also authorized to report, interpret, evaluate and document testing results. They can also supervise and train Level 1 technicians. In addition to testing methods, they must be familiar with applicable codes and standards and have some knowledge of the manufacture and service of tested products.
Level 3 are usually specialized engineers or very experienced technicians. They can establish NDT techniques and procedures and interpret codes and standards. They also direct NDT laboratories and have central role in personnel certification. They are expected to have wider knowledge covering materials, fabrication and product technology.
Terminology
The standard US terminology for Nondestructive testing is defined in standard ASTM E-1316. Some definitions may be different in European standard EN 1330.
Indication The response or evidence from an examination, such as a blip on the screen of an instrument. Indications are classified as true or false. False indications are those caused by factors not related to the principles of the testing method or by improper implementation of the method, like film damage in radiography, electrical interference in ultrasonic testing etc. True indications are further classified as relevant and non relevant. Relevant indications are those caused by flaws. Non relevant indications are those caused by known features of the tested object, like gaps, threads, case hardening etc.
Interpretation Determining if an indication is of a type to be investigated. For example, in electromagnetic testing, indications from metal loss are considered flaws because they should usually be investigated, but indications due to variations in the material properties may be harmless and nonrelevant.
Flaw A type of discontinuity that must be investigated to see if it is rejectable. For example, porosity in a weld or metal loss.
Evaluation Determining if a flaw is rejectable. For example, is porosity in a weld larger than acceptable by code?
Defect A flaw that is rejectable – i.e. does not meet acceptance criteria. Defects are generally removed or repaired.
Reliability and statistics
Probability of detection (POD) tests are a standard way to evaluate a nondestructive testing technique in a given set of circumstances, for example "What is the POD of lack of fusion flaws in pipe welds using manual ultrasonic testing?" The POD will usually increase with flaw size. A common error in POD tests is to assume that the percentage of flaws detected is the POD, whereas the percentage of flaws detected is merely the first step in the analysis. Since the number of flaws tested is necessarily a limited number (non-infinite), statistical methods must be used to determine the POD for all possible defects, beyond the limited number tested. Another common error in POD tests is to define the statistical sampling units (test items) as flaws, whereas a true sampling unit is an item that may or may not contain a flaw. Guidelines for correct application of statistical methods to POD tests can be found in ASTM E2862 Standard Practice for Probability of Detection Analysis for Hit/Miss Data and MIL-HDBK-1823A Nondestructive Evaluation System Reliability Assessment, from the U.S. Department of Defense Handbook.
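A sketch of the hit/miss analysis described above (the flaw sizes and outcomes are invented for illustration; the log-odds model fitted by maximum likelihood is one common choice in hit/miss POD analysis): the data are fit with a logistic curve in log flaw size, from which a POD-versus-size curve, rather than a raw detection percentage, is estimated.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical hit/miss data: flaw sizes in mm, and 1 = detected, 0 = missed.
size = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0])
hit  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1,   1,   1  ])

def neg_log_likelihood(params):
    """Negative log-likelihood of the model POD(a) = 1 / (1 + exp(-(b0 + b1*ln a)))."""
    b0, b1 = params
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(size))))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(hit * np.log(p) + (1 - hit) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
b0, b1 = res.x

def pod(a):
    """Estimated probability of detection for a flaw of size a (mm)."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(a))))

for a in (1.0, 2.0, 3.0):
    print(f"Estimated POD for a {a:.1f} mm flaw: {pod(a):.2f}")
```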
See also
References
Bibliography
ASTM International, ASTM Volume 03.03 Nondestructive Testing
ASTM E1316-13a: "Standard Terminology for Nondestructive Examinations" (2013)
ASNT, Nondestructive Testing Handbook
Bray, D.E. and R.K. Stanley, 1997, Nondestructive Evaluation: A Tool for Design, Manufacturing and Service; CRC Press, 1996.
Shull, P.J., Nondestructive Evaluation: Theory, Techniques, and Applications, Marcel Dekker Inc., 2002.
EN 1330: Non-destructive testing. Terminology. Nine parts. Parts 5 and 6 replaced by equivalent ISO standards.
EN 1330-1: Non-destructive testing. Terminology. List of general terms (1998)
EN 1330-2: Non-destructive testing. Terminology. Terms common to the non-destructive testing methods (1998)
EN 1330-3: Non-destructive testing. Terminology. Terms used in industrial radiographic testing (1997)
EN 1330-4: Non-destructive testing. Terminology. Terms used in ultrasonic testing (2010)
EN 1330-7: Non-destructive testing. Terminology. Terms used in magnetic particle testing (2005)
EN 1330-8: Non-destructive testing. Terminology. Terms used in leak tightness testing (1998)
EN 1330-9: Non-destructive testing. Terminology. Terms used in acoustic emission testing (2009)
EN 1330-10: Non-destructive testing. Terminology. Terms used in visual testing (2003)
EN 1330-11: Non-destructive testing. Terminology. Terms used in X-ray diffraction from polycrystalline and amorphous materials (2007)
ISO 12706: Non-destructive testing. Penetrant testing. Vocabulary (2009)
ISO 12718: Non-destructive testing. Eddy current testing. Vocabulary (2008)
External links
Maintenance
Quality control
Product testing
Product certification
Materials science
Materials testing
Tests | Nondestructive testing | [
"Physics",
"Materials_science",
"Engineering"
] | 5,371 | [
"Applied and interdisciplinary physics",
"Materials science",
"Nondestructive testing",
"Materials testing",
"nan",
"Mechanical engineering",
"Maintenance"
] |
255,051 | https://en.wikipedia.org/wiki/Destructive%20testing | In destructive testing (or destructive physical analysis, DPA) tests are carried out to the specimen's failure, in order to understand a specimen's performance or material behavior under different loads. These tests are generally much easier to carry out, yield more information, and are easier to interpret than nondestructive testing.
Applications
Destructive testing is most suitable, and economic, for objects which will be mass-produced, as the cost of destroying a small number of specimens is negligible. It is usually not economical to do destructive testing where only one or very few items are to be produced (for example, in the case of a building).
Analyzing and documenting destructive failure mode
Analyzing and documenting the destructive failure mode is often accomplished using a high-speed camera recording continuously (movie-loop) until the failure is detected. Detecting the failure can be accomplished using a sound detector or stress gauge which produces a signal to trigger the high-speed camera. These high-speed cameras have advanced recording modes to capture almost any type of destructive failure. After the failure the high-speed camera will stop recording. The captured images can be played back in slow motion showing precisely what happens before, during and after the destructive event, image by image.
Methods and techniques
Testing of large structures
Building structures or large nonbuilding structures (such as dams and bridges) are rarely subjected to destructive testing due to the prohibitive cost of constructing a building, or a scale model of a building, just to destroy it.
Earthquake engineering requires a good understanding of how structures will perform at earthquakes. Destructive tests are more frequently carried out for structures which are to be constructed in earthquake zones. Such tests are sometimes referred to as crash tests, and they are carried out to verify the designed seismic performance of a new building, or the actual performance of an existing building. The tests are, mostly, carried out on a platform called a shake-table which is designed to shake in the same manner as an earthquake. Results of those tests often include the corresponding shake-table videos.
Testing of structures in earthquakes is increasingly done by modelling the structure using specialist finite element software.
Software testing
Destructive software testing is a type of software testing which attempts to cause a piece of software to fail in an uncontrolled manner, in order to test its robustness and to help establish range limits, within which the software will operate in a stable and reliable manner.
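A toy illustration of the idea (the routine under test and the input pools are invented for this sketch): random, deliberately hostile inputs are fed to a function until it fails, and the failing cases are recorded to help establish practical operating limits.

```python
import random

def parse_ratio(numerator: str, denominator: str) -> float:
    """Routine under test: convert two strings to a ratio."""
    return float(numerator) / float(denominator)

random.seed(1)
failures = []
for trial in range(1000):
    # Deliberately hostile inputs: huge numbers, empty strings, zero denominators, junk text.
    num = random.choice(["1", "1e308", "", "abc", str(random.uniform(-1e9, 1e9))])
    den = random.choice(["2", "0", "", "NaN", str(random.uniform(-1e-9, 1e-9))])
    try:
        parse_ratio(num, den)
    except Exception as exc:                  # record how and where it breaks
        failures.append((num, den, type(exc).__name__))

print(f"{len(failures)} failing input pairs out of 1000 trials")
print("example failures:", failures[:3])
```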
Automotive testing
Automobiles are subject to crash testing by both automobile manufacturers and a variety of agencies.
Individual manufacturers also carry out sample testing to verify non destructive line side tests, usually by means of ultrasonic inspection.
Aircraft testing
There has also been extensive destructive testing of passenger and military aircraft, conducted by aircraft manufacturers and organizations like NASA. The 2012 Boeing 727 crash experiment was conducted and filmed by the Discovery channel. It is now standard procedure to test to destruction the first few production models of new airplanes by loading various components until they fail. The 1951 movie, No Highway in the Sky starring James Stewart and Marlene Dietrich told the story of an eccentric engineer who pioneered research into destructive testing of complete components against a great deal of skepticism.
See also
Crash test
Hardness tests
Median lethal dose
Metallographic test
Nondestructive testing
Reproducibility
Show and Display
Stress tests
Testability
References
Mechanical tests
Earthquake engineering
Product testing
de:Werkstoffprüfung | Destructive testing | [
"Engineering"
] | 673 | [
"Structural engineering",
"Mechanical tests",
"Civil engineering",
"Mechanical engineering",
"Earthquake engineering"
] |
255,217 | https://en.wikipedia.org/wiki/Dynamo%20theory | In physics, the dynamo theory proposes a mechanism by which a celestial body such as Earth or a star generates a magnetic field. The dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid can maintain a magnetic field over astronomical time scales. A dynamo is thought to be the source of the Earth's magnetic field and the magnetic fields of Mercury and the Jovian planets.
History of theory
When William Gilbert published de Magnete in 1600, he concluded that the Earth is magnetic and proposed the first hypothesis for the origin of this magnetism: permanent magnetism such as that found in lodestone. In 1822, André-Marie Ampère proposed that internal currents are responsible for Earth's magnetism. In 1919, Joseph Larmor proposed that a dynamo might be generating the field. However, even after he advanced his hypothesis, some prominent scientists advanced alternative explanations. The Nobel Prize winner Patrick Blackett did a series of experiments looking for a fundamental relation between angular momentum and magnetic moment, but found none.
Walter M. Elsasser, considered a "father" of the presently accepted dynamo theory as an explanation of the Earth's magnetism, proposed that this magnetic field resulted from electric currents induced in the fluid outer core of the Earth. He revealed the history of the Earth's magnetic field through pioneering the study of the magnetic orientation of minerals in rocks.
In order to maintain the magnetic field against ohmic decay (which would occur for the dipole field in 20,000 years), the outer core must be convecting. The convection is likely some combination of thermal and compositional convection. The mantle controls the rate at which heat is extracted from the core. Heat sources include gravitational energy released by the compression of the core, gravitational energy released by the rejection of light elements (probably sulfur, oxygen, or silicon) at the inner core boundary as it grows, latent heat of crystallization at the inner core boundary, and radioactivity of potassium, uranium and thorium.
At the dawn of the 21st century, numerical modeling of the Earth's magnetic field has not been successfully demonstrated. Initial models are focused on field generation by convection in the planet's fluid outer core. It was possible to show the generation of a strong, Earth-like field when the model assumed a uniform core-surface temperature and exceptionally high viscosities for the core fluid. Computations which incorporated more realistic parameter values yielded magnetic fields that were less Earth-like, but indicated that model refinements may ultimately lead to an accurate analytic model. Slight variations in the core-surface temperature, in the range of a few millikelvins, result in significant increases in convective flow and produce more realistic magnetic fields.
Formal definition
Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field. This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. The conductive fluid in the geodynamo is liquid iron in the outer core, and in the solar dynamo is ionized gas at the tachocline. Dynamo theory of astrophysical bodies uses magnetohydrodynamic equations to investigate how the fluid can continuously regenerate the magnetic field.
It was once believed that the dipole, which comprises much of the Earth's magnetic field and is misaligned along the rotation axis by 11.3 degrees, was caused by permanent magnetization of the materials in the earth. This means that dynamo theory was originally used to explain the Sun's magnetic field in its relationship with that of the Earth. However, this hypothesis, which was initially proposed by Joseph Larmor in 1919, has been modified due to extensive studies of magnetic secular variation, paleomagnetism (including polarity reversals), seismology, and the solar system's abundance of elements. Also, the application of the theories of Carl Friedrich Gauss to magnetic observations showed that Earth's magnetic field had an internal, rather than external, origin.
There are three requisites for a dynamo to operate:
An electrically conductive fluid medium
Kinetic energy provided by planetary rotation
An internal energy source to drive convective motions within the fluid.
In the case of the Earth, the magnetic field is induced and constantly maintained by the convection of liquid iron in the outer core. A requirement for the induction of field is a rotating fluid. Rotation in the outer core is supplied by the Coriolis effect caused by the rotation of the Earth. The Coriolis force tends to organize fluid motions and electric currents into columns (also see Taylor columns) aligned with the rotation axis. Induction or generation of magnetic field is described by the induction equation:
$$\frac{\partial\mathbf{B}}{\partial t} = \eta\nabla^{2}\mathbf{B} + \nabla\times(\mathbf{u}\times\mathbf{B}),$$
where $\mathbf{u}$ is velocity, $\mathbf{B}$ is magnetic field, $t$ is time, and $\eta = 1/(\sigma\mu)$ is the magnetic diffusivity with $\sigma$ electrical conductivity and $\mu$ permeability. The ratio of the second term on the right hand side to the first term gives the magnetic Reynolds number, a dimensionless ratio of advection of magnetic field to diffusion.
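A back-of-the-envelope evaluation of the magnetic Reynolds number for Earth's outer core (a hypothetical sketch; the flow speed, length scale and conductivity below are rough order-of-magnitude estimates inserted for illustration, not values taken from this article):

```python
import math

mu_0  = 4 * math.pi * 1e-7   # vacuum permeability, H/m
sigma = 1e6                  # electrical conductivity of liquid iron, S/m (order of magnitude)
u     = 5e-4                 # typical outer-core flow speed, m/s (tens of km per year)
L     = 2e6                  # characteristic length scale of the outer core, m

eta = 1.0 / (mu_0 * sigma)   # magnetic diffusivity, m^2/s
Rm  = u * L / eta            # magnetic Reynolds number: advection / diffusion

print(f"magnetic diffusivity eta ~ {eta:.2f} m^2/s")
print(f"magnetic Reynolds number Rm ~ {Rm:.0f}")   # well above 1, so field amplification is possible
```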
Tidal heating supporting a dynamo
Tidal forces between orbiting celestial bodies cause friction that heats up their interiors. This is known as tidal heating, and it helps keep the interior in a liquid state. A liquid interior that can conduct electricity is required to produce a dynamo. Saturn's Enceladus and Jupiter's Io have enough tidal heating to liquefy their inner cores, but they may not create a dynamo because they cannot conduct electricity. Mercury, despite its small size, has a magnetic field, because it has a conductive liquid core created by its iron composition and friction resulting from its highly elliptical orbit. It is theorized that the Moon once had a magnetic field, based on evidence from magnetized lunar rocks, due to its short-lived closer distance to Earth creating tidal heating. A planet's orbit and rotation help keep its core liquid and supply kinetic energy that supports dynamo action.
Kinematic dynamo theory
In kinematic dynamo theory the velocity field is prescribed, instead of being a dynamic variable: The model makes no provision for the flow distorting in response to the magnetic field. This method cannot provide the time variable behaviour of a fully nonlinear chaotic dynamo, but can be used to study how magnetic field strength varies with the flow structure and speed.
Using Maxwell's equations simultaneously with the curl of Ohm's law, one can derive what is basically a linear eigenvalue equation for the magnetic field $\mathbf{B}$, which can be done when assuming that the magnetic field is independent from the velocity field. One arrives at a critical magnetic Reynolds number, above which the flow strength is sufficient to amplify the imposed magnetic field, and below which the magnetic field dissipates.
Practical measure of possible dynamos
The most functional feature of kinematic dynamo theory is that it can be used to test whether a velocity field is or is not capable of dynamo action. By experimentally applying a certain velocity field to a small magnetic field, one can observe whether the magnetic field tends to grow (or not) in response to the applied flow. If the magnetic field does grow, then the system is either capable of dynamo action or is a dynamo, but if the magnetic field does not grow, then it is simply referred to as “not a dynamo”.
An analogous method called the membrane paradigm is a way of looking at black holes that allows for the material near their surfaces to be expressed in the language of dynamo theory.
Spontaneous breakdown of a topological supersymmetry
Kinematic dynamo can be also viewed as the phenomenon of the spontaneous breakdown of the topological supersymmetry of the associated stochastic differential equation related to the flow of the background matter. Within stochastic supersymmetric theory, this supersymmetry is an intrinsic property of all stochastic differential equations, its interpretation is that the model's phase space preserves continuity via continuous time flows. When the continuity of that flow spontaneously breaks down, the system is in the stochastic state of deterministic chaos. In other words, kinematic dynamo arises because of chaotic flow in the underlying background matter.
Nonlinear dynamo theory
The kinematic approximation becomes invalid when the magnetic field becomes strong enough to affect the fluid motions. In that case the velocity field becomes affected by the Lorentz force, and so the induction equation is no longer linear in the magnetic field. In most cases this leads to a quenching of the amplitude of the dynamo. Such dynamos are sometimes also referred to as hydromagnetic dynamos.
Virtually all dynamos in astrophysics and geophysics are hydromagnetic dynamos.
The main idea of the theory is that any small magnetic field existing in the outer core creates currents in the moving fluid there due to the Lorentz force. These currents create further magnetic field due to Ampère's law. With the fluid motion, the currents are carried in a way that the magnetic field gets stronger (as long as $\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})$ is negative). Thus a "seed" magnetic field can get stronger and stronger until it reaches some value that is related to existing non-magnetic forces.
Numerical models are used to simulate fully nonlinear dynamos. The following equations are used:
The induction equation, presented above.
Maxwell's equations for negligible electric field: $\nabla\cdot\mathbf{B} = 0,\quad \nabla\times\mathbf{B} = \mu_{0}\mathbf{J}$
The continuity equation for conservation of mass, for which the Boussinesq approximation is often used: $\nabla\cdot\mathbf{u} = 0$
The Navier-Stokes equation for conservation of momentum, again in the same approximation, with the magnetic force and gravitation force as the external forces: $$\frac{\partial\mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} + 2\,\mathbf{\Omega}\times\mathbf{u} = -\frac{1}{\rho_{0}}\nabla p + \nu\nabla^{2}\mathbf{u} + \rho'\,\mathbf{g} + \frac{1}{\rho_{0}}\,\mathbf{J}\times\mathbf{B},$$ where $\nu$ is the kinematic viscosity, $\rho_{0}$ is the mean density and $\rho'$ is the relative density perturbation that provides buoyancy (for thermal convection $\rho' = -\alpha\,\Delta T$, where $\alpha$ is the coefficient of thermal expansion), $\mathbf{\Omega}$ is the rotation rate of the Earth, and $\mathbf{J}$ is the electric current density.
A transport equation, usually of heat (sometimes of light element concentration): $$\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \kappa\nabla^{2}T + \epsilon,$$ where $T$ is temperature, $\kappa = k/(\rho c_{p})$ is the thermal diffusivity with $k$ thermal conductivity, $c_{p}$ heat capacity, and $\rho$ density, and $\epsilon$ is an optional heat source. Often the pressure is the dynamic pressure, with the hydrostatic pressure and centripetal potential removed.
These equations are then non-dimensionalized, introducing the non-dimensional parameters
$$Ra = \frac{g\alpha\Delta T D^{3}}{\nu\kappa},\qquad E = \frac{\nu}{\Omega D^{2}},\qquad Pr = \frac{\nu}{\kappa},\qquad Pm = \frac{\nu}{\eta},$$
where $Ra$ is the Rayleigh number, $E$ the Ekman number, and $Pr$ and $Pm$ the Prandtl and magnetic Prandtl numbers. Magnetic field scaling is often in Elsasser number units, $B = (\rho\Omega/\sigma)^{1/2}.$
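As a concrete illustration (the parameter values below are rough, commonly quoted order-of-magnitude estimates for Earth's outer core, inserted as assumptions rather than taken from this article), the dimensionless groups can be evaluated directly:

```python
# Rough Earth outer-core estimates (assumptions for illustration only).
nu    = 1e-6      # kinematic viscosity, m^2/s
kappa = 1e-5      # thermal diffusivity, m^2/s
eta   = 1.0       # magnetic diffusivity, m^2/s
Omega = 7.3e-5    # rotation rate, 1/s
D     = 2.26e6    # outer-core shell thickness, m
rho   = 1e4       # density, kg/m^3
sigma = 1e6       # electrical conductivity, S/m
B     = 3e-3      # magnetic field strength inside the core, T

E        = nu / (Omega * D**2)            # Ekman number
Pr       = nu / kappa                     # Prandtl number
Pm       = nu / eta                       # magnetic Prandtl number
Elsasser = sigma * B**2 / (rho * Omega)   # Elsasser number

print(f"Ekman number            E      ~ {E:.1e}")
print(f"Prandtl number          Pr     ~ {Pr:.1e}")
print(f"magnetic Prandtl number Pm     ~ {Pm:.1e}")
print(f"Elsasser number         Lambda ~ {Elsasser:.1f}")
```

The Rayleigh number is omitted here because the superadiabatic temperature contrast driving core convection is poorly constrained.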
Energy conversion between magnetic and kinetic energy
The scalar product of the above form of the Navier-Stokes equation with $\rho_{0}\mathbf{u}$ gives the rate of increase of kinetic energy density, $\tfrac{1}{2}\rho_{0}u^{2}$, on the left-hand side. The last term on the right-hand side is then $\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})$, the local contribution to the kinetic energy due to the Lorentz force.
The scalar product of the induction equation with $\tfrac{1}{\mu_{0}}\mathbf{B}$ gives the rate of increase of the magnetic energy density, $\tfrac{B^{2}}{2\mu_{0}}$, on the left-hand side. The last term on the right-hand side is then $\tfrac{1}{\mu_{0}}\mathbf{B}\cdot\nabla\times(\mathbf{u}\times\mathbf{B}).$ Since the equation is volume-integrated, this term is equivalent up to a boundary term (and with the double use of the scalar triple product identity) to $\mathbf{J}\cdot(\mathbf{u}\times\mathbf{B})$ (where one of Maxwell's equations, $\nabla\times\mathbf{B} = \mu_{0}\mathbf{J}$, was used). This is the local contribution to the magnetic energy due to fluid motion.
Thus the term $\mathbf{J}\cdot(\mathbf{u}\times\mathbf{B}) = -\,\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})$ is the rate of transformation of kinetic energy to magnetic energy. This has to be non-negative at least in part of the volume, for the dynamo to produce magnetic field.
From the diagram above, it is not clear why this term should be positive. A simple argument can be based on consideration of net effects. To create the magnetic field, the net electric current must wrap around the axis of rotation of the planet. In that case, for the term to be positive, the net flow of conducting matter must be towards the axis of rotation. The diagram only shows a net flow from the poles to the equator. However mass conservation requires an additional flow from the equator toward the poles. If that flow was along the axis of rotation, that implies the circulation would be completed by a flow from the ones shown towards the axis of rotation, producing the desired effect.
Order of magnitude of the magnetic field created by Earth's dynamo
The above formula for the rate of conversion of kinetic energy to magnetic energy is equivalent to a rate of work done by a force of $-\mathbf{J}\times\mathbf{B}$ on the outer core matter, whose velocity is $\mathbf{u}$. This work is the result of non-magnetic forces acting on the fluid.
Of those, the gravitational force and the centrifugal force are conservative and therefore have no overall contribution to fluid moving in closed loops. The Ekman number (defined above), which is the ratio between the two remaining forces, namely the viscosity and the Coriolis force, is very low inside Earth's outer core, because its viscosity is low (roughly $(1.2\text{–}1.5)\times10^{-2}$ pascal-second) due to its liquidity.
Thus the main time-averaged contribution to the work is from the Coriolis force, whose size is $-2\rho\,\mathbf{\Omega}\times\mathbf{u},$ though this quantity and $\mathbf{J}\times\mathbf{B}$ are related only indirectly and are not in general equal locally (thus they affect each other but not in the same place and time).
The current density is itself the result of the magnetic field according to Ohm's law. Again, due to matter motion and current flow, this is not necessarily the field at the same place and time. However these relations can still be used to deduce orders of magnitude of the quantities in question.
In terms of order of magnitude, $J\sim\sigma u B$ and $\rho\Omega u^{2}\sim \mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})\sim\sigma u^{2}B^{2},$ giving $\rho\Omega\sim\sigma B^{2},$ or:
$$B\sim\sqrt{\frac{\rho\,\Omega}{\sigma}}.$$
The exact ratio between both sides is the square root of Elsasser number.
Note that the magnetic field direction cannot be inferred from this approximation (at least not its sign) as it appears squared, and is, indeed, sometimes reversed, though in general it lies on an axis similar to that of the rotation vector Ω.
For Earth's outer core, ρ is approximately 10⁴ kg/m³, Ω = 2π/day ≈ 7.3×10⁻⁵ s⁻¹ and σ is approximately 10⁷ Ω⁻¹m⁻¹.
This gives B ≈ 2.7×10⁻⁴ tesla.
The magnetic field of a magnetic dipole has an inverse cubic dependence on distance, so its order of magnitude at the Earth's surface can be approximated by multiplying the above result by (R_outer core/R_Earth)³, giving 2.5×10⁻⁵ tesla, not far from the measured value of 3×10⁻⁵ tesla at the equator.
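The estimate can be checked numerically; the following sketch reproduces the quoted figures, with the core-to-surface radius ratio taken as an assumed round value of 0.45 rather than a number from the text.

```python
# Order-of-magnitude estimate of the geodynamo field, following the scaling
# B ~ sqrt(rho * Omega / sigma) sketched above.
from math import sqrt, pi

rho   = 1.0e4            # density of the outer core, kg/m^3
Omega = 2 * pi / 86400   # Earth's rotation rate, 1/s (~7.3e-5)
sigma = 1.0e7            # electrical conductivity, 1/(Ohm*m)

B_core  = sqrt(rho * Omega / sigma)   # field inside the core, tesla
r_ratio = 0.45                        # assumed R_core / R_Earth for the dipole falloff
B_surface = B_core * r_ratio**3       # dipole field falls off as 1/r^3

print(f"B in the core    ~ {B_core:.1e} T")     # ~2.7e-4 T
print(f"B at the surface ~ {B_surface:.1e} T")  # ~2.5e-5 T, vs ~3e-5 T measured
```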
Numerical models
Broadly, models of the geodynamo attempt to produce magnetic fields consistent with observed data given certain conditions and equations as mentioned in the sections above. Implementing the magnetohydrodynamic equations successfully was of particular significance because they pushed dynamo models to self-consistency. Though geodynamo models are especially prevalent, dynamo models are not necessarily restricted to the geodynamo; solar and general dynamo models are also of interest. Studying dynamo models has utility in the field of geophysics as doing so can identify how various mechanisms form magnetic fields like those produced by astrophysical bodies like Earth and how they cause magnetic fields to exhibit certain features, such as pole reversals.
The equations used in numerical models of dynamo are highly complex. For decades, theorists were confined to two dimensional kinematic dynamo models described above, in which the fluid motion is chosen in advance and the effect on the magnetic field calculated. The progression from linear to nonlinear, three dimensional models of dynamo was largely hindered by the search for solutions to magnetohydrodynamic equations, which eliminate the need for many of the assumptions made in kinematic models and allow self-consistency.
The first self-consistent dynamo models, ones that determine both the fluid motions and the magnetic field, were developed by two groups in 1995, one in Japan and one in the United States. The latter was made as a model with regards to the geodynamo and received significant attention because it successfully reproduced some of the characteristics of the Earth's field. Following this breakthrough, there was a large swell in development of reasonable, three dimensional dynamo models.
Though many self-consistent models now exist, there are significant differences among the models, both in the results they produce and the way they were developed. Given the complexity of developing a geodynamo model, there are many places where discrepancies can occur such as when making assumptions involving the mechanisms that provide energy for the dynamo, when choosing values for parameters used in equations, or when normalizing equations. In spite of the many differences that may occur, most models have shared features like clear axial dipoles. In many of these models, phenomena like secular variation and geomagnetic polarity reversals have also been successfully recreated.
Observations
Many observations can be made from dynamo models. Models can be used to estimate how magnetic fields vary with time and can be compared to observed paleomagnetic data to find similarities between the model and the Earth. Due to the uncertainty of paleomagnetic observations, however, comparisons may not be entirely valid or useful. Simplified geodynamo models have shown relationships between the dynamo number (determined by variance in rotational rates in the outer core and mirror-asymmetric convection (e.g. when convection favors one direction in the north and the other in the south)) and magnetic pole reversals as well as found similarities between the geodynamo and the Sun's dynamo. In many models, it appears that magnetic fields have somewhat random magnitudes that follow a normal trend that average to zero. In addition to these observations, general observations about the mechanisms powering the geodynamo can be made based on how accurately the model reflects actual data collected from Earth.
Modern modelling
The complexity of dynamo modelling is so great that models of the geodynamo are limited by the current power of supercomputers, particularly because simulating the outer core at its realistic (extremely small) Ekman number and (extremely large) Rayleigh number is very difficult and requires a vast number of computations.
Many improvements have been proposed in dynamo modelling since the self-consistent breakthrough in 1995. One suggestion in studying the complex magnetic field changes is applying spectral methods to simplify computations. Ultimately, until considerable improvements in computer power are made, the methods for computing realistic dynamo models will have to be made more efficient, so making improvements in methods for computing the model is of high importance for the advancement of numerical dynamo modelling.
Notable people
Stanislav I. Braginsky, research geophysicist
See also
Antidynamo theorem
Rotating magnetic field
Secular variation
References
Geomagnetism
Plasma theory and modeling
Magnetohydrodynamics
Structure of the Earth
Computational physics
Unsolved problems in physics
Magnetism in astronomy | Dynamo theory | [
"Physics",
"Chemistry",
"Astronomy"
] | 3,754 | [
"Plasma physics",
"Magnetohydrodynamics",
"Unsolved problems in physics",
"Computational physics",
"Plasma theory and modeling",
"Magnetism in astronomy",
"Fluid dynamics"
] |
255,244 | https://en.wikipedia.org/wiki/Seawater | Seawater, or sea water, is water from a sea or ocean. On average, seawater in the world's oceans has a salinity of about 3.5% (35 g/L, 35 ppt, 600 mM). This means that every kilogram (roughly one liter by volume) of seawater has approximately 35 grams of dissolved salts (predominantly sodium (Na+) and chloride (Cl−) ions). The average density at the surface is 1.025 kg/L. Seawater is denser than both fresh water and pure water (density 1.0 kg/L at 4 °C) because the dissolved salts increase the mass by a larger proportion than the volume. The freezing point of seawater decreases as salt concentration increases. At typical salinity, it freezes at about −2 °C. The coldest seawater still in the liquid state ever recorded was found in 2010, in a stream under an Antarctic glacier: the measured temperature was .
Seawater pH is typically limited to a range between 7.5 and 8.4. However, there is no universally accepted reference pH-scale for seawater and the difference between measurements based on different reference scales may be up to 0.14 units.
Properties
Salinity
Although the vast majority of seawater has a salinity of between 31 and 38 g/kg, that is 3.1–3.8%, seawater is not uniformly saline throughout the world. Where mixing occurs with freshwater runoff from river mouths, near melting glaciers or vast amounts of precipitation (e.g. monsoon), seawater can be substantially less saline. The most saline open sea is the Red Sea, where high rates of evaporation, low precipitation and low river run-off, and confined circulation result in unusually salty water. The salinity in isolated bodies of water can be considerably greater still about ten times higher in the case of the Dead Sea. Historically, several salinity scales were used to approximate the absolute salinity of seawater. A popular scale was the "Practical Salinity Scale" where salinity was measured in "practical salinity units (PSU)". The current standard for salinity is the "Reference Salinity" scale with the salinity expressed in units of "g/kg".
Density
The density of surface seawater ranges from about 1020 to 1029 kg/m3, depending on the temperature and salinity. At a temperature of 25 °C, the salinity of 35 g/kg and 1 atm pressure, the density of seawater is 1023.6 kg/m3. Deep in the ocean, under high pressure, seawater can reach a density of 1050 kg/m3 or higher. The density of seawater also changes with salinity. Brines generated by seawater desalination plants can have salinities up to 120 g/kg. The density of typical seawater brine of 120 g/kg salinity at 25 °C and atmospheric pressure is 1088 kg/m3.
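As a rough illustration of how density responds to temperature and salinity, the following sketch uses a simple linear equation of state with representative coefficients; it is not the TEOS-10 standard used in oceanography, and the reference values and coefficients are assumptions chosen only to approximate the figures quoted above.

```python
# Minimal sketch: linear equation of state for surface seawater density.
def seawater_density(T_C, S_gkg):
    rho0, T0, S0 = 1027.0, 10.0, 35.0   # reference density (kg/m^3) at 10 degC, 35 g/kg
    alpha = 2.0e-4                      # thermal expansion coefficient, 1/K (approximate)
    beta  = 7.6e-4                      # haline contraction coefficient, kg/g (approximate)
    return rho0 * (1.0 - alpha * (T_C - T0) + beta * (S_gkg - S0))

print(seawater_density(25, 35))    # ~1024 kg/m^3, close to the 1023.6 quoted above
print(seawater_density(25, 120))   # ~1090 kg/m^3 for desalination brine, vs ~1088 quoted
```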
pH value
The pH value at the surface of oceans in pre-industrial time (before 1850) was around 8.2. Since then, it has been decreasing due to a human-caused process called ocean acidification that is related to carbon dioxide emissions: Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05.
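Because pH is a logarithmic scale, the seemingly small drop quoted above corresponds to a substantial rise in hydrogen-ion concentration; a one-line check:

```python
# The pH scale is logarithmic, so a 0.1 pH drop is a sizeable rise in [H+].
pH_1950, pH_2020 = 8.15, 8.05              # approximate surface-ocean values quoted above
increase = 10 ** (pH_1950 - pH_2020) - 1
print(f"[H+] increase: {increase:.0%}")    # roughly a 26% rise
```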
The pH value of seawater is naturally as low as 7.8 in deep ocean waters as a result of degradation of organic matter in these waters. It can be as high as 8.4 in surface waters in areas of high biological productivity.
Measurement of pH is complicated by the chemical properties of seawater, and several distinct pH scales exist in chemical oceanography. There is no universally accepted reference pH-scale for seawater and the difference between measurements based on different reference scales may be up to 0.14 units.
Chemical composition
Seawater contains more dissolved ions than all types of freshwater. However, the ratios of solutes differ dramatically. For instance, although seawater contains about 2.8 times more bicarbonate than river water, the percentage of bicarbonate in seawater as a ratio of all dissolved ions is far lower than in river water. Bicarbonate ions constitute 48% of river water solutes but only 0.14% for seawater. Differences like these are due to the varying residence times of seawater solutes; sodium and chloride have very long residence times, while calcium (vital for carbonate formation) tends to precipitate much more quickly. The most abundant dissolved ions in seawater are sodium, chloride, magnesium, sulfate and calcium. Its osmolarity is about 1000 mOsm/L.
Small amounts of other substances are found, including amino acids at concentrations of up to 2 micrograms of nitrogen atoms per liter, which are thought to have played a key role in the origin of life.
Microbial components
Research in 1957 by the Scripps Institution of Oceanography sampled water in both pelagic and neritic locations in the Pacific Ocean. Direct microscopic counts and cultures were used, the direct counts in some cases showing up to 10 000 times that obtained from cultures. These differences were attributed to the occurrence of bacteria in aggregates, selective effects of the culture media, and the presence of inactive cells. A marked reduction in bacterial culture numbers was noted below the thermocline, but not by direct microscopic observation. Large numbers of spirilli-like forms were seen by microscope but not under cultivation. The disparity in numbers obtained by the two methods is well known in this and other fields. In the 1990s, improved techniques of detection and identification of microbes by probing just small snippets of DNA, enabled researchers taking part in the Census of Marine Life to identify thousands of previously unknown microbes usually present only in small numbers. This revealed a far greater diversity than previously suspected, so that a litre of seawater may hold more than 20,000 species. Mitchell Sogin from the Marine Biological Laboratory feels that "the number of different kinds of bacteria in the oceans could eclipse five to 10 million."
Bacteria are found at all depths in the water column, as well as in the sediments, some being aerobic, others anaerobic. Most are free-swimming, but some exist as symbionts within other organisms – examples of these being bioluminescent bacteria. Cyanobacteria played an important role in the evolution of ocean processes, enabling the development of stromatolites and oxygen in the atmosphere.
Some bacteria interact with diatoms, and form a critical link in the cycling of silicon in the ocean. One anaerobic species, Thiomargarita namibiensis, plays an important part in the breakdown of hydrogen sulfide eruptions from diatomaceous sediments off the Namibian coast, and generated by high rates of phytoplankton growth in the Benguela Current upwelling zone, eventually falling to the seafloor.
Bacteria-like Archaea surprised marine microbiologists by their survival and thriving in extreme environments, such as the hydrothermal vents on the ocean floor. Alkalotolerant marine bacteria such as Pseudomonas and Vibrio spp. survive in a pH range of 7.3 to 10.6, while some species will grow only at pH 10 to 10.6. Archaea also exist in pelagic waters and may constitute as much as half the ocean's biomass, clearly playing an important part in oceanic processes. In 2000 sediments from the ocean floor revealed a species of Archaea that breaks down methane, an important greenhouse gas and a major contributor to atmospheric warming. Some bacteria break down the rocks of the sea floor, influencing seawater chemistry. Oil spills, and runoff containing human sewage and chemical pollutants have a marked effect on microbial life in the vicinity, as well as harbouring pathogens and toxins affecting all forms of marine life. The protist dinoflagellates may at certain times undergo population explosions called blooms or red tides, often after human-caused pollution. The process may produce metabolites known as biotoxins, which move along the ocean food chain, tainting higher-order animal consumers.
Pandoravirus salinus, a species of very large virus, with a genome much larger than that of any other virus species, was discovered in 2013. Like the other very large viruses Mimivirus and Megavirus, Pandoravirus infects amoebas, but its genome, containing 1.9 to 2.5 megabases of DNA, is twice as large as that of Megavirus, and it differs greatly from the other large viruses in appearance and in genome structure.
In 2013 researchers from Aberdeen University announced that they were starting a hunt for undiscovered chemicals in organisms that have evolved in deep sea trenches, hoping to find "the next generation" of antibiotics, anticipating an "antibiotic apocalypse" with a dearth of new infection-fighting drugs. The EU-funded research will start in the Atacama Trench and then move on to search trenches off New Zealand and Antarctica.
The ocean has a long history of human waste disposal on the assumption that its vast size makes it capable of absorbing and diluting all noxious material.
While this may be true on a small scale, the large amounts of sewage routinely dumped has damaged many coastal ecosystems, and rendered them life-threatening. Pathogenic viruses and bacteria occur in such waters, such as Escherichia coli, Vibrio cholerae the cause of cholera, hepatitis A, hepatitis E and polio, along with protozoans causing giardiasis and cryptosporidiosis. These pathogens are routinely present in the ballast water of large vessels, and are widely spread when the ballast is discharged.
Other parameters
The speed of sound in seawater is about 1,500 m/s (whereas the speed of sound is usually around 330 m/s in air at roughly 101.3 kPa pressure, 1 atmosphere), and varies with water temperature, salinity, and pressure. The thermal conductivity of seawater is 0.6 W/mK at 25 °C and a salinity of 35 g/kg.
The thermal conductivity decreases with increasing salinity and increases with increasing temperature.
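A rough feel for how sound speed varies with temperature, salinity and depth can be had from a simplified empirical formula of the Medwin type; the coefficients below are quoted from memory and should be treated as illustrative rather than authoritative.

```python
# Sketch: approximate sound speed in seawater (m/s) as a function of
# temperature (degC), salinity (ppt) and depth (m).
def sound_speed(T_C, S_ppt, depth_m):
    return (1449.2 + 4.6 * T_C - 0.055 * T_C**2 + 2.9e-4 * T_C**3
            + (1.34 - 0.010 * T_C) * (S_ppt - 35) + 0.016 * depth_m)

print(sound_speed(25, 35, 0))     # ~1534 m/s near a warm surface
print(sound_speed(4, 35, 1000))   # ~1483 m/s in cold water at 1 km depth
```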
Origin and history
The water in the sea was thought to come from the Earth's volcanoes, starting 4 billion years ago, released by degassing from molten rock. More recent work suggests much of the Earth's water may come from comets.
Scientific theories behind the origins of sea salt started with Sir Edmond Halley in 1715, who proposed that salt and other minerals were carried into the sea by rivers after rainfall washed it out of the ground. Upon reaching the ocean, these salts concentrated as more salt arrived over time (see Hydrologic cycle). Halley noted that most lakes that do not have ocean outlets (such as the Dead Sea and the Caspian Sea, see endorheic basin), have high salt content. Halley termed this process "continental weathering".
Halley's theory was partly correct. In addition, sodium leached out of the ocean floor when the ocean formed. The presence of salt's other dominant ion, chloride, results from outgassing of chloride (as hydrochloric acid) with other gases from Earth's interior via volcanos and hydrothermal vents. The sodium and chloride ions subsequently became the most abundant constituents of sea salt.
Ocean salinity has been stable for billions of years, most likely as a consequence of a chemical/tectonic system which removes as much salt as is deposited; for instance, sodium and chloride sinks include evaporite deposits, pore-water burial, and reactions with seafloor basalts.
Human impacts
Climate change, rising levels of carbon dioxide in Earth's atmosphere, excess nutrients, and pollution in many forms are altering global oceanic geochemistry. Rates of change for some aspects greatly exceed those in the historical and recent geological record. Major trends include an increasing acidity, reduced subsurface oxygen in both near-shore and pelagic waters, rising coastal nitrogen levels, and widespread increases in mercury and persistent organic pollutants. Most of these perturbations are tied either directly or indirectly to human fossil fuel combustion, fertilizer, and industrial activity. Concentrations are projected to grow in coming decades, with negative impacts on ocean biota and other marine resources.
One of the most striking features of this is ocean acidification, resulting from increased CO2 uptake of the oceans related to higher atmospheric concentration of CO2 and higher temperatures, because it severely affects coral reefs, mollusks, echinoderms and crustaceans (see coral bleaching).
Seawater is a means of transportation throughout the world. Every day many ships cross the ocean to deliver goods to various locations around the world. Seawater is a tool for countries to participate efficiently in international commercial trade and transportation, but each ship exhausts emissions that can harm marine life and the air quality of coastal areas. Maritime transport is one of the fastest-growing sources of human-generated greenhouse gas emissions. The emissions released from ships pose significant risks to human health in nearby areas, as the oil and gas released from the operation of merchant ships degrade air quality and cause more pollution both in the seawater and the surrounding areas.
Another human use of seawater that has been considered is the use of seawater for agricultural purposes. In areas with higher regions of sand dunes, such as Israel, the use of seawater for irrigation of plants would eliminate substantial costs associated with fresh water when it is not easily accessible. Although it is not typical to use salt water as a means to grow plants as the salt gathers and ruins the surrounding soil, it has been proven to be successful in sand and gravel soils. Large-scale desalination of seawater is another factor that would contribute to the success of agriculture farming in dry, desert environments. One of the most successful plants in salt water agriculture is the halophyte. The halophyte is a salt tolerant plant whose cells are resistant to the typically detrimental effects of salt in soil. The endodermis forces a higher level of salt filtration throughout the plant as it allows for the circulation of more water through the cells. The cultivation of halophytes irrigated with salt water were used to grow animal feed for livestock; however, the animals that were fed these plants consumed more water than those that did not. Although agriculture from use of saltwater is still not recognized and used on a large scale, initial research has shown that there could be an opportunity to provide more crops in regions where agricultural farming is not usually feasible.
Human consumption
Accidentally consuming small quantities of clean seawater is not harmful, especially if the seawater is taken along with a larger quantity of fresh water. However, drinking seawater to maintain hydration is counterproductive; more water must be excreted to eliminate the salt (via urine) than the amount of water obtained from the seawater itself. In normal circumstances, it would be considered ill-advised to consume large amounts of unfiltered seawater.
The renal system actively regulates the levels of sodium and chloride in the blood within a very narrow range around 9 g/L (0.9% by mass).
In most open waters concentrations vary somewhat around typical values of about 3.5%, far higher than the body can tolerate and most beyond what the kidney can process. A point frequently overlooked in claims that the kidney can excrete NaCl in Baltic concentrations of 2% (in arguments to the contrary) is that the gut cannot absorb water at such concentrations, so that there is no benefit in drinking such water. The salinity of Baltic surface water, however, is never 2%. It is 0.9% or less, and thus never higher than that of bodily fluids. Drinking seawater temporarily increases blood's NaCl concentration. This signals the kidney to excrete sodium, but seawater's sodium concentration is above the kidney's maximum concentrating ability. Eventually the blood's sodium concentration rises to toxic levels, removing water from cells and interfering with nerve conduction, ultimately producing fatal seizure and cardiac arrhythmia.
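A back-of-the-envelope water balance, using the round figures above (about 35 g of salt per litre of seawater and a maximum urine salt concentration of roughly 2%), shows why drinking seawater dehydrates; the numbers are illustrative assumptions, not physiological data.

```python
# Illustrative water balance for drinking one litre of seawater.
salt_per_litre_seawater = 35.0   # g of salt per litre of seawater (typical value)
max_urine_salt_conc     = 20.0   # g of salt per litre of urine (assumed upper bound, ~2%)

urine_needed = salt_per_litre_seawater / max_urine_salt_conc   # litres of urine required
net_water    = 1.0 - urine_needed                              # net water gained (negative = loss)
print(f"Urine required: {urine_needed:.2f} L, net water balance: {net_water:+.2f} L")
```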
Survival manuals consistently advise against drinking seawater. A summary of 163 life raft voyages estimated the risk of death at 39% for those who drank seawater, compared to 3% for those who did not. The effect of seawater intake on rats confirmed the negative effects of drinking seawater when dehydrated.
The temptation to drink seawater was greatest for sailors who had expended their supply of fresh water and were unable to capture enough rainwater for drinking. This frustration was described famously by a line from Samuel Taylor Coleridge's The Rime of the Ancient Mariner:
Although humans cannot survive on seawater in place of normal drinking water, some people claim that up to two cups a day, mixed with fresh water in a 2:3 ratio, produces no ill effect. The French physician Alain Bombard survived an ocean crossing in a small Zodiak rubber boat using mainly raw fish meat, which contains about 40% water (like most living tissues), as well as small amounts of seawater and other provisions harvested from the ocean. His findings were challenged, but an alternative explanation could not be given. In his 1948 book The Kon-Tiki Expedition, Thor Heyerdahl reported drinking seawater mixed with fresh in a 2:3 ratio during the 1947 expedition. A few years later, another adventurer, William Willis, claimed to have drunk two cups of seawater and one cup of fresh per day for 70 days without ill effect when he lost part of his water supply.
During the 18th century, Richard Russell advocated the medical use of this practice in the UK, and René Quinton expanded the advocation of this practice to other countries, notably France, in the 20th century. Currently, it is widely practiced in Nicaragua and other countries, supposedly taking advantage of the latest medical discoveries.
Purification
Like any other type of raw or contaminated water, seawater can be evaporated or filtered to eliminate salt, germs, and other contaminants that would otherwise prevent it from being considered potable. Most oceangoing vessels desalinate potable water from seawater using processes such as vacuum distillation or multi-stage flash distillation in an evaporator, or, more recently, reverse osmosis. These energy-intensive processes were not usually available during the Age of Sail. Larger sailing warships with large crews, such as Nelson's , were fitted with distilling apparatus in their galleys.
The natural sea salt obtained by evaporating seawater can also be collected and sold as table salt, typically sold separately owing to its unique mineral make-up compared to rock salt or other sources.
A number of regional cuisines across the world traditionally incorporate seawater directly as an ingredient, cooking other ingredients in a diluted solution of filtered seawater as a substitute for conventional dry seasonings. Proponents include world-renowned chefs Ferran Adrià and Quique Dacosta, whose home country of Spain has six different companies sourcing filtered seawater for culinary use. The water is marketed as , "the perfect salt", containing less sodium with what is considered a superior taste. A restaurant run by Joaquín Baeza sources as much as 60,000 litres a month from supplier Mediterranea
Animals such as fish, whales, sea turtles, and seabirds, such as penguins and albatrosses, have adapted to living in a high-saline habitat. For example, sea turtles and saltwater crocodiles remove excess salt from their bodies through their tear ducts.
Mineral extraction
Minerals have been extracted from seawater since ancient times. Currently the four most concentrated metals – Na, Mg, Ca and K – are commercially extracted from seawater. During 2015 in the US 63% of magnesium production came from seawater and brines. Bromine is also produced from seawater in China and Japan. Lithium extraction from seawater was tried in the 1970s, but the tests were soon abandoned. The idea of extracting uranium from seawater has been considered at least from the 1960s, but only a few grams of uranium were extracted in Japan in the late 1990s. The main issue is not one of technological feasibility but that current prices on the uranium market for uranium from other sources are about three to five times lower than the lowest price achieved by seawater extraction. Similar issues hamper the use of reprocessed uranium and are often brought forth against nuclear reprocessing and the manufacturing of MOX fuel as economically unviable.
The future of mineral and element extractions
In order for seawater mineral and element extractions to take place while taking close consideration of sustainable practices, it is necessary for monitored management systems to be put in place. This requires management of ocean areas and their conditions, environmental planning, structured guidelines to ensure that extractions are controlled, regular assessments of the condition of the sea post-extraction, and constant monitoring. The use of technology, such as underwater drones, can facilitate sustainable extractions. The use of low-carbon infrastructure would also allow for more sustainable extraction processes while reducing the carbon footprint from mineral extractions.
Another practice that is being considered closely is the process of desalination in order to achieve a more sustainable water supply from seawater. Although desalination also comes with environmental concerns, such as costs and resources, researchers are working closely to determine more sustainable practices, such as creating more productive water plants that can deal with larger water supplies in areas where such plants weren't always available. Although seawater extractions can benefit society greatly, it is crucial to consider the environmental impact and to ensure that all extractions are conducted in a way that acknowledges and considers the associated risks to the sustainability of seawater ecosystems.
Standard
ASTM International has an international standard for artificial seawater: ASTM D1141-98 (Original Standard ASTM D1141-52). It is used in many research testing labs as a reproducible solution for seawater such as tests on corrosion, oil contamination, and detergency evaluation.
Ecosystems
The minerals found in seawater can also play an important role in the ocean and its ecosystem's food cycle. For example, the Southern Ocean contributes greatly to the environmental carbon cycle. Given that this body of water does not contain high levels of iron, the deficiency impacts the marine life living in its waters. As a result, this ocean is not able to produce as much phytoplankton which hinders the first source of the marine food chain. One of the main types of phytoplankton are diatoms which is the primary food source of Antarctic krill. As the cycle continues, various larger sea animals feed off of Antarctic krill, but since there is a shortage of iron from the initial phytoplankton/diatoms, then these larger species also lack iron. The larger sea animals include Baleen Whales such as the Blue Whale and Fin Whale. These whales not only rely on iron for a balance of minerals within their diet, but it also impacts the amount of iron that is regenerated back into the ocean. The whale's excretions also contain the absorbed iron which would allow iron to be reinserted into the ocean’s ecosystem. Overall, one mineral deficiency such as iron in the Southern Ocean can spark a significant chain of disturbances within the marine ecosystems which demonstrates the important role that seawater plays in the food chain.
Upon further analysis of the dynamic relationship between diatoms, krill, and baleen whales, fecal samples of baleen whales were examined in Antarctic seawater. The findings included that iron concentrations were 10 million times higher than those found in Antarctic seawater, and krill was found consistently throughout their feces which is an indicator that krill is in whale diets. Antarctic krill had an average iron level of 174.3mg/kg dry weight, but the iron in the krill varied from 12 to 174 mg/kg dry weight. The average iron concentration of the muscular tissue of blue whales and fin whales was 173 mg/kg dry weight, which demonstrates that the large marine mammals are important to marine ecosystems such as they are to the Southern Ocean. In fact, to have more whales in the ocean could heighten the amount of iron in seawater through their excretions which would promote a better ecosystem.
Krill and baleen whales act as large iron reservoirs in seawater in the Southern Ocean. Krill can retain up to 24% of the iron found in surface waters within its range. The process of krill feeding on diatoms releases iron into seawater, highlighting them as an important part of the ocean's iron cycle. The advantageous relationship between krill and baleen whales increases the amount of iron that can be recycled and stored in seawater. A positive feedback loop is created, increasing the overall productivity of marine life in the Southern Ocean.
Organisms of all sizes play a significant role in the balance of marine ecosystems with both the largest and smallest inhabitants contributing equally to recycling nutrients in seawater. Prioritizing the recovery of whale populations because they boost the overall productivity in marine ecosystems as well as increasing iron levels in seawater would allow for a balanced and productive system for the ocean. However, a more in depth study is required to understand the benefits of whale feces as a fertilizer and to provide further insight in iron recycling in the Southern Ocean. Projects on the management of ecosystems and conservation are vital for advancing knowledge of marine ecology.
Environmental impact and sustainability
Like any mineral extraction practices, there are environmental advantages and disadvantages. Cobalt and Lithium are two key metals that can be used for aiding with more environmentally friendly technologies above ground, such as powering batteries that energize electric vehicles or creating wind power. An environmentally friendly approach to mining that allows for more sustainability would be to extract these metals from the seafloor. Lithium mining from the seafloor at mass quantities could provide a substantial amount of renewable metals to promote more environmentally friendly practices in society to reduce humans' carbon footprint. Lithium mining from the seafloor could be successful, but its success would be dependent on more productive recycling practices above ground.
There are also risks that come with extracting from the seafloor. Many biodiverse species on the seafloor have long lifespans, which means that their reproduction takes more time. As with fish harvesting from the seafloor, extracting minerals in large amounts, too quickly, and without proper protocols can disrupt the underwater ecosystems. This would have the opposite of the intended effect, preventing mineral extraction from being a long-term sustainable practice and resulting in a shortage of the required metals. Any seawater mineral extraction also risks disrupting the habitat of underwater life that depends on an uninterrupted ecosystem, as disturbances can have significant effects on animal communities.
See also
global ocean salinity
References
External links
Technical Papers in Marine Science 44, Algorithms for computation of fundamental properties of seawater, ioc-unesco.org, UNESCO 1983
Tables
Tables and software for thermophysical properties of seawater, MIT
Aquatic ecology
Chemical oceanography
Liquid water
Physical oceanography
Oceanographical terminology | Seawater | [
"Physics",
"Chemistry",
"Biology"
] | 5,585 | [
"Applied and interdisciplinary physics",
"Chemical oceanography",
"Ecosystems",
"Physical oceanography",
"Aquatic ecology"
] |
255,446 | https://en.wikipedia.org/wiki/Thermodynamic%20potential | A thermodynamic potential (or more accurately, a thermodynamic potential energy) is a scalar quantity used to represent the thermodynamic state of a system. Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. While thermodynamic potentials cannot be measured directly, they can be predicted using computational chemistry.
One main thermodynamic potential that has a physical interpretation is the internal energy . It is the energy of configuration of a given system of conservative forces (that is why it is called potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for . In other words, each thermodynamic potential is equivalent to other thermodynamic potentials; each potential is a different expression of the others.
In thermodynamics, external forces, such as gravity, are counted as contributing to total energy rather than to thermodynamic potentials. For example, the working fluid in a steam engine sitting on top of Mount Everest has higher total energy due to gravity than it has at the bottom of the Mariana Trench, but the same thermodynamic potentials. This is because the gravitational potential energy belongs to the total energy rather than to thermodynamic potentials such as internal energy.
Description and interpretation
Five common thermodynamic potentials are the internal energy U, the Helmholtz free energy F = U − TS, the enthalpy H = U + PV, the Gibbs free energy G = U + PV − TS, and the Landau potential (grand potential) Ω = U − TS − Σᵢ μᵢNᵢ,
where T = temperature, S = entropy, P = pressure, V = volume. Nᵢ is the number of particles of type i in the system and μᵢ is the chemical potential for an i-type particle. The set of all Nᵢ are also included as natural variables but may be ignored when no chemical reactions are occurring which cause them to change. The Helmholtz free energy is in the ISO/IEC standard called Helmholtz energy or Helmholtz function. It is often denoted by the symbol F, but the use of A is preferred by IUPAC, ISO and IEC.
These five common potentials are all potential energies, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials.
Just as in mechanics, where potential energy is defined as capacity to do work, similarly different potentials have different meanings like the below:
Internal energy (U) is the capacity to do work plus the capacity to release heat.
Gibbs energy (G) is the capacity to do non-mechanical work.
Enthalpy (H) is the capacity to do non-mechanical work plus the capacity to release heat.
Helmholtz energy (F) is the capacity to do mechanical work plus non-mechanical work.
From these meanings (which actually apply in specific conditions, e.g. constant pressure, temperature, etc.), for positive changes (e.g., ΔU > 0), we can say that ΔU is the energy added to the system, ΔF is the total work done on it, ΔG is the non-mechanical work done on it, and ΔH is the sum of non-mechanical work done on the system and the heat given to it.
Note that the sum of internal energy is conserved, but the sum of Gibbs energy, or Helmholtz energy, are not conserved, despite being named "energy". They can be better interpreted as the potential to perform "useful work", and the potential can be wasted.
Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. The chemical reactions usually take place under some constraints such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards a lower value of a potential and at equilibrium, under these constraints, the potential will take the unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint.
In particular: (see principle of minimum energy for a derivation)
When the entropy and "external parameters" (e.g. volume) of a closed system are held constant, the internal energy decreases and reaches a minimum value at equilibrium. This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle.
When the temperature and external parameters of a closed system are held constant, the Helmholtz free energy decreases and reaches a minimum value at equilibrium.
When the pressure and external parameters of a closed system are held constant, the enthalpy decreases and reaches a minimum value at equilibrium.
When the temperature , pressure and external parameters of a closed system are held constant, the Gibbs free energy decreases and reaches a minimum value at equilibrium.
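The minimum principle at constant temperature and volume can be illustrated with a small numerical sketch: for a collection of independent two-level units, minimizing F = U − TS over the excited-state fraction reproduces the Boltzmann occupation (units with k_B = 1; the model and parameters are assumptions chosen for illustration).

```python
# Minimizing F(x) = U - T*S over the excited-state fraction x of independent
# two-level units (splitting eps = 1) recovers the Boltzmann result.
from math import log, exp

def F(x, T, eps=1.0):
    U = eps * x                                   # energy per unit
    S = -(x * log(x) + (1 - x) * log(1 - x))      # mixing entropy per unit
    return U - T * S

for T in (0.5, 1.0, 2.0):
    xs = [i / 10000 for i in range(1, 10000)]
    x_min = min(xs, key=lambda x: F(x, T))        # brute-force minimum of F
    x_boltzmann = 1.0 / (1.0 + exp(1.0 / T))      # analytic equilibrium occupation
    print(f"T={T}: minimizing F gives x={x_min:.4f}, Boltzmann gives {x_boltzmann:.4f}")
```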
Natural variables
For each thermodynamic potential, there are thermodynamic variables that need to be held constant to specify the potential value at a thermodynamical equilibrium state, such as independent variables for a mathematical function. These variables are termed the natural variables of that potential. The natural variables are important not only to specify the potential value at the equilibrium, but also because if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables and this is true for no other combination of variables. If a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system.
The set of natural variables for each of the above four thermodynamic potentials is formed from a combination of the T, S, P, V variables, excluding any pairs of conjugate variables; there is no natural variable set for a potential including the T–S or the P–V variables together as conjugate variables for energy. An exception to this rule is the Nᵢ–μᵢ conjugate pairs, as there is no reason to ignore these in the thermodynamic potentials, and in fact we may additionally define the four potentials for each species. Using IUPAC notation in which the brackets contain the natural variables (other than the main four), we have U(S, V, {Nᵢ}), F(T, V, {Nᵢ}), H(S, P, {Nᵢ}), G(T, P, {Nᵢ}), together with potentials such as U[μⱼ] = U − μⱼNⱼ, whose natural variables are (S, V, μⱼ, {Nᵢ≠ⱼ}).
If there is only one species, then we are done. But, if there are, say, two species, then there will be additional potentials such as U[μ₁, μ₂] = U − μ₁N₁ − μ₂N₂ and so on. If there are D dimensions to the thermodynamic space, then there are 2ᴰ unique thermodynamic potentials. For the most simple case, a single phase ideal gas, there will be three dimensions, yielding 2³ = 8 thermodynamic potentials.
Fundamental equations
The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow. (Actually they are all expressions of the same fundamental thermodynamic relation, but are expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy U of a system can be written as the sum of heat flowing into the system subtracted by the work done by the system on the environment, along with any change due to the addition of new particles to the system:
dU = δQ − δW + Σᵢ μᵢ dNᵢ,
where δQ is the infinitesimal heat flow into the system, δW is the infinitesimal work done by the system, μᵢ is the chemical potential of particle type i and Nᵢ is the number of type-i particles. (Neither δQ nor δW are exact differentials, i.e., they are thermodynamic process path-dependent. Small changes in these variables are, therefore, represented with δ rather than d.)
By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In case of reversible changes we have:
δQ = T dS and δW = P dV,
where
T is temperature,
S is entropy,
P is pressure,
and V is volume, and the equality holds for reversible processes.
This leads to the standard differential form of the internal energy in case of a quasistatic reversible change:
dU = T dS − P dV + Σᵢ μᵢ dNᵢ.
Since U, S and V are thermodynamic functions of state (also called state functions), the above relation also holds for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:
dU = T dS − Σᵢ Xᵢ dxᵢ + Σⱼ μⱼ dNⱼ.
Here the Xᵢ are the generalized forces corresponding to the external variables xᵢ.
Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials (fundamental thermodynamic equations or fundamental thermodynamic relation):
dU = T dS − P dV + Σᵢ μᵢ dNᵢ
dF = −S dT − P dV + Σᵢ μᵢ dNᵢ
dH = T dS + V dP + Σᵢ μᵢ dNᵢ
dG = −S dT + V dP + Σᵢ μᵢ dNᵢ
The infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of 2ᴰ fundamental equations.
The differences between the four thermodynamic potentials can be summarized as follows: passing from U or F to H or G adds the term d(PV), while passing from U or H to F or G subtracts the term d(TS), so that H − U = G − F = PV and U − F = H − G = TS.
Equations of state
We can use the above equations to derive some differential definitions of some thermodynamic parameters. If we define Φ to stand for any of the thermodynamic potentials, then the above equations are of the form:
dΦ = Σᵢ xᵢ dyᵢ,
where xᵢ and yᵢ are conjugate pairs, and the yᵢ are the natural variables of the potential Φ. From the chain rule it follows that:
xⱼ = (∂Φ/∂yⱼ)_{yᵢ≠ⱼ},
where {yᵢ≠ⱼ} is the set of all natural variables of Φ except yⱼ, which are held as constants. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state since they specify parameters of the thermodynamic state. If we restrict ourselves to the potentials U (internal energy), F (Helmholtz energy), H (enthalpy) and G (Gibbs energy), then we have the following equations of state (subscripts showing natural variables that are held as constants):
+T = (∂U/∂S)_{V,{Nᵢ}} = (∂H/∂S)_{P,{Nᵢ}}
−P = (∂U/∂V)_{S,{Nᵢ}} = (∂F/∂V)_{T,{Nᵢ}}
+V = (∂H/∂P)_{S,{Nᵢ}} = (∂G/∂P)_{T,{Nᵢ}}
−S = (∂G/∂T)_{P,{Nᵢ}} = (∂F/∂T)_{V,{Nᵢ}}
+μⱼ = (∂Φ/∂Nⱼ)_{X,{Nᵢ≠ⱼ}}
where, in the last equation, Φ is any of the thermodynamic potentials (U, F, H, or G), and X are the set of natural variables for that potential, excluding Nⱼ. If we use all thermodynamic potentials, then we will have more equations of state such as
−Nⱼ = (∂U[μⱼ]/∂μⱼ)_{S,V,{Nᵢ≠ⱼ}}
and so on. In all, if the thermodynamic space is D dimensions, then there will be D equations for each potential, resulting in a total of D·2ᴰ equations of state because 2ᴰ thermodynamic potentials exist. If the D equations of state for a particular potential are known, then the fundamental equation for that potential (i.e., the exact differential of the thermodynamic potential) can be determined. This means that all thermodynamic information about the system will be known because the fundamental equations for any other potential can be found via the Legendre transforms and the corresponding equations of state for each potential as partial derivatives of the potential can also be found.
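The counting argument above can be checked with a few lines of code for the simple case D = 3 (conjugate pairs T–S, P–V and μ–N), which gives 2³ = 8 potentials and 3·8 = 24 equations of state.

```python
# Counting check: 2**D Legendre-transformed potentials and D equations of
# state for each of them, for D = 3 conjugate pairs.
from itertools import combinations

pairs = ["T-S", "P-V", "mu-N"]
D = len(pairs)
potentials = [subset for r in range(D + 1) for subset in combinations(pairs, r)]
print(len(potentials))       # 2**3 = 8 thermodynamic potentials
print(D * len(potentials))   # 3 * 8 = 24 equations of state
```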
Measurement of thermodynamic potentials
The above equations of state suggest methods to experimentally measure changes in the thermodynamic potentials using physically measurable parameters. For example the free energy expressions
and
can be integrated at constant temperature and quantities to obtain:
(at constant T, {Nj} )
(at constant T, {Nj} )
which can be measured by monitoring the measurable variables of pressure, temperature and volume. Changes in the enthalpy and internal energy can be measured by calorimetry (which measures the amount of heat ΔQ released or absorbed by a system). The expressions
can be integrated:
(at constant P, {Nj} )
(at constant V, {Nj} )
Note that these measurements are made at constant {Nj } and are therefore not applicable to situations in which chemical reactions take place.
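As a numerical sketch of the constant-temperature integration above, the following compares a midpoint-rule evaluation of −∫P dV for one mole of ideal gas with the closed form −nRT ln(V₂/V₁); the gas and volume range are arbitrary illustrative choices.

```python
# Delta F = -int P dV at constant T for an ideal gas, checked against the
# analytic result -n R T ln(V2/V1).
from math import log

n, R, T = 1.0, 8.314, 298.15          # mol, J/(mol K), K
V1, V2, steps = 0.010, 0.030, 100000  # m^3, number of midpoint-rule steps

dV = (V2 - V1) / steps
integral = sum(n * R * T / (V1 + (i + 0.5) * dV) * dV for i in range(steps))
delta_F_numeric  = -integral
delta_F_analytic = -n * R * T * log(V2 / V1)
print(delta_F_numeric, delta_F_analytic)   # both ~ -2.72e3 J
```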
Maxwell relations
Again, define xᵢ and yᵢ to be conjugate pairs, and the yᵢ to be the natural variables of some potential Φ. We may take the "cross differentials" of the state equations, which obey the following relationship:
∂/∂yⱼ (∂Φ/∂yₖ) = ∂/∂yₖ (∂Φ/∂yⱼ)
From these we get the Maxwell relations. There will be D(D − 1)/2 of them for each potential, giving a total of D(D − 1)/2 · 2ᴰ equations in all. If we restrict ourselves to the potentials U, F, H and G:
(∂T/∂V)_S = −(∂P/∂S)_V
(∂T/∂P)_S = +(∂V/∂S)_P
(∂S/∂V)_T = +(∂P/∂T)_V
−(∂S/∂P)_T = (∂V/∂T)_P
Using the equations of state involving the chemical potential we get equations such as:
(∂T/∂Nⱼ)_{V,S} = (∂μⱼ/∂S)_{V,{N}}
and using the other potentials we can get equations such as:
(∂μⱼ/∂V)_{S,{N}} = −(∂P/∂Nⱼ)_{S,V}
(∂μⱼ/∂P)_{T,{N}} = (∂V/∂Nⱼ)_{T,P}
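A Maxwell relation can also be verified numerically. The sketch below uses a monatomic-ideal-gas Helmholtz energy (with an arbitrary additive constant, which drops out of the derivatives) and central finite differences to check (∂S/∂V)_T = (∂P/∂T)_V.

```python
# Finite-difference check of (dS/dV)_T = (dP/dT)_V for an ideal gas, using
# F(T,V) = -N k T [ln(V/N) + 1.5 ln T + c0].
from math import log

N, k, c0 = 1.0e22, 1.380649e-23, 10.0
h = 1e-6

def F(T, V):
    return -N * k * T * (log(V / N) + 1.5 * log(T) + c0)

def S(T, V):                      # S = -(dF/dT)_V
    return -(F(T + h, V) - F(T - h, V)) / (2 * h)

def P(T, V):                      # P = -(dF/dV)_T
    return -(F(T, V + h) - F(T, V - h)) / (2 * h)

T0, V0 = 300.0, 1.0e-3
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)
dP_dT = (P(T0 + h, V0) - P(T0 - h, V0)) / (2 * h)
print(dS_dV, dP_dT)               # both ~ N*k/V0 ~ 1.4e2 J/(K m^3)
```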
Euler relations
Again, define xᵢ and yᵢ to be conjugate pairs, and the yᵢ to be the natural variables of the internal energy.
Since all of the natural variables of the internal energy U are extensive quantities,
it follows from Euler's homogeneous function theorem that the internal energy can be written as:
U(S, V, {Nᵢ}) = S (∂U/∂S) + V (∂U/∂V) + Σᵢ Nᵢ (∂U/∂Nᵢ).
From the equations of state, we then have:
U = T S − P V + Σᵢ μᵢ Nᵢ.
This formula is known as an Euler relation, because Euler's theorem on homogeneous functions leads to it. (It was not discovered by Euler in an investigation of thermodynamics, which did not exist in his day.).
Substituting into the expressions for the other main potentials we have:
F = −P V + Σᵢ μᵢ Nᵢ
H = T S + Σᵢ μᵢ Nᵢ
G = Σᵢ μᵢ Nᵢ
As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Thus, there is another Euler relation, based on the expression of entropy as a function of internal energy and other extensive variables. Yet other Euler relations hold for other fundamental equations for energy or entropy, as respective functions of other state variables including some intensive state variables.
Gibbs–Duhem relation
Deriving the Gibbs–Duhem equation from basic thermodynamic state equations is straightforward. Equating any thermodynamic potential definition with its Euler relation expression yields:
U = T S − P V + Σᵢ μᵢ Nᵢ
Differentiating, and using the second law:
dU = T dS − P dV + Σᵢ μᵢ dNᵢ
yields:
0 = S dT − V dP + Σᵢ Nᵢ dμᵢ
Which is the Gibbs–Duhem relation. The Gibbs–Duhem is a relationship among the intensive parameters of the system. It follows that for a simple system with components, there will be independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume for example. The law is named after Josiah Willard Gibbs and Pierre Duhem.
Stability conditions
As the internal energy is a convex function of entropy and volume, the stability condition requires that the second derivative of the internal energy with respect to entropy or volume be positive. It is commonly expressed as ∂²U/∂X² > 0, with X = S or V. Since the maximum principle of entropy is equivalent to the minimum principle of internal energy, the combined criterion for stability or thermodynamic equilibrium is expressed as dU = 0 and d²U > 0 for the parameters entropy and volume. This is analogous to the dS = 0 and d²S < 0 condition for entropy at equilibrium. The same concept can be applied to the various thermodynamic potentials by identifying whether they are convex or concave functions of their respective variables.
(∂²F/∂T²)_V ≤ 0 and (∂²F/∂V²)_T ≥ 0,
where the Helmholtz energy is a concave function of temperature and a convex function of volume.
(∂²H/∂P²)_S ≤ 0 and (∂²H/∂S²)_P ≥ 0,
where the enthalpy is a concave function of pressure and a convex function of entropy.
(∂²G/∂T²)_P ≤ 0 and (∂²G/∂P²)_T ≤ 0,
where the Gibbs potential is a concave function of both pressure and temperature.
In general the thermodynamic potentials (the internal energy and its Legendre transforms) are convex functions of their extensive variables and concave functions of their intensive variables. The stability conditions impose that the isothermal compressibility is positive and that, for non-negative temperature, C_P ≥ C_V.
Chemical reactions
Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions: at constant S and V the relevant potential is U, at constant S and P it is H, at constant T and V it is F, and at constant T and P it is G. Δ denotes the change in the potential, and at equilibrium the change will be zero.
Most commonly one considers reactions at constant and , so the Gibbs free energy is the most useful potential in studies of chemical reactions.
See also
Coomber's relationship
Notes
References
Further reading
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994,
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009,
Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971,
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974,
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008,
External links
Thermodynamic Potentials – Georgia State University
Chemical Potential Energy: The 'Characteristic' vs the Concentration-Dependent Kind
Thermodynamics
Potentials
Thermodynamic equations | Thermodynamic potential | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,467 | [
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics",
"Dynamical systems"
] |
255,447 | https://en.wikipedia.org/wiki/Helmholtz%20free%20energy | In thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a thermodynamic potential that measures the useful work obtainable from a closed thermodynamic system at a constant temperature (isothermal). The change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. At constant temperature, the Helmholtz free energy is minimized at equilibrium.
In contrast, the Gibbs free energy or free enthalpy is most commonly used as a measure of thermodynamic potential (especially in chemistry) when it is convenient for applications that occur at constant pressure. For example, in explosives research Helmholtz free energy is often used, since explosive reactions by their nature induce pressure changes. It is also frequently used to define fundamental equations of state of pure substances.
The concept of free energy was developed by Hermann von Helmholtz, a German physicist, and first presented in 1882 in a lecture called "On the thermodynamics of chemical processes". From the German word Arbeit (work), the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol A and the name Helmholtz energy. In physics, the symbol F is also used in reference to free energy or Helmholtz function.
Definition
The Helmholtz free energy is defined as
F ≡ U − TS,
where
F is the Helmholtz free energy (sometimes also called A, particularly in the field of chemistry) (SI: joules, CGS: ergs),
U is the internal energy of the system (SI: joules, CGS: ergs),
T is the absolute temperature (kelvins) of the surroundings, modelled as a heat bath,
S is the entropy of the system (SI: joules per kelvin, CGS: ergs per kelvin).
The Helmholtz energy is the Legendre transformation of the internal energy U, in which temperature replaces entropy as the independent variable.
Formal development
The first law of thermodynamics in a closed system provides
dU = δQ + δW,
where U is the internal energy, δQ is the energy added as heat, and δW is the work done on the system. The second law of thermodynamics for a reversible process yields δQ = T dS. In case of a reversible change, the work done can be expressed as δW = −P dV (ignoring electrical and other non-PV work) and so:
dU = T dS − P dV.
Applying the product rule for differentiation to d(TS) = T dS + S dT, it follows
dU = d(TS) − S dT − P dV,
and
d(U − TS) = −S dT − P dV.
The definition of F = U − TS allows us to rewrite this as
dF = −S dT − P dV.
Because F is a thermodynamic function of state, this relation is also valid for a process (without electrical work or composition change) that is not reversible.
Minimum free energy and maximum work principles
The laws of thermodynamics are only directly applicable to systems in thermal equilibrium. If we wish to describe phenomena like chemical reactions, then the best we can do is to consider suitably chosen initial and final states in which the system is in (metastable) thermal equilibrium. If the system is kept at fixed volume and is in contact with a heat bath at some constant temperature, then we can reason as follows.
Since the thermodynamical variables of the system are well defined in the initial state and the final state, the internal energy increase ΔU, the entropy increase ΔS, and the total amount of work that can be extracted, performed by the system, W, are well defined quantities. Conservation of energy implies
ΔU_bath + ΔU + W = 0.
The volume of the system is kept constant. This means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by
Q_bath = ΔU_bath = −(ΔU + W).
The heat bath remains in thermal equilibrium at temperature T no matter what the system does. Therefore, the entropy change of the heat bath is
ΔS_bath = Q_bath / T = −(ΔU + W) / T.
The total entropy change is thus given by
ΔS_total = ΔS_bath + ΔS = ΔS − (ΔU + W) / T = −(ΔU − TΔS + W) / T.
Since the system is in thermal equilibrium with the heat bath in the initial and the final states, T is also the temperature of the system in these states. The fact that the system's temperature does not change allows us to express the numerator as the free energy change of the system:
ΔS_total = −(ΔF + W) / T.
Since the total change in entropy must always be larger or equal to zero, we obtain the inequality
W ≤ −ΔF.
We see that the total amount of work that can be extracted in an isothermal process is limited by the free-energy decrease, and that increasing the free energy in a reversible process requires work to be done on the system. If no work is extracted from the system, then
ΔF ≤ 0,
and thus for a system kept at constant temperature and volume and not capable of performing electrical or other non-PV work, the total free energy during a spontaneous change can only decrease.
This result seems to contradict the equation dF = −S dT − P dV, as keeping T and V constant seems to imply dF = 0, and hence F = constant. In reality there is no contradiction: In a simple one-component system, to which the validity of the equation dF = −S dT − P dV is restricted, no process can occur at constant T and V, since there is a unique P(T, V) relation, and thus T, V, and P are all fixed. To allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamical state space of the system. In case of a chemical reaction, one must allow for changes in the numbers Nj of particles of each type j. The differential of the free energy then generalizes to
dF = −S dT − P dV + Σⱼ μⱼ dNⱼ,
where the Nⱼ are the numbers of particles of type j and the μⱼ are the corresponding chemical potentials. This equation is then again valid for both reversible and non-reversible changes. In case of a spontaneous change at constant T and V, the last term will thus be negative.
In case there are other external parameters, the above relation further generalizes to
dF = −S dT − Σᵢ Xᵢ dxᵢ + Σⱼ μⱼ dNⱼ.
Here the xᵢ are the external variables, and the Xᵢ the corresponding generalized forces.
Relation to the canonical partition function
A system kept at constant volume, temperature, and particle number is described by the canonical ensemble. The probability of finding the system in some energy eigenstate r is given by
P_r = exp(−β E_r) / Z,
where
β ≡ 1 / (k T),
E_r is the energy of accessible state r, and
Z = Σ_r exp(−β E_r).
Z is called the partition function of the system. The fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values. In the thermodynamical limit of infinite system size, the relative fluctuations in these averages will go to zero.
The average internal energy of the system is the expectation value of the energy and can be expressed in terms of Z as follows:
U ≡ ⟨E⟩ = −∂ ln Z / ∂β.
If the system is in state r, then the generalized force corresponding to an external variable x is given by
X_r = −∂E_r / ∂x.
The thermal average of this can be written as
X = Σ_r P_r X_r = (1/β) ∂ ln Z / ∂x.
Suppose that the system has one external variable . Then changing the system's temperature parameter by and the external variable by will lead to a change in :
If we write as
we get
This means that the change in the internal energy is given by
In the thermodynamic limit, the fundamental thermodynamic relation should hold:
This then implies that the entropy of the system is given by
where c is some constant. The value of c can be determined by considering the limit T → 0. In this limit the entropy becomes , where is the ground-state degeneracy. The partition function in this limit is , where is the ground-state energy. Thus, we see that and that
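As a numerical illustration of these relations (a minimal sketch, not from the source; the two energy levels and the temperature are arbitrary choices, with the Boltzmann constant set to 1), the following Python fragment evaluates Z, U, S, and F for a toy two-level system and checks that F = −T ln Z agrees with U − TS.

```python
import math

# Toy two-level system: energies in units where k_B = 1 (arbitrary choices).
energies = [0.0, 1.0]
T = 0.75
beta = 1.0 / T

# Canonical partition function Z = sum_r exp(-beta * E_r)
Z = sum(math.exp(-beta * E) for E in energies)

# Boltzmann probabilities P_r = exp(-beta * E_r) / Z
probs = [math.exp(-beta * E) / Z for E in energies]

# Average internal energy U = sum_r P_r * E_r
U = sum(p * E for p, E in zip(probs, energies))

# Gibbs entropy S = -sum_r P_r ln P_r  (with k_B = 1)
S = -sum(p * math.log(p) for p in probs)

# Helmholtz free energy computed two ways: F = -T ln Z and F = U - T S
F_from_Z = -T * math.log(Z)
F_from_U = U - T * S

print(f"Z = {Z:.6f}, U = {U:.6f}, S = {S:.6f}")
print(f"F = -T ln Z = {F_from_Z:.6f}")
print(f"F = U - T S = {F_from_U:.6f}")  # agrees with -T ln Z
```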
Relating free energy to other variables
Combining the definition of Helmholtz free energy F = U − TS
along with the fundamental thermodynamic relation dU = T dS − P dV + μ dN, which gives dF = −S dT − P dV + μ dN,
one can find expressions for entropy, pressure and chemical potential: S = −(∂F/∂T)_(V,N), P = −(∂F/∂V)_(T,N), μ = (∂F/∂N)_(T,V).
These three equations, along with the free energy in terms of the partition function, F = −kT ln Z,
allow an efficient way of calculating thermodynamic variables of interest given the partition function and are often used in density of state calculations. One can also do Legendre transformations for different systems. For example, for a system with a magnetic field or potential, the corresponding conjugate quantities follow analogously as derivatives of the free energy with respect to the field or the potential.
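A minimal sketch of how these derivative relations can be used in practice, assuming the standard classical partition function of a monatomic ideal gas (the particle mass, particle number, and state point below are illustrative values, not from the source): the pressure and entropy are recovered from finite differences of F = −kT ln Z, and the pressure reproduces the ideal-gas law.

```python
import math

# Classical monatomic ideal gas (illustrative constants; SI units)
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
m = 6.63e-26          # particle mass (~argon), kg -- illustrative value
N = 1.0e4             # number of particles (kept small for a cheap example)

def ln_Z(T, V):
    """ln of the ideal-gas partition function Z = V^N (2*pi*m*k*T/h^2)^(3N/2) / N!"""
    return (N * math.log(V)
            + 1.5 * N * math.log(2.0 * math.pi * m * k * T / h**2)
            - math.lgamma(N + 1))          # ln N! via lgamma

def F(T, V):
    """Helmholtz free energy F = -k T ln Z."""
    return -k * T * ln_Z(T, V)

T, V = 300.0, 1.0e-6   # 300 K, 1 cm^3 (illustrative state point)

# Pressure and entropy from central finite differences of F
dV, dT = V * 1e-6, T * 1e-6
P_numeric = -(F(T, V + dV) - F(T, V - dV)) / (2 * dV)   # P = -(dF/dV)_T
S_numeric = -(F(T + dT, V) - F(T - dT, V)) / (2 * dT)   # S = -(dF/dT)_V

print(f"P from -dF/dV : {P_numeric:.6e} Pa")
print(f"ideal-gas law : {N * k * T / V:.6e} Pa")        # should agree closely
print(f"S from -dF/dT : {S_numeric:.6e} J/K")
```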
Bogoliubov inequality
Computing the free energy is an intractable problem for all but the simplest models in statistical physics. A powerful approximation method is mean-field theory, which is a variational method based on the Bogoliubov inequality. This inequality can be formulated as follows.
Suppose we replace the real Hamiltonian H of the model by a trial Hamiltonian H̃, which has different interactions and may depend on extra parameters that are not present in the original model. If we choose this trial Hamiltonian such that ⟨H̃⟩ = ⟨H⟩,
where both averages are taken with respect to the canonical distribution defined by the trial Hamiltonian H̃, then the Bogoliubov inequality states F ≤ F̃,
where F is the free energy of the original Hamiltonian, and F̃ is the free energy of the trial Hamiltonian. We will prove this below.
By including a large number of parameters in the trial Hamiltonian and minimizing the free energy, we can expect to get a close approximation to the exact free energy.
The Bogoliubov inequality is often applied in the following way. If we write the Hamiltonian as H = H_0 + ΔH,
where H_0 is some exactly solvable Hamiltonian, then we can apply the above inequality by defining H̃ = H_0 + ⟨ΔH⟩_0.
Here we have defined ⟨X⟩_0 to be the average of X over the canonical ensemble defined by H_0. Since H̃ defined this way differs from H_0 only by a constant, its canonical distribution is the same as that of H_0, and we have in general ⟨X⟩_H̃ = ⟨X⟩_0,
where ⟨X⟩_0 is still the average over H_0, as specified above. Therefore,
⟨H̃⟩ = ⟨H_0⟩_0 + ⟨ΔH⟩_0 = ⟨H⟩,
and thus the inequality F ≤ F̃
holds. The free energy F̃ is the free energy of the model defined by H_0 plus ⟨ΔH⟩_0. This means that F̃ = F_0 + ⟨ΔH⟩_0, and thus F ≤ F_0 + ⟨ΔH⟩_0.
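The following Python sketch (illustrative only; the three-state spectra and the one-parameter trial family are arbitrary toy choices, not from the source) checks the resulting bound F ≤ F_0 + ⟨ΔH⟩_0 numerically for diagonal Hamiltonians and locates the trial parameter that makes the bound tightest, which is how the inequality is used in mean-field calculations.

```python
import math

def free_energy(energies, T):
    """F = -T ln Z for a discrete spectrum (k_B = 1)."""
    Z = sum(math.exp(-E / T) for E in energies)
    return -T * math.log(Z)

def boltzmann(energies, T):
    """Canonical probabilities for a discrete spectrum."""
    Z = sum(math.exp(-E / T) for E in energies)
    return [math.exp(-E / T) / Z for E in energies]

# "Real" diagonal Hamiltonian H and a one-parameter family of trial Hamiltonians H0
# acting on the same three states (all values are arbitrary toy choices).
H = [0.0, 1.0, 3.0]
def H0(lam):
    return [0.0, lam, 2.0 * lam]

T = 1.0
F_exact = free_energy(H, T)

best = None
for i in range(1, 301):
    lam = 0.01 * i
    E0 = H0(lam)
    p0 = boltzmann(E0, T)                                       # weights of the trial model
    dH_avg = sum(p * (e - e0) for p, e, e0 in zip(p0, H, E0))   # <H - H0>_0
    bound = free_energy(E0, T) + dH_avg                          # F0 + <dH>_0
    assert bound >= F_exact - 1e-12                              # Bogoliubov inequality
    if best is None or bound < best[1]:
        best = (lam, bound)

print(f"exact F    = {F_exact:.6f}")
print(f"best bound = {best[1]:.6f} at lambda = {best[0]:.2f}")
```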
Proof of the Bogoliubov inequality
For a classical model we can prove the Bogoliubov inequality as follows. We denote the canonical probability distributions for the Hamiltonian and the trial Hamiltonian by P and P̃, respectively. From Gibbs' inequality we know that:
holds. To see this, consider the difference between the left hand side and the right hand side. We can write this as:
Since
it follows that:
where in the last step we have used that both probability distributions are normalized to 1.
We can write the inequality as:
where the averages are taken with respect to . If we now substitute in here the expressions for the probability distributions:
and
we get:
Since the averages of H and H̃ are, by assumption, identical, we have:
Here we have used that the partition functions are constants with respect to taking averages and that the free energy is proportional to minus the logarithm of the partition function.
We can easily generalize this proof to the case of quantum mechanical models. We denote the eigenstates of H̃ by |r⟩. We denote the diagonal components of the density matrices for the canonical distributions for H and H̃ in this basis as:
and
where the E_r are the eigenvalues of H̃.
We assume again that the averages of H and H̃ in the canonical ensemble defined by H̃ are the same:
where
The inequality
still holds, as both the P_r and the P̃_r sum to 1. On the l.h.s. we can replace:
On the right-hand side we can use the inequality
where we have introduced the notation
for the expectation value of the operator Y in the state r. Taking the logarithm of this inequality gives:
This allows us to write:
The fact that the averages of H and H̃ are the same then leads to the same conclusion as in the classical case:
Generalized Helmholtz energy
In the more general case, the mechanical term P dV must be replaced by the product of volume, stress, and an infinitesimal strain: dF = V Σ_ij σ_ij dε_ij − S dT + Σ_i μ_i dN_i,
where σ_ij is the stress tensor, and ε_ij is the strain tensor. In the case of linear elastic materials that obey Hooke's law, the stress is related to the strain by σ_ij = C_ijkl ε_kl,
where we are now using Einstein notation for the tensors, in which repeated indices in a product are summed. We may integrate the expression for dF to obtain the Helmholtz energy: F = (1/2) V C_ijkl ε_ij ε_kl = (1/2) V σ_ij ε_ij.
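As a sketch of this quadratic form (not from the source; the isotropic Lamé constants and the strain values below are illustrative assumptions), the following fragment builds the isotropic stiffness tensor, applies Hooke's law, and evaluates the elastic Helmholtz energy both from C_ijkl ε_ij ε_kl and from σ_ij ε_ij.

```python
import numpy as np

# Isotropic linear elasticity (Lamé constants of a steel-like solid; illustrative values)
lam, mu = 1.15e11, 7.7e10          # Pa
V = 1.0e-6                         # m^3

# Stiffness tensor C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)
d = np.eye(3)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# A small symmetric strain tensor (arbitrary example)
eps = np.array([[1e-4, 2e-5, 0.0],
                [2e-5, -5e-5, 0.0],
                [0.0,  0.0,  3e-5]])

# Stress from Hooke's law: sigma_ij = C_ijkl eps_kl
sigma = np.einsum('ijkl,kl->ij', C, eps)

# Helmholtz energy of deformation: F = (1/2) V C_ijkl eps_ij eps_kl = (1/2) V sigma_ij eps_ij
F_elastic = 0.5 * V * np.einsum('ij,ij->', sigma, eps)

# Equivalent closed form for the isotropic case: F/V = lam/2 (tr eps)^2 + mu eps_ij eps_ij
F_closed = V * (0.5 * lam * np.trace(eps)**2 + mu * np.einsum('ij,ij->', eps, eps))

print(F_elastic, F_closed)   # the two evaluations agree
```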
Application to fundamental equations of state
The Helmholtz free energy function for a pure substance (together with its partial derivatives) can be used to determine all other thermodynamic properties for the substance. See, for example, the equations of state for water, as given by the IAPWS in their IAPWS-95 release.
Application to training auto-encoders
Hinton and Zemel "derive an objective function for training auto-encoder based on the minimum description length (MDL) principle". The description length of an input vector using a particular code is the sum of the code cost and the reconstruction cost; they define this to be the energy of the code. The true expected combined cost is F = Σ_i p_i E_i + Σ_i p_i ln p_i, the expected energy of the codes minus the entropy of the distribution over codes,
"which has exactly the form of Helmholtz free energy".
See also
Gibbs free energy and thermodynamic free energy for thermodynamics history overview and discussion of free energy
Grand potential
Enthalpy
Statistical mechanics, which details the Helmholtz energy from the point of view of thermal and statistical physics
Bennett acceptance ratio for an efficient way to calculate free energy differences and comparison with other methods.
References
Further reading
Atkins' Physical Chemistry, 7th edition, by Peter Atkins and Julio de Paula, Oxford University Press
HyperPhysics Helmholtz Free Energy Helmholtz and Gibbs Free Energies
Physical quantities
Hermann von Helmholtz
State functions
Thermodynamic free energy | Helmholtz free energy | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,731 | [
"State functions",
"Thermodynamic properties",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Thermodynamic free energy",
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Physical properties"
] |
2,567,355 | https://en.wikipedia.org/wiki/Minatec | Minatec (initially called the Micro and Nanotechnology Innovation Centre) is a research complex specializing in micro/nano technologies in Grenoble, France.
The center was inaugurated in June 2006 by François Loos, French Minister Delegate for Industry, as a partnership between LETI (the Electronics and Information Technologies Laboratory of CEA, the French Atomic Energy Commission) and Grenoble Institute of Technology (Université Grenoble Alpes). The site was already home to LETI, Europe's top center for applied research in microelectronics and nanotechnology. Minatec combines a physical research campus with a network of companies, researchers, and engineering schools. It was launched to foster technology transfer, with applications in energy and communications.
The complex is home to 3,000 researchers, 1,200 students, and 600 technology transfer experts on a 20-hectare campus offering 10,000 square meters of cleanroom space. It offers a continuum that includes student technology transfer, industry, and applied research.
The Minatec campus has dedicated special-events facilities (900 m²), including 20-person conference rooms and a 400-seat amphitheater. These spaces are available to researchers for their scientific events, such as the international conference held every two years.
Minatec includes fundamental research labs like INAC and FMNT, plus a major technological research lab, Leti. Minatec also cooperates with the Institut Néel and the RTRA, which are located nearby.
Funding
Minatec represents an investment of 193.5 million euros between 2002 and 2005, mainly paid by local authorities and the CEA.
See also
Polygone Scientifique
References
External links
Research institutes in France
Microtechnology
Nanotechnology institutions
Educational institutions in Grenoble
Grenoble Institute of Technology
Science and technology in Grenoble
Organizations established in 2006
2006 establishments in France | Minatec | [
"Materials_science",
"Engineering"
] | 376 | [
"Nanotechnology",
"Nanotechnology institutions",
"Materials science",
"Microtechnology"
] |
2,568,484 | https://en.wikipedia.org/wiki/Geon%20%28physics%29 | In general relativity, a geon is a nonsingular electromagnetic or gravitational wave which is held together in a confined region by the gravitational attraction of its own field energy. They were first investigated theoretically in 1955 by J. A. Wheeler, who coined the term as a contraction of "gravitational electromagnetic entity".
Overview
Since general relativity is a classical field theory, Wheeler's concept of a geon does not treat them as quantum-mechanical entities, and this generally remains true today. Nonetheless, Wheeler speculated that there might be a relationship between geons and elementary particles. This idea continues to attract some attention among physicists, but in the absence of a viable theory of quantum gravity, the accuracy of this speculative idea cannot be tested.
Wheeler did not present explicit geon solutions to the vacuum Einstein field equation, a gap which was partially filled by Dieter R. Brill and James Hartle in 1964 by the Brill–Hartle geon. In 1997, Anderson and Brill gave a rigorous proof that geon solutions of the vacuum Einstein equation exist, though they are not given in a simple closed form.
A major outstanding question regarding geons is whether they are stable, or must decay over time as the energy of the wave gradually "leaks" away. This question has not yet been definitively answered, but the consensus seems to be that they probably cannot be stable. This would lay to rest Wheeler's initial hope that a geon might serve as a classical model for stable elementary particles. However, this would not rule out the possibility that geons are stabilized by quantum effects. In fact, a quantum generalization of the gravitational geon using low-energy quantum gravity shows that geons are stable systems even when quantum effects are turned on. The quantum geon (called "graviball") is described as gravitons bound by their gravitational self-interaction. Since geons (classical or quantum) have a mass but are electromagnetically neutral, they are possible candidates for dark matter.
See also
Black hole electron
Edwin Power
Geometrodynamics
Kugelblitz
Quantum foam
References
Further reading
General relativity
Quantum gravity
Black holes | Geon (physics) | [
"Physics",
"Astronomy"
] | 434 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"General relativity",
"Quantum gravity",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects",
"Physics beyond the Standard Model"
] |
2,568,551 | https://en.wikipedia.org/wiki/Brannock%20Device | The Brannock Device is a measuring instrument invented by Charles F. Brannock for measuring a person's shoe size. Brannock spent two years developing a simple means of measuring the length, width, and arch length of the human foot. He eventually improved on the wooden RITZ Stick, the industry standard of the day, patenting his first prototype in 1925 and an improved version in 1927. The device has both left and right heel cups and is rotated through 180 degrees to measure the second foot. Brannock later formed the Brannock Device Company to manufacture and sell the product, and headed the company until 1992 when he died at age 89. The Smithsonian Institution has the nearly complete records of the development of the Brannock Device and subsequent marketing.
The Brannock Device Company was headquartered in Syracuse, New York, until shortly after Charles Brannock's death. Salvatore Leonardi purchased the company from the Brannock Estate in 1993, and moved manufacturing to a small factory in Liverpool, New York.
On May 31, 2018, the Syracuse minor league baseball team had a one-night promotion and rebranded as the Syracuse Devices in honor of the Brannock Device.
Sizing system
The modern Brannock device takes three measurements of each foot:
Foot length: the length from heel to the tip of the longest toe (in increments of barleycorns)
Arch length: the length from heel to the inside of the ball of the foot, or medial metatarsophalangeal joint
Width: the width of the foot perpendicular to the length
Foot and arch lengths correspond to numeric Brannock sizes, and foot widths correspond to letter Brannock widths AAAA (narrowest) to EEEE (widest), as follows:
Women's Brannock sizes are offset from men's by one:
(Size chart not reproduced: its columns are Heel-to-Toe (Foot) Length, Heel-to-Ball (Arch) Length, and the widths AAAA, AAA, AA, A, B, C, D, E, EE, EEE, and EEEE; a companion row lists whole sizes 16 through 25.)
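The arithmetic implied by the sizing scheme can be sketched as follows (this is an illustration only: the article states the barleycorn increment and the one-size offset between women's and men's scales, but the length-to-size offset constant used here is an assumed value, not taken from the chart above).

```python
# Illustrative sketch only: the Brannock chart itself is not reproduced above, so the
# offset constant below (MENS_OFFSET) is an assumption based on the common US scale,
# not a value taken from this article. What the article does state is that sizes step
# in barleycorn (1/3 inch) increments and that women's sizes are offset by one.

BARLEYCORN_IN = 1.0 / 3.0   # one barleycorn = 1/3 inch = one full shoe size
MENS_OFFSET = 22            # assumed constant: men's size ~ 3 * length_in_inches - 22

def us_mens_size(foot_length_in: float) -> float:
    """Approximate US men's size from heel-to-toe length in inches."""
    return foot_length_in / BARLEYCORN_IN - MENS_OFFSET

def us_womens_size(foot_length_in: float) -> float:
    """Women's size = men's size + 1, following the one-size offset noted above."""
    return us_mens_size(foot_length_in) + 1

length = 10.5  # inches, example foot length
print(f"men's   ~ {us_mens_size(length):.1f}")
print(f"women's ~ {us_womens_size(length):.1f}")
```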
References
Bibliography
External links
The Brannock Device Co., Inc.
Charles Brannock: MIT inventor of the Week (August 2001)
Brannock Company history and archives
Brannock Device, an early Design Drawing from the Smithsonian (1920s) Smithsonian Institution Libraries
Dimensional instruments
Shoemaking
Anthropometry
Manufacturing companies based in Syracuse, New York
American inventions | Brannock Device | [
"Physics",
"Mathematics"
] | 1,308 | [
"Quantity",
"Dimensional instruments",
"Physical quantities",
"Size"
] |
2,570,207 | https://en.wikipedia.org/wiki/Bioplastic | Bioplastics are plastic materials produced from renewable biomass sources. Historically, bioplastics made from natural materials like shellac or cellulose were the first plastics. Since the end of the 19th century they have been increasingly superseded by fossil-fuel plastics derived from petroleum or natural gas (fossilized biomass is not considered renewable on a reasonably short timescale). Today, in the context of bioeconomy and circular economy, bioplastics are gaining interest again. Conventional petro-based polymers are increasingly blended with bioplastics to manufacture "bio-attributed" or "mass-balanced" plastic products, so the difference between bio- and other plastics might be difficult to define.
Bioplastics can be produced by:
processing directly from natural biopolymers including polysaccharides (e.g., corn starch or rice starch, cellulose, chitosan, and alginate) and proteins (e.g., soy protein, gluten, and gelatin),
chemical synthesis from sugar derivatives (e.g., lactic acid) and lipids (such as vegetable fats and oils) from either plants or animals,
fermentation of sugars or lipids,
biotechnological production in microorganisms or genetically modified plants (e.g., polyhydroxyalkanoates (PHA).
One advantage of bioplastics is their independence from fossil fuel as a raw material, which is a finite and globally unevenly distributed resource linked to petroleum politics and environmental impacts. Bioplastics can utilize previously unused waste materials (e.g., straw, woodchips, sawdust, and food waste). Life cycle analysis studies show that some bioplastics can be made with a lower carbon footprint than their fossil counterparts, for example when biomass is used as raw material and also for energy production. However, other bioplastics' processes are less efficient and result in a higher carbon footprint than fossil plastics.
Whether any kind of plastic is degradable or non-degradable (durable) depends on its molecular structure, not on whether or not the biomass constituting the raw material is fossilized. Both durable bioplastics, such as Bio-PET or biopolyethylene (bio-based analogues of fossil-based polyethylene terephthalate and polyethylene), and degradable bioplastics, such as polylactic acid, polybutylene succinate, or polyhydroxyalkanoates, exist. Bioplastics must be recycled similar to fossil-based plastics to avoid plastic pollution; "drop-in" bioplastics (such as biopolyethylene) fit into existing recycling streams. On the other hand, recycling biodegradable bioplastics in the current recycling streams poses additional challenges, as it may raise the cost of sorting and decrease the yield and the quality of the recyclate. However, biodegradation is not the only acceptable end-of-life disposal pathway for biodegradable bioplastics, and mechanical and chemical recycling are often the preferred choice from the environmental point of view.
Biodegradability may offer an end-of-life pathway in certain applications, such as agricultural mulch, but the concept of biodegradation is not as straightforward as many believe. Susceptibility to biodegradation is highly dependent on the chemical backbone structure of the polymer, and different bioplastics have different structures, thus it cannot be assumed that bioplastic in the environment will readily disintegrate. Conversely, biodegradable plastics can also be synthesized from fossil fuels.
As of 2018, bioplastics represented approximately 2% of the global plastics output (>380 million tons). In 2022, the commercially most important types of bioplastics were PLA and products based on starch. With continued research on bioplastics, investment in bioplastic companies and rising scrutiny on fossil-based plastics, bioplastics are becoming more dominant in some markets, while the output of fossil plastics also steadily increases.
IUPAC definition
The International Union of Pure and Applied Chemistry define biobased polymer as:
Proposed applications
Few commercial applications exist for bioplastics. Cost and performance remain problematic. Typical is the example of Italy, where biodegradable plastic bags have been compulsory for shoppers since 2011, with the introduction of a specific law. Beyond structural materials, electroactive bioplastics are being developed that promise to carry electric current.
Bioplastics are used for disposable items, such as packaging, crockery, cutlery, pots, bowls, and straws.
Biopolymers are available as coatings for paper rather than the more common petrochemical coatings.
Bioplastics called drop-in bioplastics are chemically identical to their fossil-fuel counterparts but made from renewable resources. Examples include bio-PE, bio-PET, bio-propylene, bio-PP, and biobased nylons. Drop-in bioplastics are easy to implement technically, as existing infrastructure can be used. A dedicated bio-based pathway allows to produce products that cannot be obtained through traditional chemical reactions and can create products which have unique and superior properties, compared to fossil-based alternatives.
Types
Polysaccharide-based bioplastics
Starch-based plastics
Thermoplastic starch represents the most widely used bioplastic, constituting about 50 percent of the bioplastics market. Simple starch bioplastic film can be made at home by gelatinizing starch and solution casting. Pure starch is able to absorb humidity, and is thus a suitable material for the production of drug capsules by the pharmaceutical sector. However, pure starch-based bioplastic is brittle. Plasticizer such as glycerol, glycol, and sorbitol can also be added so that the starch can also be processed thermo-plastically. The characteristics of the resulting bioplastic (also called "thermoplastic starch") can be tailored to specific needs by adjusting the amounts of these additives. Conventional polymer processing techniques can be used to process starch into bioplastic, such as extrusion, injection molding, compression molding and solution casting. The properties of starch bioplastic is largely influenced by amylose/amylopectin ratio. Generally, high-amylose starch results in superior mechanical properties. However, high-amylose starch has less processability because of its higher gelatinization temperature and higher melt viscosity.
Starch-based bioplastics are often blended with biodegradable polyesters to produce starch/polylactic acid, starch/polycaprolactone or starch/Ecoflex (polybutylene adipate-co-terephthalate produced by BASF) blends. These blends are used for industrial applications and are also compostable. Other producers, such as Roquette, have developed other starch/polyolefin blends. These blends are not biodegradable, but have a lower carbon footprint than petroleum-based plastics used for the same applications.
Starch is cheap, abundant, and renewable.
Starch-based films (mostly used for packaging purposes) are made mainly from starch blended with thermoplastic polyesters to form biodegradable and compostable products. These films are seen specifically in consumer goods packaging of magazine wrappings and bubble films. In food packaging, these films are seen as bakery or fruit and vegetable bags. Composting bags with this films are used in selective collecting of organic waste. Further, starch-based films can be used as a paper.
Starch-based nanocomposites have been widely studied, showing improved mechanical properties, thermal stability, moisture resistance, and gas barrier properties.
Cellulose-based plastics
Cellulose bioplastics are mainly the cellulose esters (including cellulose acetate and nitrocellulose) and their derivatives, including celluloid.
Cellulose can become thermoplastic when extensively modified. An example of this is cellulose acetate, which is expensive and therefore rarely used for packaging. However, cellulosic fibers added to starches can improve mechanical properties, permeability to gas, and water resistance due to being less hydrophilic than starch.
Protein-based plastics
Bioplastics can be made from proteins from different sources. For example, wheat gluten and casein show promising properties as a raw material for different biodegradable polymers.
Additionally, soy protein is being considered as another source of bioplastic. Soy proteins have been used in plastic production for over one hundred years. For example, body panels of an original Ford automobile were made of soy-based plastic.
There are difficulties with using soy protein-based plastics due to their water sensitivity and relatively high cost. Therefore, producing blends of soy protein with some already-available biodegradable polyesters improves the water sensitivity and cost.
Some aliphatic polyesters
The aliphatic biopolyesters are mainly polyhydroxyalkanoates (PHAs) like the poly-3-hydroxybutyrate (PHB), polyhydroxyvalerate (PHV) and polyhydroxyhexanoate (PHH).
Polylactic acid (PLA)
Polylactic acid (PLA) is a transparent plastic produced from maize or dextrose. Superficially, it is similar to conventional petrochemical-based mass plastics like PS. It is derived from plants, and it biodegrades under industrial composting conditions. Unfortunately, it exhibits inferior impact strength, thermal robustness, and barrier properties (blocking air transport across the membrane) compared to non-biodegradable plastics. PLA and PLA blends generally come in the form of granulates. PLA is used on a limited scale for the production of films, fibers, plastic containers, cups, and bottles. PLA is also the most common type of plastic filament used for home fused deposition modeling in 3D printers.
Poly-3-hydroxybutyrate
The biopolymer poly-3-hydroxybutyrate (PHB) is a polyester produced by certain bacteria processing glucose, corn starch or wastewater. Its characteristics are similar to those of the petroplastic polypropylene (PP). PHB production is increasing. The South American sugar industry, for example, has decided to expand PHB production to an industrial scale. PHB is distinguished primarily by its physical characteristics. It can be processed into a transparent film with a melting point higher than 130 degrees Celsius, and is biodegradable without residue.
Polyhydroxyalkanoates
Polyhydroxyalkanoates (PHA) are linear polyesters produced in nature by bacterial fermentation of sugar or lipids. They are produced by the bacteria to store carbon and energy. In industrial production, the polyester is extracted and purified from the bacteria by optimizing the conditions for the fermentation of sugar. More than 150 different monomers can be combined within this family to give materials with extremely different properties. PHA is more ductile and less elastic than other plastics, and it is also biodegradable. These plastics are being widely used in the medical industry.
Polyamide 11
PA 11 is a biopolymer derived from natural oil. It is also known under the tradename Rilsan B, commercialized by Arkema. PA 11 belongs to the technical polymers family and is not biodegradable. Its properties are similar to those of PA 12, although emissions of greenhouse gases and consumption of nonrenewable resources are reduced during its production. Its thermal resistance is also superior to that of PA 12. It is used in high-performance applications like automotive fuel lines, pneumatic airbrake tubing, electrical cable antitermite sheathing, flexible oil and gas pipes, control fluid umbilicals, sports shoes, electronic device components, and catheters.
A similar plastic is Polyamide 410 (PA 410), derived 70% from castor oil, under the trade name EcoPaXX, commercialized by DSM.
PA 410 is a high-performance polyamide that combines the benefits of a high melting point (approx. 250 °C), low moisture absorption and excellent resistance to various chemical substances.
Bio-derived polyethylene
The basic building block (monomer) of polyethylene is ethylene. Ethylene is chemically similar to, and can be derived from, ethanol, which can be produced by fermentation of agricultural feedstocks such as sugar cane or corn. Bio-derived polyethylene is chemically and physically identical to traditional polyethylene – it does not biodegrade but can be recycled. The Brazilian chemicals group Braskem claims that using its method of producing polyethylene from sugar cane ethanol captures (removes from the environment) 2.15 tonnes of CO2 per tonne of Green Polyethylene produced.
Genetically modified feedstocks
With GM corn being a common feedstock, it is unsurprising that some bioplastics are made from this.
Under the bioplastics manufacturing technologies there is the "plant factory" model, which uses genetically modified crops or genetically modified bacteria to optimize efficiency.
Polyhydroxyurethanes
The condensation of polyamines and cyclic carbonates produces polyhydroxyurethanes. Unlike traditional cross-linked polyurethanes, cross-linked polyhydroxyurethanes are in principle amenable to recycling and reprocessing through dynamic transcarbamoylation reactions.
Lipid derived polymers
A number of bioplastic classes have been synthesized from plant- and animal-derived fats and oils. Polyurethanes, polyesters, epoxy resins and a number of other types of polymers have been developed with comparable properties to crude-oil-based materials. The recent development of olefin metathesis has opened a wide variety of feedstocks to economical conversion into biomonomers and polymers. With the growing production of traditional vegetable oils as well as low-cost microalgae-derived oils, there is huge potential for growth in this area.
In 2024, Lamanna et al. introduced oleogels based on ethyl cellulose and vegetable oils as a novel bioplastic, named OleoPlast. This bioplastic exhibits thermoplastic behavior, offering both recyclability and biodegradability. The key advantages of OleoPlast include the ability to customize its mechanical and physical properties, as well as its compatibility with different processing techniques, such as injection molding, hot pressing, extrusion, and fused filament fabrication.
Environmental impact
Materials such as starch, cellulose, wood, sugar and biomass are used as a substitute for fossil fuel resources to produce bioplastics; this makes the production of bioplastics a more sustainable activity compared to conventional plastic production. The environmental impact of bioplastics is often debated, as there are many different metrics for "greenness" (e.g., water use, energy use, deforestation, biodegradation, etc.). Hence bioplastic environmental impacts are categorized into nonrenewable energy use, climate change, eutrophication and acidification. Bioplastic production significantly reduces greenhouse gas emissions and decreases non-renewable energy consumption. Firms worldwide would also be able to increase the environmental sustainability of their products by using bioplastics
Although bioplastics save more nonrenewable energy than conventional plastics and emit less greenhouse gasses compared to conventional plastics, bioplastics also have negative environmental impacts such as eutrophication and acidification. Bioplastics induce higher eutrophication potentials than conventional plastics. Biomass production during industrial farming practices causes nitrate and phosphate to filtrate into water bodies; this causes eutrophication, the process in which a body of water gains excessive richness of nutrients. Eutrophication is a threat to water resources around the world since it causes harmful algal blooms that create oxygen dead zones, killing aquatic animals. Bioplastics also increase acidification. The high increase in eutrophication and acidification caused by bioplastics is also caused by using chemical fertilizer in the cultivation of renewable raw materials to produce bioplastics.
Other environmental impacts of bioplastics include exerting lower human and terrestrial ecotoxicity and carcinogenic potentials compared to conventional plastics. However, bioplastics exert higher aquatic ecotoxicity than conventional materials. Bioplastics and other bio-based materials increase stratospheric ozone depletion compared to conventional plastics; this is a result of nitrous oxide emissions from fertilizer application during industrial farming for biomass production. Artificial fertilizers increase nitrous oxide emissions, especially when the crop does not need all the nitrogen. Minor environmental impacts of bioplastics include toxicity through using pesticides on the crops used to make bioplastics. Bioplastics also cause carbon dioxide emissions from harvesting vehicles. Other minor environmental impacts include high water consumption for biomass cultivation, soil erosion, soil carbon losses and loss of biodiversity, and they are mainly a result of land use associated with bioplastics. Land use for bioplastics production leads to lost carbon sequestration and increases the carbon costs while diverting land from its existing uses.
Although bioplastics are extremely advantageous because they reduce non-renewable consumption and GHG emissions, they also negatively affect the environment through land and water consumption, using pesticide and fertilizer, eutrophication and acidification; hence one's preference for either bioplastics or conventional plastics depends on what one considers the most important environmental impact.
Another issue with bioplastics, is that some bioplastics are made from the edible parts of crops. This makes the bioplastics compete with food production because the crops that produce bioplastics can also be used to feed people. These bioplastics are called "1st generation feedstock bioplastics".
2nd generation feedstock bioplastics use non-food crops (cellulosic feedstock) or waste materials from 1st generation feedstock (e.g. waste vegetable oil). Third generation feedstock bioplastics use algae as the feedstock.
Biodegradation of bioplastics
Biodegradation of any plastic is a process that happens at the solid/liquid interface, whereby enzymes in the liquid phase depolymerize the solid phase. Certain types of bioplastics, as well as conventional plastics containing additives, are able to biodegrade. Bioplastics can biodegrade in different environments, which makes them more widely acceptable than conventional plastics. Biodegradation of bioplastics occurs under various environmental conditions, including soil, aquatic environments, and compost. Both the structure and the composition of the biopolymer or bio-composite affect the biodegradation process, so changing the composition and structure may increase biodegradability. Soil and compost are more efficient environments for biodegradation because of their high microbial diversity. Composting not only biodegrades bioplastics efficiently but also significantly reduces the emission of greenhouse gases. Biodegradability of bioplastics in compost environments can be improved by adding more soluble sugar and increasing the temperature. Soil environments also have a high diversity of microorganisms, making it easier for bioplastics to biodegrade; however, bioplastics in soil need higher temperatures and a longer time to biodegrade. Some bioplastics biodegrade more efficiently in water bodies and marine systems, but this poses a danger to marine and freshwater ecosystems: biodegradation of bioplastics in water bodies, which can harm aquatic organisms and water quality, is therefore one of the negative environmental impacts of bioplastics.
Bioplastics for construction materials
The concept of bioplastics dates back to the early 20th century. However, significant advancements occurred in the 1980s and 1990s when researchers began developing biodegradable plastics from natural sources. The construction industry started to take notice of bioplastics' potential in the late 2000s, driven by the global push for greener building practices.
In recent years, bioplastics have seen considerable advancements in terms of durability, cost-effectiveness, and performance. Innovations in biopolymer blends and composites have made bioplastics more suitable for construction applications, ranging from insulation to structural components.
Applications in construction
Insulation: Bioplastics can be used to create effective and eco-friendly insulation materials. Polylactic acid (PLA) and polyhydroxyalkanoates (PHA) are commonly used for this purpose due to their thermal properties and biodegradability.
Flooring: Bioplastic composites, such as those made from PLA and natural fibers, offer durable and sustainable alternatives to traditional flooring materials. They are particularly valued for their low carbon footprint and recyclability.
Panels and cladding: Bioplastic panels, made from blends of natural fibers and biopolymers, provide an eco-friendly option for wall cladding and partitioning. These materials are lightweight, durable, and can be designed to mimic traditional materials like wood or stone.
Formwork: Bioplastics are increasingly used in formwork for concrete casting. They offer advantages in terms of reusability, weight reduction, and reduced environmental impact compared to conventional materials.
Reinforcement: Bioplastic composites reinforced with natural fibers or other materials can be used in structural applications, offering a sustainable alternative to steel or fiberglass.
Benefits of bioplastics in construction
Environmental impact
Reduced carbon footprint: Bioplastics are derived from renewable sources, significantly reducing the carbon footprint of construction materials.
Biodegradability: Many bioplastics are biodegradable, which helps to reduce waste and environmental pollution at the end of their lifecycle.
Energy efficiency: The production of bioplastics generally requires less energy compared to conventional plastics, further reducing their environmental impact.
Economic benefits
Resource efficiency: Using bioplastics can reduce dependence on fossil fuels and contribute to more efficient use of natural resources.
Market growth: The bioplastics market is expanding, driven by increasing demand for sustainable construction materials. This growth presents new economic opportunities for manufacturers and suppliers.
Challenges and limitations
Cost: Bioplastics are often more expensive to produce than traditional plastics, which can be a barrier to widespread adoption in the cost-sensitive construction industry. However, ongoing research and technological advancements are expected to reduce costs over time.
Performance: While bioplastics have made significant strides, some types still lag behind traditional materials in terms of strength, durability, and resistance to environmental factors like UV exposure and moisture.
Limited applications: Currently, bioplastics are suitable for a limited range of applications within construction. Expanding their use to more demanding structural roles will require further development and testing.
Future prospects
The future of bioplastics in construction looks promising, with continued research and innovation likely to expand their applications and improve their performance. As the construction industry increasingly embraces sustainability, bioplastics are poised to play a critical role in the development of eco-friendly building materials.
Bioplastics offer a sustainable and versatile alternative to traditional construction materials, with significant environmental and economic benefits. While challenges remain, particularly in terms of cost and performance, the ongoing advancements in bioplastic technology hold the potential to transform the construction industry and contribute to a more sustainable future.
Industry and markets
While plastics based on organic materials were manufactured by chemical companies throughout the 20th century, the first company solely focused on bioplastics—Marlborough Biopolymers—was founded in 1983. However, Marlborough and other ventures that followed failed to find commercial success, with the first such company to secure long-term financial success being the Italian company Novamont, founded in 1989.
Bioplastics remain less than one percent of all plastics manufactured worldwide. Most bioplastics do not yet save more carbon emissions than are required to manufacture them. It is estimated that replacing 250 million tons of the plastic manufactured each year with bio-based plastics would require 100 million hectares of land, or 7 percent of the arable land on Earth. And when bioplastics reach the end of their life cycle, those designed to be compostable and marketed as biodegradable are often sent to landfills due to the lack of proper composting facilities or waste sorting, where they then release methane as they break down anaerobically.
COPA (Committee of Agricultural Organisation in the European Union) and COGEGA (General Committee for the Agricultural Cooperation in the European Union) have made an assessment of the potential of bioplastics in different sectors of the European economy:
History and development of bioplastics
1855: First (inferior) version of linoleum produced
1862: At the Great London Exhibition, Alexander Parkes displays Parkesine, the first thermoplastic. Parkesine is made from nitrocellulose and had very good properties, but exhibits extreme flammability. (White 1998)
1897: Still produced today, Galalith is a milk-based bioplastic that was created by German chemists in 1897. Galalith is primarily found in buttons. (Thielen 2014)
1907: Leo Baekeland invented Bakelite, which received the National Historic Chemical Landmark for its non-conductivity and heat-resistant properties. It is used in radio and telephone casings, kitchenware, firearms and many more products. (Pathak, Sneha, Mathew 2014)
1912: Brandenberger invents Cellophane out of wood, cotton, or hemp cellulose. (Thielen 2014)
1920s: Wallace Carothers finds Polylactic Acid (PLA) plastic. PLA is incredibly expensive to produce and is not mass-produced until 1989. (Whiteclouds 2018)
1925: Polyhydroxybutyrate was isolated and characterised by French microbiologist Maurice Lemoigne
1926: Maurice Lemoigne invents polyhydroxybutyrate (PHB) which is the first bioplastic made from bacteria. (Thielen 2014)
1930s: The first bioplastic car was made from soy beans by Henry Ford. (Thielen 2014)
1940-1945: During World War II, an increase in plastic production is seen as it is used in many wartime materials. Due to government funding and oversight the United States production of plastics (in general, not just bioplastics) tripled during 1940-1945 (Rogers 2005). The 1942 U.S. government short film The Tree in a Test Tube illustrates the major role bioplastics played in the World War II victory effort and the American economy of the time.
1950s: Amylomaize (>50% amylose content corn) was successfully bred and commercial bioplastics applications started to be explored. (Liu, Moult, Long, 2009) A decline in bioplastic development is seen due to the cheap oil prices, however the development of synthetic plastics continues.
1970s: The environmental movement spurred more development in bioplastics. (Rogers 2005)
1983: The first bioplastics company, Marlborough Biopolymers, is started which uses a bacteria-based bioplastic called . (Feder 1985)
1989: PLA is developed further by Dr. Patrick R. Gruber, who figures out how to create PLA from corn. (Whiteclouds 2018). The leading bioplastic company, Novamont, is founded; Novamont uses Mater-Bi, a bioplastic, in multiple different applications. (Novamont 2018)
Late 1990s: The development of TP starch and BIOPLAST from research and production of the company BIOTEC lead to the BIOFLEX film. BIOFLEX film can be classified as blown film extrusion, flat film extrusion, and injection moulding lines. These three classifications have applications as follows: Blown films - sacks, bags, trash bags, mulch foils, hygiene products, diaper films, air bubble films, protective clothing, gloves, double rib bags, labels, barrier ribbons; Flat films - trays, flower pots, freezer products and packaging, cups, pharmaceutical packaging; Injection moulding - disposable cutlery, cans, containers, performed pieces, CD trays, cemetery articles, golf tees, toys, writing materials. (Lorcks 1998)
1992: It is reported in Science that PHB can be produced by the plant Arabidopsis thaliana. (Poirier, Dennis, Klomparens, Nawrath, Somerville 1992)
2001: Metabolix inc. purchases Monsanto's biopol business (originally Zeneca) which uses plants to produce bioplastics. (Barber and Fisher 2001)
2001: Nick Tucker uses elephant grass as a bioplastic base to make plastic car parts. (Tucker 2001)
2005: Cargill and Dow Chemicals is rebranded as NatureWorks and becomes the leading PLA producer. (Pennisi 2016)
2007: Metabolix inc. market tests its first 100% biodegradable plastic called Mirel, made from corn sugar fermentation and genetically engineered bacteria. (Digregorio 2009)
2012: A bioplastic is developed from seaweed proving to be one of the most environmentally friendly bioplastics based on research published in the journal of pharmacy research. (Rajendran, Puppala, Sneha, Angeeleena, Rajam 2012)
2013: A patent is put on bioplastic derived from blood and a crosslinking agent like sugars, proteins, etc. (iridoid derivatives, diimidates, diones, carbodiimides, acrylamides, dimethylsuberimidates, aldehydes, Factor XIII, dihomo bifunctional NHS esters, carbonyldiimide, , proanthocyanidin, reuterin). This invention can be applied by using the bioplastic as tissue, cartilage, tendons, ligaments, bones, and being used in stem cell delivery. (Campbell, Burgess, Weiss, Smith 2013)
2014: It is found in a study published in 2014 that bioplastics can be made from blending vegetable waste (parsley and spinach stems, the husks from cocoa, the hulls of rice, etc.) with TFA solutions of pure cellulose creates a bioplastic. (Bayer, Guzman-Puyol, Heredia-Guerrero, Ceseracciu, Pignatelli, Ruffilli, Cingolani, and Athanassiou 2014)
2016: An experiment finds that a car bumper that passes regulation can be made from nano-cellulose based bioplastic biomaterials using banana peels. (Hossain, Ibrahim, Aleissa 2016)
2017: A new proposal for bioplastics made from Lignocellulosics resources (dry plant matter). (Brodin, Malin, Vallejos, Opedal, Area, Chinga-Carrasco 2017)
2018: Many developments occur including Ikea starting industrial production of bioplastics furniture (Barret 2018), Project Effective focusing on replacing nylon with bio-nylon (Barret 2018), and the first packaging made from fruit (Barret 2018).
2019: Five different types of chitin nanomaterials were extracted and synthesized by the Korea Research Institute of Chemical Technology to verify their strength and antibacterial effects. When buried underground, 100% biodegradation was possible within six months.
*This is not a comprehensive list. These inventions show the versatility of bioplastics and important breakthroughs. New applications and bioplastics inventions continue to occur.
Testing procedures
Industrial compostability – EN 13432, ASTM D6400
The EN 13432 industrial standard must be met in order to claim that a plastic product is compostable in the European marketplace. In summary, it requires multiple tests and sets pass/fail criteria, including disintegration (physical and visual breakdown) of the finished item within 12 weeks, biodegradation (conversion of organic carbon into CO2) of polymeric ingredients within 180 days, plant toxicity and heavy metals. The ASTM 6400 standard is the regulatory framework for the United States and has similar requirements.
Many starch-based plastics, PLA-based plastics and certain aliphatic-aromatic co-polyester compounds, such as succinates and adipates, have obtained these certificates. Additive-based bioplastics sold as photodegradable or Oxo Biodegradable do not comply with these standards in their current form.
Compostability – ASTM D6002
The ASTM D 6002 method for determining the compostability of a plastic defined the word compostable as follows:
that which is capable of undergoing biological decomposition in a compost site such that the material is not visually distinguishable and breaks down into carbon dioxide, water, inorganic compounds and biomass at a rate consistent with known compostable materials.
This definition drew much criticism because, contrary to the way the word is traditionally defined, it completely divorces the process of "composting" from the necessity of it leading to humus/compost as the end product. The only criterion this standard does describe is that a compostable plastic must look to be going away as fast as something else one has already established to be compostable under the traditional definition.
Withdrawal of ASTM D 6002
In January 2011, the ASTM withdrew standard ASTM D 6002, which had provided plastic manufacturers with the legal credibility to label a plastic as compostable. Its description is as follows:
This guide covered suggested criteria, procedures, and a general approach to establish the compostability of environmentally degradable plastics.
The ASTM has yet to replace this standard.
Biobased – ASTM D6866
The ASTM D6866 method has been developed to certify the biologically derived content of bioplastics. Cosmic rays colliding with the atmosphere mean that some of the carbon is the radioactive isotope carbon-14. CO2 from the atmosphere is used by plants in photosynthesis, so new plant material will contain both carbon-14 and carbon-12. Under the right conditions, and over geological timescales, the remains of living organisms can be transformed into fossil fuels. After ~100,000 years all the carbon-14 present in the original organic material will have undergone radioactive decay leaving only carbon-12. A product made from biomass will have a relatively high level of carbon-14, while a product made from petrochemicals will have no carbon-14. The percentage of renewable carbon in a material (solid or liquid) can be measured with an accelerator mass spectrometer.
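A rough sketch of the mixing arithmetic behind such a measurement (an illustration, not the ASTM D6866 procedure itself; the reference value and the correction-free ratio are simplifying assumptions) is shown below.

```python
def percent_biobased_carbon(sample_pmc: float, modern_reference_pmc: float = 100.0) -> float:
    """
    Rough two-source mixing estimate: fossil carbon contributes ~0 percent modern
    carbon (pMC), fully bio-derived carbon contributes roughly the modern reference
    value, so the biobased fraction is approximately the ratio of the two.  Real
    ASTM D6866 analyses apply standardized reference values and correction factors;
    this sketch only shows the arithmetic idea.
    """
    fraction = sample_pmc / modern_reference_pmc
    return max(0.0, min(1.0, fraction)) * 100.0

# Example: a plastic whose carbon measures 43 pMC against a ~100 pMC biomass reference
print(percent_biobased_carbon(43.0))   # ~43% biobased carbon
```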
There is an important difference between biodegradability and biobased content. A bioplastic such as high-density polyethylene (HDPE) can be 100% biobased (i.e. contain 100% renewable carbon), yet be non-biodegradable. These bioplastics such as HDPE nonetheless play an important role in greenhouse gas abatement, particularly when they are combusted for energy production. The biobased component of these bioplastics is considered carbon-neutral since their origin is from biomass.
Anaerobic biodegradability – ASTM D5511-02 and ASTM D5526
The ASTM D5511-12 and ASTM D5526-12 are testing methods that comply with international standards such as the ISO DIS 15985 for the biodegradability of plastic.
See also
Alkane
Biofuel
Biopolymer
BioSphere Plastic
Organisms breaking down plastic
Celluloid
Cutlery
Edible tableware
Food vs. fuel
Galalith
Health concerns of certain non-biodegradable (fossil fuel-based) plastic food packaging
Plastic bans
Organic photovoltaics
Sustainable packaging
References
Further reading
Plastics Without Petroleum History and Politics of 'Green' Plastics in the United States
Plastics and the environment
"The Social construction of Bakelite: Toward a theory of invention" in The Social Construction of Technological Systems, pp. 155–182
External links
Assessment of China's Market for Biodegradable Plastics , May 2017, GCiS China Strategic Research
Biodegradable waste management
Polymer chemistry | Bioplastic | [
"Chemistry",
"Materials_science",
"Engineering"
] | 7,681 | [
"Biodegradation",
"Biodegradable waste management",
"Materials science",
"Polymer chemistry"
] |
2,570,928 | https://en.wikipedia.org/wiki/%CE%92-Hydroxy%20%CE%B2-methylbutyric%20acid | β-Hydroxy β-methylbutyric acid (HMB), otherwise known as its conjugate base, , is a naturally produced substance in humans that is used as a dietary supplement and as an ingredient in certain medical foods that are intended to promote wound healing and provide nutritional support for people with muscle wasting due to cancer or HIV/AIDS. In healthy adults, supplementation with HMB has been shown to increase exercise-induced gains in muscle size, muscle strength, and lean body mass, reduce skeletal muscle damage from exercise, improve aerobic exercise performance, and expedite recovery from exercise. Medical reviews and meta-analyses indicate that HMB supplementation also helps to preserve or increase lean body mass and muscle strength in individuals experiencing age-related muscle loss. HMB produces these effects in part by stimulating the production of proteins and inhibiting the breakdown of proteins in muscle tissue. No adverse effects from long-term use as a dietary supplement in adults have been found.
HMB is sold as a dietary supplement at a cost of about per month when taking 3 grams per day. HMB is also contained in several nutritional products, including certain formulations of Ensure and Juven. HMB is also present in insignificant quantities in certain foods, such as alfalfa, asparagus, avocados, cauliflower, grapefruit, and catfish.
The effects of HMB on human skeletal muscle were first discovered by Steven L. Nissen at Iowa State University in the . HMB has not been banned by the National Collegiate Athletic Association, World Anti-Doping Agency, or any other prominent national or international athletic organization. In 2006, only about 2% of college student athletes in the United States used HMB as a dietary supplement. As of 2017, HMB has reportedly found widespread use as an ergogenic supplement among young athletes.
Uses
Available forms
HMB is sold as an over-the-counter dietary supplement in the free acid form, β-hydroxy β-methylbutyric acid (HMB-FA), and as a monohydrated calcium salt of the conjugate base, calcium monohydrate (HMB-Ca, CaHMB). Since only a small fraction of HMB's metabolic precursor, , is metabolized into HMB, pharmacologically active concentrations of the compound in blood plasma and muscle can only be achieved by supplementing HMB directly. A healthy adult produces approximately 0.3 grams per day, while supplemental HMB is usually taken in doses of grams per day. HMB is sold at a cost of about per month when taken in doses of 3 grams per day. HMB is also contained in several nutritional products and medical foods marketed by Abbott Laboratories (e.g., certain formulations of Ensure and Juven), and is present in insignificant quantities in certain foods, such as alfalfa, asparagus, avocados, cauliflower, grapefruit, and catfish.
Medical
Supplemental HMB has been used in clinical trials as a treatment for preserving lean body mass in muscle wasting conditions, particularly sarcopenia, and has been studied in clinical trials as an adjunct therapy in conjunction with resistance exercise. Based upon two medical reviews and a meta-analysis of seven randomized controlled trials, HMB supplementation can preserve or increase lean muscle mass and muscle strength in sarcopenic older adults. HMB does not appear to significantly affect fat mass in older adults. Preliminary clinical evidence suggests that HMB supplementation may also prevent muscle atrophy during bed rest. A growing body of evidence supports the efficacy of HMB in nutritional support for reducing, or even reversing, the loss of muscle mass, muscle function, and muscle strength that occurs in hypercatabolic disease states such as cancer cachexia; consequently, the authors of two 2016 reviews of the clinical evidence recommended that the prevention and treatment of sarcopenia and muscle wasting in general include supplementation with HMB, regular resistance exercise, and consumption of a high-protein diet.
Clinical trials that used HMB for the treatment of muscle wasting have involved the administration of 3 grams of HMB per day under different dosing regimens. According to one review, an optimal dosing regimen is to administer it in one 1 gram dose, three times a day, since this ensures elevated plasma concentrations of HMB throughout the day; however, the best dosing regimen for muscle wasting conditions is still being investigated.
Some branded products that contain HMB (i.e., certain formulations of Ensure and Juven) are medical foods that are intended to be used to provide nutritional support under the care of a doctor in individuals with muscle wasting due to HIV/AIDS or cancer, to promote wound healing following surgery or injury, or when otherwise recommended by a medical professional. Juven, a nutrition product which contains 3 grams of HMB, 14 grams of L-arginine, and 14 grams of L-glutamine per two servings, has been shown to improve lean body mass during clinical trials in individuals with AIDS and cancer, but not rheumatoid cachexia. Further research involving the treatment of cancer cachexia with Juven over a period of several months is required to adequately determine treatment efficacy.
Enhancing performance
With an appropriate exercise program, dietary supplementation with 3 grams of HMB per day has been shown to increase exercise-induced gains in muscle size, muscle strength and power, and lean body mass, reduce exercise-induced skeletal muscle damage, and expedite recovery from high-intensity exercise. Based upon limited clinical research, HMB supplementation may also improve aerobic exercise performance and increase gains in aerobic fitness when combined with high-intensity interval training. These effects of HMB are more pronounced in untrained individuals and athletes who perform high intensity resistance or aerobic exercise. In resistance-trained populations, the effects of HMB on muscle strength and lean body mass are limited. HMB affects muscle size, strength, mass, power, and recovery in part by stimulating myofibrillar muscle protein synthesis and inhibiting muscle protein breakdown through various mechanisms, including the activation of mechanistic target of rapamycin complex 1 (mTORC1) and inhibition of proteasome-mediated proteolysis in skeletal muscles.
The efficacy of HMB supplementation for reducing skeletal muscle damage from prolonged or high-intensity exercise is affected by the time that it is used relative to exercise. The greatest reduction in skeletal muscle damage from a single bout of exercise has been shown to occur when HMB-Ca is ingested hours prior to exercise or HMB-FA is ingested minutes prior to exercise.
In 2006, only about 2% of college student athletes in the United States used HMB as a dietary supplement. As of 2017, HMB has found widespread use as an ergogenic supplement among athletes. HMB has not been banned by the National Collegiate Athletic Association, World Anti-Doping Agency, or any other prominent national or international athletic organization.
Side effects
The safety profile of HMB in adults is based upon evidence from human clinical trials and animal studies. In humans, no adverse effects in young adults or older adults have been reported when HMB is taken in doses of 3 grams per day for up to a year. Studies on young adults taking 6 grams of HMB per day for up to 2 months have also reported no adverse effects. Studies with supplemental HMB on young, growing rats and livestock have reported no adverse effects based upon clinical chemistry or observable characteristics; for humans younger than 18, there are limited data on the safety of supplemental HMB. The human equivalent dose of HMB for the no-observed-adverse-effect level (NOAEL) that was identified in a rat model is approximately 0.4 g/kg of body weight per day.
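As a rough numerical illustration of the margin implied by that figure, the sketch below scales the human-equivalent NOAEL to a whole-body intake and compares it with a typical supplemental dose; the 70 kg body weight is an assumed example value, and the 3 g/day dose is the one discussed elsewhere in this article, so the numbers are illustrative rather than study results.

```python
# Illustrative safety-margin estimate (not a figure from the cited studies).
noael_hed_g_per_kg_day = 0.4    # human-equivalent NOAEL quoted above
body_weight_kg = 70.0           # assumed example adult body weight
typical_dose_g_per_day = 3.0    # typical supplemental dose discussed above

noael_whole_body = noael_hed_g_per_kg_day * body_weight_kg   # 28 g/day
margin = noael_whole_body / typical_dose_g_per_day           # ~9-fold

print(f"NOAEL-equivalent intake for 70 kg: {noael_whole_body:.0f} g/day")
print(f"Margin over a 3 g/day dose: {margin:.1f}x")
```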
Two animal studies have examined the effects of HMB supplementation in pregnant pigs on the offspring and reported no adverse effects on the fetus. No clinical testing with supplemental HMB has been conducted on pregnant women, and pregnant and lactating women are advised not to take HMB by Metabolic Technologies, Inc., the company that grants licenses to include HMB in dietary supplements, due to a lack of safety studies.
Pharmacology
Pharmacodynamics
Several components of the signaling cascade that mediates the HMB-induced increase in human skeletal muscle protein synthesis have been identified in vivo. Similar to HMB's metabolic precursor, L-leucine, HMB has been shown to increase protein synthesis in human skeletal muscle via phosphorylation of the mechanistic target of rapamycin (mTOR) and subsequent activation of mTORC1, which leads to protein biosynthesis in cellular ribosomes via phosphorylation of mTORC1's immediate targets (i.e., the p70S6 kinase and the translation repressor protein 4EBP1). Supplementation with HMB in several non-human animal species has been shown to increase the serum concentration of growth hormone and insulin-like growth factor 1 (IGF-1) via an unknown mechanism, in turn promoting protein synthesis through increased mTOR phosphorylation. Based upon limited clinical evidence in humans, supplemental HMB appears to increase the secretion of growth hormone and IGF-1 in response to resistance exercise.
The signaling cascade that mediates the HMB-induced reduction in muscle protein breakdown has not been identified in living humans, although it is well established that HMB attenuates proteolysis in humans in vivo. Unlike L-leucine, HMB attenuates muscle protein breakdown in an insulin-independent manner in humans. HMB is believed to reduce muscle protein breakdown in humans by inhibiting the 19S and 20S subunits of the ubiquitin–proteasome system in skeletal muscle and by inhibiting apoptosis of skeletal muscle nuclei via unidentified mechanisms.
Based upon animal studies, HMB appears to be metabolized within skeletal muscle into cholesterol, which may then be incorporated into the muscle cell membrane, thereby enhancing membrane integrity and function. The effects of HMB on muscle protein metabolism may help stabilize muscle cell structure. One review suggested that the observed HMB-induced reduction in the plasma concentration of muscle damage biomarkers (i.e., muscle enzymes such as creatine kinase and lactate dehydrogenase) in humans following intense exercise may be due to a cholesterol-mediated improvement in muscle cell membrane function.
HMB has been shown to stimulate the proliferation, differentiation, and fusion of human myosatellite cells in vitro, which potentially increases the regenerative capacity of skeletal muscle, by increasing the protein expression of certain myogenic regulatory factors (e.g., myoD and myogenin) and gene transcription factors (e.g., MEF2). HMB-induced human myosatellite cell proliferation in vitro is mediated through the phosphorylation of the mitogen-activated protein kinases ERK1 and ERK2. HMB-induced human myosatellite differentiation and accelerated fusion of myosatellite cells into muscle tissue in vitro is mediated through the phosphorylation of Akt, a serine/threonine-specific protein kinase.
Pharmacokinetics
Comparison of pharmacokinetics between dosage forms
The free acid (HMB-FA) and monohydrated calcium salt (HMB-Ca) forms of HMB have different pharmacokinetics. HMB-FA is more readily absorbed into the bloodstream and has a longer elimination half-life (3 hours) relative to HMB-Ca (2.5 hours). Tissue uptake and utilization of HMB-FA is higher than for HMB-Ca. The fraction of an ingested dose that is excreted in urine does not differ between the two forms.
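The practical difference between these half-lives can be sketched with a simple first-order elimination model; the 6-hour time point below is an arbitrary example, and the model ignores absorption and distribution.

```python
# Minimal first-order elimination sketch using the half-lives quoted above.
def remaining_fraction(t_hours, half_life_hours):
    """Fraction of the peak plasma concentration remaining after t_hours."""
    return 0.5 ** (t_hours / half_life_hours)

for label, t_half in [("HMB-FA", 3.0), ("HMB-Ca", 2.5)]:
    print(f"{label}: {remaining_fraction(6.0, t_half):.0%} of peak remains 6 h after the peak")
# HMB-FA: ~25% remains; HMB-Ca: ~19% remains
```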
Absorption of HMB-Ca
After ingestion, HMB-Ca is converted to HMB following dissociation of the calcium moiety in the gut. When the HMB-Ca dosage form is ingested, the magnitude and timing of the peak plasma concentration of HMB depend on the dose and concurrent food intake. Higher HMB-Ca doses increase the rate of absorption, resulting in a peak plasma HMB level (Cmax) that is disproportionately greater than expected of a linear dose-response relationship and which occurs sooner relative to lower doses. Consumption of HMB-Ca with sugary substances slows the rate of HMB absorption, resulting in a lower peak plasma HMB level that occurs later.
Excretion of HMB-Ca
HMB is eliminated via the kidneys, with a fraction of an ingested dose being excreted unchanged in urine. The remainder of the dose is retained in tissues or excreted as HMB metabolites. The fraction of a given dose of HMB that is excreted unchanged in urine increases with the dose.
Metabolism
The metabolism of HMB is catalyzed by an uncharacterized enzyme which converts it to β-hydroxy β-methylbutyryl-CoA (HMB-CoA). HMB-CoA is metabolized by either enoyl-CoA hydratase or another uncharacterized enzyme, producing β-methylcrotonyl-CoA (MC-CoA) or hydroxymethylglutaryl-CoA (HMG-CoA) respectively. MC-CoA is then converted by the enzyme methylcrotonyl-CoA carboxylase to methylglutaconyl-CoA (MG-CoA), which is subsequently converted to HMG-CoA by methylglutaconyl-CoA hydratase. HMG-CoA is then cleaved into acetyl-CoA and acetoacetate by HMG-CoA lyase or used in the production of cholesterol via the mevalonate pathway.
Biosynthesis
HMB is synthesized in the human body through the metabolism of L-leucine, a branched-chain amino acid. In healthy individuals, approximately 60% of dietary L-leucine is metabolized after several hours, with roughly 5% of dietary L-leucine being converted to HMB. Around 40% of dietary L-leucine is converted to acetyl-CoA, which is subsequently used in the synthesis of other compounds.
The vast majority of L-leucine metabolism is initially catalyzed by the branched-chain amino acid aminotransferase enzyme, producing α-ketoisocaproic acid (α-KIC). α-KIC is mostly metabolized by the mitochondrial enzyme branched-chain α-ketoacid dehydrogenase, which converts it to isovaleryl-CoA. Isovaleryl-CoA is subsequently metabolized by isovaleryl-CoA dehydrogenase and converted to β-methylcrotonyl-CoA (MC-CoA), which is used in the synthesis of acetyl-CoA and other compounds. During biotin deficiency, HMB can be synthesized from MC-CoA via enoyl-CoA hydratase and an unknown thioesterase enzyme, which convert MC-CoA into HMB-CoA and HMB-CoA into HMB respectively. A relatively small amount of α-KIC is metabolized in the liver by the cytosolic enzyme 4-hydroxyphenylpyruvate dioxygenase (KIC dioxygenase), which converts α-KIC to HMB. In healthy individuals, this minor pathway – which involves the conversion of L-leucine to α-KIC and then HMB – is the predominant route of HMB synthesis.
Chemistry
β-Hydroxy β-methylbutyric acid is a monocarboxylic β-hydroxy acid and natural product with the molecular formula C5H10O3. At room temperature, the pure acid occurs as a transparent, colorless to light yellow liquid which is soluble in water. It is a weak acid with a pKa of 4.4. Its refractive index is 1.42.
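Because the pKa of 4.4 lies well below physiological pH, nearly all of the compound in plasma is expected to exist as its conjugate base; a minimal Henderson–Hasselbalch estimate, assuming a plasma pH of 7.4, illustrates this:

```python
# Henderson-Hasselbalch sketch; pH 7.4 is an assumed plasma value.
pKa = 4.4
pH = 7.4

ratio = 10 ** (pH - pKa)                 # [conjugate base] / [free acid]
ionized_fraction = ratio / (1 + ratio)

print(f"Base/acid ratio at pH {pH}: {ratio:.0f}:1")   # ~1000:1
print(f"Fraction ionized: {ionized_fraction:.3%}")    # >99.9%
```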
Chemical structure
β-Hydroxy β-methylbutyric acid is a member of the carboxylic acid family of organic compounds. It is a structural analog of butyric acid with a hydroxyl functional group and a methyl substituent located on its beta carbon. By extension, other structural analogs include β-hydroxybutyric acid and β-methylbutyric acid.
Synthesis
A variety of synthetic routes to β-hydroxy β-methylbutyric acid have been developed. The first reported chemical syntheses approached HMB by oxidation of alkene, vicinal diol, and alcohol precursors:
in 1877, Russian chemists Michael and Alexander Zaytsev reported the preparation of HMB by oxidation of 2-methylpent-4-en-2-ol with chromic acid (H2CrO4);
in 1880 and 1889, Schirokoff and Reformatsky (respectively) reported that the oxidative cleavage of the vicinal diol 4-methylpentane-1,2,4-triol with acidified potassium permanganate (KMnO4) yields HMB – this result is closely related to the first synthesis, as cold dilute KMnO4 oxidises alkenes to vicinal cis-diols, which hot acidic KMnO4 further oxidises to carbonyl-containing compounds; the diol intermediate is not obtained when hot acidic conditions are used for alkene oxidation. In other words, racemic 4-methylpentane-1,2,4-triol is a derivative of 2-methylpent-4-en-2-ol and β-hydroxy β-methylbutyric acid is a derivative of both; and,
in 1892, Kondakow reported the preparation of HMB by permanganate oxidation of 3-methylbutane-1,3-diol.
Depending on the experimental conditions, cycloaddition of acetone and ketene produces either or 4,4-dimethyloxetan-2-one, both of which hydrolyze under basic conditions to yield the conjugate base of HMB. The haloform reaction provides another pathway to HMB involving the exhaustive halogenation of the methyl-ketone region of diacetone alcohol with sodium hypobromite or sodium hypochlorite; diacetone alcohol is readily available from the aldol condensation of acetone. An organometallic approach to HMB involves the carboxylation of tert-butyl alcohol with carbon monoxide and Fenton's reagent (hydrogen peroxide and ferrous iron). Alternatively, HMB can be prepared through microbial oxidation of β-methylbutyric acid by the fungus Galactomyces reessii.
Detection in body fluids
The concentration of naturally produced HMB has been measured in several human body fluids using nuclear magnetic resonance spectroscopy, liquid chromatography–mass spectrometry, and gas chromatography–mass spectrometry methods. In the blood plasma and cerebrospinal fluid (CSF) of healthy adults, the average molar concentration of HMB has been measured at 4.0 micromolar (μM). The average concentration of HMB in the intramuscular fluid of healthy men has been measured at 7.0 μM. In the urine of healthy individuals of any age, the excreted urinary concentration of HMB has been measured in a range of micromoles per millimole (μmol/mmol) of creatinine. In the breast milk of healthy lactating women, HMB and have been measured in ranges of μg/L and mg/L. In comparison, HMB has been detected and measured in the milk of healthy cows at a concentration of μg/L. This concentration is far too low to be an adequate dietary source of HMB for obtaining pharmacologically active concentrations of the compound in blood plasma.
In a study where participants consumed 2.42 grams of pure while fasting, the average plasma HMB concentration increased from a basal level of 5.1 to 408 μM after 30 minutes. At 150 minutes post-ingestion, the average plasma HMB concentration among participants was 275 μM.
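Assuming simple first-order elimination between the two time points (an idealisation that ignores ongoing absorption and redistribution), these values imply an apparent half-life of roughly 3.5 hours, broadly consistent with the elimination half-life quoted in the pharmacokinetics section above:

```python
import math

# Apparent half-life from the two plasma concentrations quoted above.
c1, t1 = 408.0, 30.0     # μM at 30 min post-ingestion
c2, t2 = 275.0, 150.0    # μM at 150 min post-ingestion

k = math.log(c1 / c2) / (t2 - t1)          # first-order rate constant, 1/min
t_half_h = math.log(2) / k / 60.0

print(f"Apparent elimination half-life ≈ {t_half_h:.1f} h")   # ≈ 3.5 h
```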
Abnormal HMB concentrations in urine and blood plasma have been noted in several disease states where it may serve as a diagnostic biomarker, particularly in the case of metabolic disorders. The following table lists some of these disorders along with the associated HMB concentrations detected in urine or blood plasma.
History
The first reported chemical synthesis of HMB was published in 1877 by the Russian chemists Michael and Alexander Zaytsev. HMB was isolated from the bark of Erythrophleum couminga (a Madagascan tree) in 1941 by Leopold Ružička. The earliest reported isolation of HMB as a human metabolite was by Tanaka and coworkers in 1968 from a patient with isovaleric acidemia.
The effects of HMB on human skeletal muscle were first discovered by Steven L. Nissen at Iowa State University in the 1990s. Nissen founded a company called Metabolic Technologies, Inc. (MTI) around the time of his discovery, which later acquired six HMB-related patents that the company has used to license the right to manufacture and incorporate HMB into dietary supplements. When it first became available commercially in the late 1990s, HMB was marketed solely as an exercise supplement to help athletes and bodybuilders build muscle. MTI subsequently developed two HMB-containing products, Juven and Revigor, to which Abbott Nutrition obtained the market rights in 2003 and 2008 respectively. Since then, Abbott has marketed Juven as a medical food and the Revigor brand of HMB as an active ingredient in food products (e.g., certain formulations of Ensure) and other medical foods (e.g., certain formulations of Juven).
See also
3-Aminoisobutyric acid
Notes
Reference notes
References
External links
Amino acid derivatives
Biomolecules
Bodybuilding supplements
Dietary supplements
Ergogenic aids
Human metabolites
Beta hydroxy acids
Medical food
Proteasome inhibitors
Physiology
Rehabilitation medicine
Muscular disorders | Β-Hydroxy β-methylbutyric acid | [
"Chemistry",
"Biology"
] | 4,423 | [
"Natural products",
"Physiology",
"Organic compounds",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Molecular biology"
] |
20,766,780 | https://en.wikipedia.org/wiki/Nuclear%20fusion%E2%80%93fission%20hybrid | Hybrid nuclear fusion–fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes.
The basic idea is to use high-energy fast neutrons from a fusion reactor to trigger fission in non-fissile fuels like U-238 or Th-232. Each neutron can trigger several fission events, multiplying the energy released by each fusion reaction hundreds of times. As the fission fuel is not fissile, there is no self-sustaining chain reaction from fission. This would not only make fusion designs more economical in power terms, but also be able to burn fuels that were not suitable for use in conventional fission plants, even their nuclear waste.
In general terms, the hybrid is very similar in concept to the fast breeder reactor, which uses a compact high-energy fission core in place of the hybrid's fusion core. Another similar concept is the accelerator-driven subcritical reactor, which uses a particle accelerator to provide the neutrons instead of nuclear reactions.
History
The concept dates to the 1950s, and was strongly advocated by Hans Bethe during the 1970s. At that time the first powerful fusion experiments were being built, but it would still be many years before they could be economically competitive. Hybrids were proposed as a way of greatly accelerating their market introduction, producing energy even before the fusion systems reached break-even. However, detailed studies of the economics of the systems suggested they could not compete with existing fission reactors.
The idea was abandoned and lay dormant until the continued delays in reaching break-even led to a brief revival of the concept around 2009. These studies generally concentrated on the nuclear waste disposal aspects of the design, as opposed to the production of energy. The concept has seen cyclical interest since then, based largely on the success or failure of more conventional solutions like the Yucca Mountain nuclear waste repository.
Another major design effort for energy production was started at Lawrence Livermore National Laboratory (LLNL) under their LIFE program. Industry input led to the abandonment of the hybrid approach for LIFE, which was then re-designed as a pure-fusion system. LIFE was cancelled when the underlying technology, from the National Ignition Facility, failed to reach its design performance goals.
Apollo Fusion, a company founded by Google executive Mike Cassidy in 2017, was also reported to be focused on using the subcritical nuclear fusion-fission hybrid method. Their web site is now focused on their Hall-effect thrusters, and mentions fusion only in passing.
On 9 September 2022, Professor Peng Xianjue of the Chinese Academy of Engineering Physics announced that the Chinese government had approved the construction of the world's largest pulsed-power plant – the Z-FFR, namely Z(-pinch)-Fission-Fusion Reactor – in Chengdu, Sichuan province. Neutrons produced in a Z-pinch facility (endowed with cylindrical symmetry and fuelled with deuterium and tritium) will strike a coaxial blanket including both uranium and lithium isotopes. Uranium fission will boost the facility's overall heat output by 10 to 20 times. Interaction of lithium and neutrons will provide tritium for further fueling. Innovative, quasi-spherical geometry near the core of the Z-FFR leads to high performance of the Z-pinch discharge. According to Professor Peng, this will considerably speed up the use of fusion energy and prepare it for commercial power production by 2035.
Description
Fission basics
Conventional fission power systems rely on a chain reaction of nuclear fission events that release two or three neutrons that cause further fission events. By careful arrangement and the use of various absorber materials, the system can be set in a balance of released and absorbed neutrons, known as criticality.
Natural uranium is a mix of several isotopes, mainly a trace amount of 235U and over 99% 238U. When they undergo fission, both of these isotopes release fast neutrons with an energy distribution peaking around 1 to 2 MeV. This energy is too low to cause fission in 238U, which means it cannot sustain a chain reaction. 235U will undergo fission when struck by neutrons of this energy, so 235U can sustain a chain reaction. There are too few 235U atoms in natural uranium to sustain a chain reaction; the atoms are spread out too far and the chance that a neutron will hit one is too small. Chain reactions are accomplished by concentrating, or enriching, the fuel, increasing the amount of 235U to produce enriched uranium, while the leftover, now mostly 238U, is a waste product known as depleted uranium. 235U will sustain a chain reaction if enriched to about 20% of the fuel mass.
235U will undergo fission more easily if the neutrons are of lower energy, the so-called thermal neutrons. Neutrons can be slowed to thermal energies through collisions with a neutron moderator material; the easiest to use are the hydrogen atoms found in water. By placing the fission fuel in water, the probability that the neutrons will cause fission in another 235U is greatly increased, which means the level of enrichment needed to reach criticality is greatly reduced. This leads to the concept of reactor-grade enriched uranium, with the amount of 235U increased from just less than 1% in natural ore to between 3 and 5%, depending on the reactor design. This is in contrast to weapons-grade enrichment, which increases the 235U to at least 20%, and more commonly, over 90%.
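A simple 235U mass balance illustrates how much natural uranium has to be fed into an enrichment plant per kilogram of reactor-grade product; the 0.25% tails assay below is an assumed example value, and separative work is not considered.

```python
# Feed required per kg of enriched product from a 235U mass balance:
# F * x_f = P * x_p + T * x_t, with F = P + T.
x_feed, x_product, x_tails = 0.00711, 0.05, 0.0025   # natural, 5% product, assumed tails

feed_per_kg_product = (x_product - x_tails) / (x_feed - x_tails)
print(f"Natural uranium feed: {feed_per_kg_product:.1f} kg per kg of 5% enriched fuel")
# ≈ 10.3 kg of natural uranium per kg of 5%-enriched product
```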
To maintain criticality, the fuel has to retain that extra concentration of 235U. A typical fission reactor burns off enough of the 235U to cause the reaction to stop over a period on the order of a few months. A combination of burnup of the 235U along with the creation of neutron absorbers, or poisons, as part of the fission process eventually results in the fuel mass not being able to maintain criticality. This burned-up fuel has to be removed and replaced with fresh fuel. The result is nuclear waste that is highly radioactive and filled with long-lived radionuclides that present a safety concern.
The waste contains most of the 235U it started with; only 1% or so of the energy in the fuel is extracted by the time it reaches the point where it is no longer fissile. One solution to this problem is to reprocess the fuel, which uses chemical processes to separate the 235U (and other non-poison elements) from the waste, and then mixes the extracted 235U in fresh fuel loads. This reduces the amount of new fuel that needs to be mined and also concentrates the unwanted portions of the waste into a smaller load. Reprocessing is expensive, however, and it has generally been more economical to simply buy fresh fuel from the mine.
Like 235U, 239Pu can maintain a chain reaction, so it is a useful reactor fuel. However, 239Pu is not found in commercially useful amounts in nature. Another possibility is to breed 239Pu from the 238U through neutron capture, or various other means. This process only occurs with higher-energy neutrons than would be found in a moderated reactor, so a conventional reactor only produces small amounts of Pu when the neutron is captured within the fuel mass before it is moderated.
It is possible to build a reactor that does not require a moderator. To do so, the fuel has to be further enriched, to the point where the 235U is common enough to maintain criticality even with fast neutrons. The extra fast neutrons escaping the fuel load can then be used to breed fuel in a 238U assembly surrounding the reactor core, most commonly taken from the stocks of depleted uranium. 239Pu can also be used for the core, which means once the system is up and running, it can be refuelled using the 239Pu it creates, with enough left over to feed into other reactors as well. This concept is known as a breeder reactor.
Extracting the 239Pu from the 238U feedstock can be achieved with chemical processing, in the same fashion as normal reprocessing. The difference is that the mass will contain far fewer other elements, particularly some of the highly radioactive fission products found in normal nuclear waste.
Fusion basics
Fusion reactors typically burn a mixture of deuterium (D) and tritium (T). When heated to millions of degrees, the kinetic energy in the fuel begins to overcome the natural electrostatic repulsion between nuclei, the so-called coulomb barrier, and the fuel begins to undergo fusion. This reaction gives off an alpha particle and a high energy neutron of 14 MeV. A key requirement to the economic operation of a fusion reactor is that the alphas deposit their energy back into the fuel mix, heating it so that additional fusion reactions take place. This leads to a condition not unlike the chain reaction in the fission case, known as ignition.
Building a reactor design that is capable of reaching ignition has proven to be a significant problem. The first attempts to build such a reactor took place in 1938, and the first success was in 2022, 84 years later. Even in that case, the amount of energy released was orders of magnitude less than the energy needed to operate the machine. A reactor that produces more electricity than is used to operate it, a condition known as engineering breakeven, will require decades more work.
Additionally, there is an issue of fueling such a reactor. Deuterium can be obtained by the separation of hydrogen isotopes in seawater (see heavy water production). Tritium has a short half-life of just over a decade, so only trace amounts are found in nature. To fuel the reactor, the neutrons from the reaction are used to breed more tritium through a reaction in a blanket of lithium surrounding the reaction chamber. Tritium breeding is key to the success of a D-T fusion cycle, and to date, this technique has not been demonstrated. Predictions based on computer modelling suggest that the breeding ratios are quite small and a fusion plant would barely be able to cover its own use. Many years would be needed to breed enough surplus to start another reactor.
Hybrid concepts
Fusion–fission designs essentially replace the lithium blanket of a typical fusion design with a blanket of fission fuel, either natural uranium ore or even nuclear waste. The fusion neutrons have more than enough energy to cause fission in the 238U, as well as many of the other elements in the fuel, including some of the transuranic waste elements. The reaction can continue even when all of the 235U is burned off; the rate is controlled not by the neutrons from the fission events, but by the neutrons being supplied by the fusion reactor.
Fission can sustain a chain reaction because each event gives off more than one neutron capable of producing additional fission events. Fusion, at least in D-T fuel, gives off only a single neutron, and that neutron is not capable of producing more fusion events. When that neutron strikes fissile material in the blanket, one of two reactions may occur. In many cases, the kinetic energy of the neutron will cause one or two neutrons to be struck out of the nucleus without causing fission. These neutrons still have enough energy to cause other fission events. In other cases, the neutron will be captured and cause fission, which will release two or three neutrons. This means that every fusion neutron in the fusion–fission design can result in anywhere between two and four neutrons in the fission fuel.
This is a key concept in the hybrid design, known as fission multiplication. For every fusion event, several fission events may occur, each of which gives off much more energy than the original fusion, about 11 times. This greatly increases the total power output of the reactor. This has been suggested as a way to produce practical fusion reactors even though no fusion reactor has yet reached break-even, by multiplying the power output using cheap fuel or waste. However, many studies have repeatedly demonstrated that this only becomes practical when the overall reactor is very large, 2 to 3 GWt, which makes it expensive to build.
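A back-of-the-envelope comparison of the reaction energies shows where the factor of about 11 comes from; the 17.6 MeV and 200 MeV figures below are representative textbook values rather than design numbers for any particular plant.

```python
# Rough energy multiplication of a fission blanket driven by D-T fusion.
E_FUSION_MEV = 17.6     # energy per D-T fusion event, including the 14 MeV neutron
E_FISSION_MEV = 200.0   # typical energy release per fission event

print(f"One fission versus one fusion: {E_FISSION_MEV / E_FUSION_MEV:.0f}x")
for fissions_per_fusion in (1, 2):
    total = E_FUSION_MEV + fissions_per_fusion * E_FISSION_MEV
    print(f"{fissions_per_fusion} fission(s) per fusion event: "
          f"{total:.0f} MeV total, gain {total / E_FUSION_MEV:.0f}x")
```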
These processes also have the side-effect of breeding 239Pu or 233U, which can be removed and used as fuel in conventional fission reactors. This leads to an alternate design where the primary purpose of the fusion–fission reactor is to reprocess waste into new fuel. Although far less economical than chemical reprocessing, this process also burns off some of the nastier elements instead of simply physically separating them out. This also has advantages for non-proliferation, as enrichment and reprocessing technologies are also associated with nuclear weapons production. However, the cost of the nuclear fuel produced is very high and is unlikely to be able to compete with conventional sources.
Neutron economy
A key issue for the fusion–fission concept is the number and lifetime of the neutrons in the various processes, the so-called neutron economy.
In a pure fusion design, the neutrons are used for breeding tritium in a lithium blanket. Natural lithium consists of about 92% 7Li and the rest is mostly 6Li. 7Li breeding requires neutron energies even higher than those released by fission, around 5 MeV, well within the range of energies provided by fusion. This reaction produces tritium and helium-4, and another slow neutron. 6Li can react with high or low energy neutrons, including those released by the 7Li reaction. This means that a single fusion reaction can produce several tritiums, which is a requirement if the reactor is going to make up for natural decay and losses in the fusion processes.
When the lithium blanket is replaced, or supplanted, by fission fuel in the hybrid design, neutrons that do react with the fissile material are no longer available for tritium breeding. The new neutrons released from the fission reactions can be used for this purpose, but only in 6Li. One could process the lithium to increase the amount of 6Li in the blanket, making up for these losses, but the downside to this process is that the 6Li reaction only produces one tritium atom. Only the high-energy reaction between the fusion neutron and 7Li can create more than one tritium, and this is essential for keeping the reactor running.
To address this issue, at least some of the fission neutrons must also be used for tritium breeding in 6Li. Every neutron that does is no longer available for fission, reducing the reactor output. This requires a very careful balance if one wants the reactor to be able to produce enough tritium to keep itself running, while also producing enough fission events to keep the fission side energy positive. If these cannot be accomplished simultaneously, there is no reason to build a hybrid. Even if this balance can be maintained, it might only occur at an economically infeasible level. For this reason, a variety of neutron releasing substances have been suggested as a way to multiply the number of neutrons available.
Overall economy
Through the early development of the hybrid concept, the question of overall economics appeared difficult to answer. A series of studies starting in the late 1970s provided a much clearer picture of the hybrid in a complete fuel cycle and allowed the economics to be better understood. These studies indicated there was no reason to build a hybrid.
One of the most detailed of these studies was published in 1980 by Los Alamos National Laboratory (LANL). They noted that the hybrid would produce most of its energy indirectly, both through the fission events in the reactor, and much more by providing 239Pu to fuel other fission reactors. In this overall picture, the hybrid is essentially identical to the breeder reactor, which breeds 239Pu in a blanket of 238U in the same fashion as the hybrid. Both require chemical processing to remove the bred 239Pu, both presented the same proliferation and safety risks as a result, and both produced about the same amount of fuel. Since the bred fuel is the primary source of energy in the overall cycle, the two systems were almost identical in the end.
What was not identical, however, was the technical maturity of the two designs. The hybrid would require considerable additional research and development before it would be known if it could even work, and even if that were demonstrated, the result would be a system essentially identical to breeders which were already being built at that time. The report concluded:
The investment of time and money required to commercialize the hybrid cycle could only be justified by a real or perceived advantage of the hybrid over the classical FBR. Our analysis leads us to conclude that no such advantage exists. Therefore, there is not sufficient incentive to demonstrate and commercialize the fusion–fission hybrid.
Rationale
The fusion process alone currently does not achieve sufficient gain (power output over power input) to be viable as a power source. By using the excess neutrons from the fusion reaction to in turn cause a high-yield fission reaction (close to 100%) in the surrounding subcritical fissionable blanket, the net yield from the hybrid fusion–fission process can provide a targeted gain of 100 to 300 times the input energy (an increase by a factor of three or four over fusion alone). Even allowing for high inefficiencies on the input side (i.e. low laser efficiency in ICF and Bremsstrahlung losses in Tokamak designs), this can still yield sufficient heat output for economical electric power generation. This can be seen as a shortcut to viable fusion power until more efficient pure fusion technologies can be developed, or as an end in itself to generate power, and also consume existing stockpiles of nuclear fissionables and waste products.
In the LIFE project at the Lawrence Livermore National Laboratory (LLNL), using technology developed at the National Ignition Facility, the goal is to use fuel pellets of deuterium and tritium surrounded by a fissionable blanket to produce energy sufficiently greater than the input (laser) energy for electrical power generation. The principle involved is to induce inertial confinement fusion (ICF) in the fuel pellet which acts as a highly concentrated point source of neutrons which in turn converts and fissions the outer fissionable blanket. In parallel with the ICF approach, the University of Texas at Austin is developing a system based on the tokamak fusion reactor, optimising for nuclear waste disposal versus power generation. The principles behind using either ICF or tokamak reactors as a neutron source are essentially the same (the primary difference being that ICF is essentially a point-source of neutrons while tokamaks are more diffuse toroidal sources).
Use to dispose of nuclear waste
The surrounding blanket can be a fissile material (enriched uranium or plutonium) or a fertile material (capable of conversion to a fissionable material by neutron bombardment) such as thorium, depleted uranium or spent nuclear fuel. Such subcritical reactors (which also include particle accelerator-driven neutron spallation systems) offer the only currently-known means of active disposal (versus storage) of spent nuclear fuel without reprocessing. Fission by-products produced by the operation of commercial light water nuclear reactors (LWRs) are long-lived and highly radioactive, but they can be consumed using the excess neutrons in the fusion reaction along with the fissionable components in the blanket, essentially destroying them by nuclear transmutation and producing a waste product which is far safer and less of a risk for nuclear proliferation. The waste would contain significantly reduced concentrations of long-lived, weapons-usable actinides per gigawatt-year of electric energy produced compared to the waste from a LWR. In addition, there would be about 20 times less waste per unit of electricity produced. This offers the potential to efficiently use the very large stockpiles of enriched fissile materials, depleted uranium, and spent nuclear fuel.
Safety
In contrast to current commercial fission reactors, hybrid reactors potentially demonstrate what is considered inherently safe behavior because they remain deeply subcritical under all conditions and decay heat removal is possible via passive mechanisms. The fission is driven by neutrons provided by fusion ignition events, and is consequently not self-sustaining. If the fusion process is deliberately shut off or the process is disrupted by a mechanical failure, the fission damps out and stops nearly instantly. This is in contrast to the forced damping in a conventional reactor by means of control rods which absorb neutrons to reduce the neutron flux below the critical, self-sustaining, level. The inherent danger of a conventional fission reactor is any situation leading to a positive feedback, runaway, chain reaction such as occurred during the Chernobyl disaster. In a hybrid configuration the fission and fusion reactions are decoupled, i.e. while the fusion neutron output drives the fission, the fission output has no effect whatsoever on the fusion reaction, eliminating any chance of a positive feedback loop.
Fuel cycle
There are three main components to the hybrid fusion fuel cycle: deuterium, tritium, and fissionable elements. Deuterium can be derived by the separation of hydrogen isotopes in seawater (see heavy water production). Tritium may be generated in the hybrid process itself by absorption of neutrons in lithium-bearing compounds. This would entail an additional lithium-bearing blanket and a means of collection. Small amounts of tritium are also produced by neutron activation in nuclear fission reactors, particularly when heavy water is used as a neutron moderator or coolant. The third component is externally derived fissionable materials from demilitarized supplies of fissionables, or commercial nuclear fuel and waste streams. Fusion-driven fission also offers the possibility of using thorium as a fuel, which would greatly increase the potential amount of fissionables available. The extremely energetic nature of the fast neutrons emitted during the fusion events (up to 0.17 times the speed of light) can allow normally non-fissioning 238U to undergo fission directly (without conversion first to 239Pu), enabling refined natural uranium to be used with very low enrichment, while still maintaining a deeply subcritical regime.
Engineering considerations
Practical engineering designs must first take into account safety as the primary goal. All designs should incorporate passive cooling in combination with refractory materials to prevent melting and reconfiguration of fissionables into geometries capable of unintentional criticality. Blanket layers of lithium-bearing compounds will generally be included as part of the design to generate tritium to allow the system to be self-supporting for one of the key fuel element components. Tritium, because of its relatively short half-life and extremely high radioactivity, is best generated on-site to obviate the necessity of transportation from a remote location. D-T fuel can be manufactured on-site using deuterium derived from heavy water production and tritium generated in the hybrid reactor itself. Nuclear spallation to generate additional neutrons can be used to enhance the fission output, with the caveat that this is a tradeoff between the number of neutrons (typically 20-30 neutrons per spallation event) against a reduction of the individual energy of each neutron. This is a consideration if the reactor is to use natural thorium as a fuel. While high-energy (0.17c) neutrons produced from fusion events are capable of directly causing fission in both thorium and 238U, the lower-energy neutrons produced by spallation generally cannot. This is a tradeoff that affects the mixture of fuels against the degree of spallation used in the design.
See also
Subcritical reactor, a broad category of designs using various external neutron sources including spallation to generate non-self-sustaining fission (hybrid fusion–fission reactors fall into this category).
Muon-catalyzed fusion, which uses exotic particles to achieve fusion ignition at relatively low temperatures.
Breeder reactor, a nuclear reactor that generates more fissile material in fuel than it consumes.
Generation IV reactor, next generation fission reactor designs claiming much higher safety, and greatly increased fuel use efficiency.
Traveling wave reactor, a pure fission reactor with a moving reaction zone, which is also capable of consuming wastes from LWRs and using depleted Uranium as a fuel.
Liquid fluoride thorium reactor, a fission reactor which uses molten thorium fluoride salt fuel, capable of consuming wastes from LWRs.
Integral Fast Reactor, a fission fast breeder reactor which uses reprocessing via electrorefining at the reactor site, capable of consuming wastes from LWRs and using depleted Uranium as a fuel.
Aneutronic fusion a category of nuclear reactions in which only a small part (or none) of the energy released is carried away by energetic neutrons.
Project PACER, a reverse of this concept, attempts to use small fission explosions to ignite hydrogen fusion (fusion bombs) for power generation
Cold fusion
COLEX process (isotopic separation)
References
Citations
Bibliography
Further reading
External links
Potential Role of Lasers for Sustainable Fission Energy Production and Transmutation of Nuclear Waste C.D. Bowman and J. Magill
Laser Inertial Fusion–Fission Energy (LIFE) Project at the Lawrence Livermore National Laboratory,
Nuclear Fusion–Fission Hybrid Could Destroy Nuclear Waste And Contribute to Carbon-Free Energy Future University of Texas at Austin
Ralph Moir's fusion–fission hybrid page – contains many research papers about the topic
International Thorium Energy Organisation - www.IThEO.org
Nuclear technology
Nuclear fusion
Nuclear fission | Nuclear fusion–fission hybrid | [
"Physics",
"Chemistry"
] | 5,142 | [
"Nuclear fission",
"Nuclear technology",
"Nuclear fusion",
"Nuclear physics"
] |
20,769,112 | https://en.wikipedia.org/wiki/Asymmetric%20membrane%20capsule | The asymmetric membrane capsule is an example of a single core osmotic delivery system, consisting of a drug-containing core surrounded by an asymmetric membrane made with a non disintegrating polymer (cellulose acetate, ethylcellulose etc.).
References
Dosage forms | Asymmetric membrane capsule | [
"Chemistry"
] | 65 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
20,770,305 | https://en.wikipedia.org/wiki/Zinc-activated%20ion%20channel | Zinc-activated ion channel (ZAC), is a human protein encoded by the gene. ZAC forms a cation-permeable ligand-gated ion channel of the "Cys-loop" superfamily. The ZAC gene is present in humans and dogs, but no ortholog is thought to exist in the rat or mouse genomes.
ZAC mRNA is expressed in prostate, thyroid, trachea, lung, brain (adult and fetal), spinal cord, skeletal muscle, heart, placenta, pancreas, liver, kidney and stomach. The endogenous ligand for ZAC is thought to be Zn2+, although ZAC has also been found to activate spontaneously. The function of spontaneous ZAC activation is unknown.
References
Ion channels
Zinc | Zinc-activated ion channel | [
"Chemistry"
] | 155 | [
"Neurochemistry",
"Ion channels"
] |
20,774,250 | https://en.wikipedia.org/wiki/PRC2 | PRC2 (polycomb repressive complex 2) is one of the two classes of polycomb-group proteins or (PcG). The other component of this group of proteins is PRC1 (Polycomb Repressive Complex 1).
This complex has histone methyltransferase activity and primarily methylates histone H3 on lysine 27 (i.e. H3K27me3), a mark of transcriptionally silent chromatin. PRC2 is required for initial targeting of genomic regions (Polycomb Response Elements, or PREs) to be silenced, while PRC1 is required for stabilizing this silencing and underlies cellular memory of silenced regions after cellular differentiation. PRC1 also mono-ubiquitinates histone H2A on lysine 119 (H2AK119Ub1). These proteins are required for long-term epigenetic silencing of chromatin and have an important role in stem cell differentiation and early embryonic development. PRC2 is present in most multicellular organisms.
The mouse PRC2 has four subunits: Suz12 (zinc finger), Eed, Ezh1 or Ezh2 (SET domain with histone methyltransferase activity) and Rbbp4 (histone binding domain). PRC2 can bind to H3K27me3 and repress neighboring nucleosomes, thus spreading the repression.
PRC2 has a role in X chromosome inactivation, in maintenance of stem cell fate, and in imprinting. Aberrant expression of PRC2 has been observed in cancer. Both loss and gain-of-function mutations in PRC2 components have been identified in various human cancers, suggesting complex roles of these components in malignancy.
Polycomb group genes directly and indirectly regulate the DNA damage response which acts as an anti-cancer barrier. The PRC2 complex appears to be present at sites of DNA double-strand breaks where it promotes repair of such breaks by non-homologous end joining.
The PRC2 is evolutionarily conserved, and has been found in mammals, insects, fungi, and plants.
In plants
In Arabidopsis thaliana, a plant model organism, several variants of the core subunits have been identified. Homologs of the Suz12 subunit are embryonic flower 2 (EMF2), reduced vernalization response 2 (VRN2), and fertilization independent seed 2 (FIS2). There is one Eed homolog, fertilization independent endosperm (FIE); three Ezh1/Ezh2 homologs, curly leaf (CLF), swinger (SWN), and medea (MEA); and one Rbbp4 homolog, multicopy suppressor of IRA1 (MSI1). Many other accessory components of the PRC2 complex in Arabidopsis have been identified.
See also
Epigenetics
References
Proteins
Articles containing video clips
"Chemistry"
] | 630 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
20,775,637 | https://en.wikipedia.org/wiki/C17H36 | The molecular formula C17H36 (molar mass: 240.27 g/mol, exact mass: 240.2817 u) may refer to:
3,3-Di-tert-butyl-2,2,4,4-tetramethylpentane
Heptadecane
Molecular formulas | C17H36 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
22,236,319 | https://en.wikipedia.org/wiki/Quantum-confined%20Stark%20effect | The quantum-confined Stark effect (QCSE) describes the effect of an external electric field upon the light absorption spectrum or emission spectrum of a quantum well (QW). In the absence of an external electric field, electrons and holes within the quantum well may only occupy states within a discrete set of energy subbands. Only a discrete set of frequencies of light may be absorbed or emitted by the system. When an external electric field is applied, the electron states shift to lower energies, while the hole states shift to higher energies. This reduces the permitted light absorption or emission frequencies. Additionally, the external electric field shifts electrons and holes to opposite sides of the well, decreasing the overlap integral, which in turn reduces the recombination efficiency (i.e. fluorescence quantum yield) of the system.
The spatial separation between the electrons and holes is limited by the presence of the potential barriers around the quantum well, meaning that excitons are able to exist in the system even under the influence of an electric field. The quantum-confined Stark effect is used in QCSE optical modulators, which allow optical communications signals to be switched on and off rapidly.
Even though quantum objects (wells, dots or discs, for instance) generally emit and absorb light with higher energies than the band gap of the material, the QCSE may shift the energy to values lower than the gap. This was evidenced recently in the study of quantum discs embedded in a nanowire.
Theoretical description
The shift in absorption lines can be calculated by comparing the energy levels in unbiased and biased quantum wells. It is a simpler task to find the energy levels in the unbiased system, due to its symmetry. If the external electric field is small, it can be treated as a perturbation to the unbiased system and its approximate effect can be found using perturbation theory.
Unbiased system
The potential for a quantum well may be written as

V(z) = \begin{cases} 0, & |z| \le L/2 \\ V_0, & |z| > L/2 , \end{cases}

where L is the width of the well and V_0 is the height of the potential barriers. The bound states in the well lie at a set of discrete energies, and the associated wavefunctions can be written using the envelope function approximation as follows:

\psi_n(\mathbf{r}) = \frac{1}{\sqrt{A}}\, e^{i(k_x x + k_y y)}\, u(\mathbf{r})\, \chi_n(z)
In this expression, A is the cross-sectional area of the system, perpendicular to the quantization direction, u(\mathbf{r}) is a periodic Bloch function for the energy band edge in the bulk semiconductor and \chi_n(z) is a slowly varying envelope function for the system.
If the quantum well is very deep, it can be approximated by the particle in a box model, in which V_0 \to \infty. Under this simplified model, analytical expressions for the bound state wavefunctions exist, with the form

\chi_n(z) = \sqrt{\frac{2}{L}}\, \sin\!\left(\frac{n\pi\,(z + L/2)}{L}\right) \quad \text{for } |z| \le L/2 .
The energies of the bound states are

E_n = \frac{\hbar^2 \pi^2 n^2}{2 m^* L^2}, \qquad n = 1, 2, \ldots

where m^* is the effective mass of an electron in a given semiconductor.
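As a numerical illustration, the sketch below evaluates these infinite-well energies for an assumed GaAs-like example (a 10 nm well and an electron effective mass of 0.067 m0); the parameter values are illustrative only.

```python
import math

HBAR = 1.054571817e-34   # J*s
M0 = 9.1093837015e-31    # kg, free electron mass
EV = 1.602176634e-19     # J per eV

L = 10e-9                # assumed well width, m
m_eff = 0.067 * M0       # assumed GaAs-like electron effective mass

def bound_state_energy(n):
    """Infinite-well energy E_n = (hbar*pi*n)^2 / (2 m* L^2)."""
    return (HBAR * math.pi * n) ** 2 / (2 * m_eff * L ** 2)

for n in (1, 2):
    print(f"E_{n} = {bound_state_energy(n) / EV * 1e3:.0f} meV")
# E_1 ≈ 56 meV and E_2 ≈ 225 meV for these parameters
```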
Biased system
Supposing the electric field is biased along the z direction, the perturbing Hamiltonian term is

\hat{H}' = e F z .

The first order correction to the energy levels is zero due to symmetry:

E_n^{(1)} = \langle \chi_n | eFz | \chi_n \rangle = 0 .
The second order correction is, for instance for n = 1,

E_1^{(2)} = \sum_{k \neq 1} \frac{\left| \langle \chi_k | eFz | \chi_1 \rangle \right|^2}{E_1 - E_k} \approx \frac{\left| \langle \chi_2 | eFz | \chi_1 \rangle \right|^2}{E_1 - E_2} = -\frac{2^9}{3^5 \pi^6}\, \frac{m^*_e\, e^2 F^2 L^4}{\hbar^2}

for the electron, where the additional approximation of neglecting the perturbation terms due to the bound states with k even and greater than 2 has been introduced. By comparison, the perturbation terms from odd-k states are zero due to symmetry.
Similar calculations can be applied to holes by replacing the electron effective mass with the hole effective mass m^*_h. Introducing the total effective mass m^* = m^*_e + m^*_h, the energy shift of the first optical transition induced by the QCSE can be approximated to:

\Delta E_1 \approx -\frac{2^9}{3^5 \pi^6}\, \frac{m^*\, e^2 F^2 L^4}{\hbar^2}
The downward shift in the confined energy level discussed in the above equation is referred to as the Franz-Keldysh effect.
The approximations made so far are quite crude; nonetheless, the energy shift observed experimentally does show the predicted square-law dependence on the applied electric field.
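The quadratic dependence can be made concrete by evaluating the perturbative expression above for an assumed example structure; the well width, field strength and effective masses below are illustrative GaAs-like values, not measurements of any specific device.

```python
import math

HBAR = 1.054571817e-34      # J*s
M0 = 9.1093837015e-31       # kg, free electron mass
E_CHARGE = 1.602176634e-19  # C (also J per eV)

L = 10e-9        # assumed well width, m
F = 1e7          # assumed field, V/m (100 kV/cm)
COEFF = 2 ** 9 / (3 ** 5 * math.pi ** 6)   # ≈ 2.2e-3, from the n = 2-only sum above

def stark_shift_joules(m_eff):
    return COEFF * m_eff * (E_CHARGE * F) ** 2 * L ** 4 / HBAR ** 2

total = stark_shift_joules(0.067 * M0) + stark_shift_joules(0.45 * M0)  # electron + heavy hole
print(f"Red shift of the first transition ≈ {total / E_CHARGE * 1e3:.0f} meV")
# ≈ 15 meV for these parameters; the shift scales as F^2 and L^4
```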
Absorption coefficient
In addition to the redshift of the optical transitions towards lower energies, the DC electric field also induces a decrease in magnitude of the absorption coefficient, as it decreases the overlap integrals of the corresponding valence and conduction band wave functions. Given the approximations made so far and the absence of any applied electric field along z, the overlap integral for the n = 1 transition will be:

\int \chi^*_{h1}(z)\, \chi_{e1}(z)\, dz = 1 .

To calculate how this integral is modified by the quantum-confined Stark effect, we once again employ time-independent perturbation theory. The first order correction for the wave function is

| \chi_n^{(1)} \rangle = \sum_{k \neq n} \frac{\langle \chi_k | eFz | \chi_n \rangle}{E_n - E_k}\, | \chi_k \rangle .
Once again we look at the n = 1 energy level and consider only the perturbation from the n = 2 level (notice that the perturbation from the n = 3 level would be zero due to symmetry). We obtain

\chi_{e1} \approx N \left( \chi_1 + \frac{\langle \chi_2 | eFz | \chi_1 \rangle}{E^{e}_1 - E^{e}_2}\, \chi_2 \right), \qquad \chi_{h1} \approx N \left( \chi_1 - \frac{\langle \chi_2 | eFz | \chi_1 \rangle}{E^{h}_1 - E^{h}_2}\, \chi_2 \right)

for the conduction and valence band respectively, where N has been introduced as a normalization constant. Because the two corrections enter with opposite signs (the field pushes electrons and holes toward opposite sides of the well), for any applied electric field we obtain

\int \chi^*_{h1}(z)\, \chi_{e1}(z)\, dz < 1 .
Thus, according to Fermi's golden rule, which says that the transition probability depends on the above overlap integral, the optical transition strength is weakened.
Excitons
The description of the quantum-confined Stark effect given by second order perturbation theory is extremely simple and intuitive. However, to correctly depict QCSE the role of excitons has to be taken into account. Excitons are quasiparticles consisting of a bound state of an electron-hole pair, whose binding energy in a bulk material can be modelled as that of a hydrogenic atom

E_b(n) = -\frac{\mu}{m_0 \varepsilon_r^2}\, \frac{R_y}{n^2}

where R_y is the Rydberg constant, \mu is the reduced mass of the electron-hole pair (with m_0 the free electron mass) and \varepsilon_r is the relative electric permittivity.
The exciton binding energy has to be included in the energy balance of photon absorption processes:

\hbar \omega = E_g - E_b .

Exciton generation therefore redshifts the optical band gap towards lower energies.
If an electric field is applied to a bulk semiconductor, a further redshift in the absorption spectrum is observed due to the Franz–Keldysh effect. Due to their opposite electric charges, the electron and the hole constituting the exciton will be pulled apart under the influence of the external electric field. If the field is strong enough, such that the potential drop across the exciton Bohr radius exceeds the binding energy,

e F a_B \gtrsim E_b ,

then excitons cease to exist in the bulk material. This somewhat limits the applicability of the Franz–Keldysh effect for modulation purposes, as the redshift induced by the applied electric field is countered by a shift towards higher energies due to the absence of exciton generation.
This problem does not exist in QCSE, as electrons and holes are confined in the quantum wells. As long as the quantum well depth is comparable to the excitonic Bohr radius, strong excitonic effects will be present no matter the magnitude of the applied electric field. Furthermore, quantum wells behave as two dimensional systems, which strongly enhance excitonic effects with respect to bulk material. In fact, solving the Schrödinger equation for a Coulomb potential in a two dimensional system yields an excitonic binding energy of

E_b^{2D}(n) = -\frac{\mu}{m_0 \varepsilon_r^2}\, \frac{R_y}{(n - 1/2)^2} ,

which is four times as high as the three dimensional case for the ground-state (n = 1) solution.
Optical modulation
The quantum-confined Stark effect's most promising application lies in its ability to perform optical modulation in the near infrared spectral range, which is of great interest for silicon photonics and down-scaling of optical interconnects.
A QCSE based electro-absorption modulator consists of a PIN structure where the intrinsic region contains multiple quantum wells and acts as a waveguide for the carrier signal. An electric field can be induced perpendicularly to the quantum wells by applying an external, reverse bias to the PIN diode, causing QCSE. This mechanism can be employed to modulate wavelengths below the band gap of the unbiased system and within the reach of the QCSE induced redshift.
Although first demonstrated in GaAs/AlxGa1-xAs quantum wells, QCSE started to generate interest after its demonstration in Ge/SiGe. Differently from III/V semiconductors, Ge/SiGe quantum well stacks can be epitaxially grown on top of a silicon substrate, provided the presence of some buffer layer in between the two. This is a decisive advantage as it allows Ge/SiGe QCSE to be integrated with CMOS technology and silicon photonics systems.
Germanium is an indirect gap semiconductor, with a bandgap of 0.66 eV. However it also has a relative minimum in the conduction band at the Γ point, with a direct bandgap of 0.8 eV, which corresponds to a wavelength of 1550 nm. QCSE in Ge/SiGe quantum wells can therefore be used to modulate light at 1.55 μm, which is crucial for silicon photonics applications as 1.55 μm is the optical fiber's transparency window and the most extensively employed wavelength for telecommunications.
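The correspondence between gap energy and wavelength follows from λ = hc/E (about 1240 nm·eV divided by the energy); a minimal conversion using the values quoted above:

```python
HC_EV_NM = 1239.84   # h*c expressed in eV*nm

for gap_ev in (0.80, 0.66):
    print(f"{gap_ev:.2f} eV  ->  {HC_EV_NM / gap_ev:.0f} nm")
# 0.80 eV corresponds to ~1550 nm; the 0.66 eV indirect gap to ~1880 nm
```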
By fine tuning material parameters such as quantum well depth, biaxial strain and silicon content in the well, it is also possible to tailor the optical band gap of the Ge/SiGe quantum well system to modulate at 1310 nm, which also corresponds to a transparency window for optical fibers.
Electro-optic modulation by QCSE using Ge/SiGe quantum wells has been demonstrated up to 23 GHz with energies per bit as low as 108 fJ, and has been integrated in a waveguide configuration on a SiGe waveguide.
See also
Franz–Keldysh effect
Citations
General sources
Mark Fox, Optical properties of solids, Oxford, New York, 2001.
Hartmut Haug, Quantum Theory of the Optical and Electronic Properties of Semiconductors, World Scientific, 2004.
https://web.archive.org/web/20100728030241/http://www.rle.mit.edu/sclaser/6.973%20lecture%20notes/Lecture%2013c.pdf
Shun Lien Chuang, Physics of Photonics Devices, Wiley, 2009.
Quantum electronics
Quantum mechanics | Quantum-confined Stark effect | [
"Physics",
"Materials_science"
] | 1,934 | [
"Quantum electronics",
"Theoretical physics",
"Quantum mechanics",
"Condensed matter physics",
"Nanotechnology"
] |
22,238,135 | https://en.wikipedia.org/wiki/Oxo-Diels%E2%80%93Alder%20reaction | An oxo-Diels–Alder reaction (also called an oxa-Diels–Alder reaction) is an organic reaction and a variation of the Diels–Alder reaction in which a suitable diene reacts with an aldehyde to form a dihydropyran ring. This reaction is of some importance to synthetic organic chemistry.
The oxo-DA reaction was first reported in 1949, using 2-methylpenta-1,3-diene and formaldehyde as reactants.
Asymmetric oxo-DA reactions (including catalytic reactions) are well known. Many strategies rely on coordinating a chiral Lewis acid to the carbonyl group.
See also
Aza-Diels–Alder reaction
References
Cycloadditions
Oxygen heterocycle forming reactions
Name reactions | Oxo-Diels–Alder reaction | [
"Chemistry"
] | 166 | [
"Name reactions"
] |
22,241,441 | https://en.wikipedia.org/wiki/Selenium%20cycle | The selenium cycle is a biological cycle of selenium similar to the cycles of carbon, nitrogen, and sulfur. Within the cycle, there are organisms which reduce the most oxidized form of the element and different organisms complete the cycle by oxidizing the reduced element to the initial state.
In the selenium cycle it has been found that bacteria, fungi, and plants, especially species of Astragalus, metabolize the most oxidized forms of selenium, selenate or selenite, to selenide. It is also thought that microorganisms may be able to oxidize selenium of valence zero to selenium of valence +6.
Evidence for a selenium cycle is found through the study of selenium accumulator plants. These plants are found in semi-arid, seleniferous soils. The plants biosynthesize forms of organic selenium compounds and release the compounds into the soil when they decay. If the compounds were not oxidized, then an increase in organic selenium would be seen, but selenium in these areas is mainly inorganic.
Aquatic ecosystems
There are three fates of dissolved selenium in an aquatic ecosystem: 1. it can be absorbed or ingested by organisms; 2. it can bind with suspended solids or sediments; or 3. it can remain in free solution. Over time, most of the selenium is taken in by organisms or bound to other solids. As the suspended material settles, the selenium accumulates in the top layer of sediment. Due to the dynamic flow in an aquatic ecosystem, selenium is usually only in the sediments temporarily before being cycled back into the system.
Immobilization processes
Selenium can be removed from the ecosystem and bound in sediment through natural processes of chemical and microbial reduction of the selenate form to the selenite form. The reduction is followed by adsorption to clay, reaction with iron species, and coprecipitation or settling. After selenium is in the sediment, other chemical and microbial reduction may occur, causing insoluble organic, mineral, elemental, or adsorbed selenium. Some organic forms may be released into the atmosphere from volatilization by chemical or microbial activity in the water and sediment or by direct release from plants. Immobilization processes effectively remove selenium from the ecosystem, especially in slow-moving or still-water areas.
Mobilization processes
Selenium is made available to the food chain through four oxidation and methylation processes. The first process is oxidation and methylation of inorganic and organic selenium by plant roots and microorganisms. The second process is biological mixing and associated oxidation of sediments from the burrowing of benthic invertebrates and feeding of fish and wildlife. The third process is represented by physical movement and chemical oxidation from water circulation and mixing, such as current, wind, precipitation, and upwelling. The fourth process is from oxidation by plant photosynthesis.
References
Biogeochemical cycle
Selenium | Selenium cycle | [
"Chemistry"
] | 636 | [
"Biogeochemical cycle",
"Biogeochemistry"
] |
22,242,751 | https://en.wikipedia.org/wiki/Photocatalytic%20water%20splitting | Photocatalytic water splitting is a process that uses photocatalysis for the dissociation of water (H2O) into hydrogen (H2) and oxygen (O2). The inputs are light energy (photons), water, and one or more catalysts. The process is inspired by photosynthesis, which converts water and carbon dioxide into oxygen and carbohydrates. Water splitting using solar radiation has not been commercialized. Photocatalytic water splitting is done by dispersing photocatalyst particles in water or depositing them on a substrate, unlike photoelectrochemical cells, which are assembled into a cell with a photoelectrode. Hydrogen fuel production using water and light (photocatalytic water splitting), instead of petroleum, is an important renewable energy strategy.
Concepts
Two moles of H2O are split into 1 mole of O2 and 2 moles of H2 using light in the process shown below.
A photon with an energy greater than 1.23 eV is needed to generate electron–hole pairs, which react with water on the surface of the photocatalyst. The photocatalyst must have a bandgap large enough to split water; in practice, losses from material internal resistance and the overpotential of the water splitting reaction increase the required bandgap energy to 1.6–2.4 eV to drive water splitting.
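In symbols (a sketch using standard thermodynamic values, not figures taken from this article's sources), the overall reaction and the origin of the 1.23 eV threshold can be written as:

% Overall reaction and minimum photon energy (standard-state values; illustrative sketch)
\begin{align}
2\,\mathrm{H_2O\,(l)} &\longrightarrow 2\,\mathrm{H_2\,(g)} + \mathrm{O_2\,(g)}, \qquad \Delta G^{\circ} \approx +237\ \mathrm{kJ\ per\ mol\ H_2} \\
E^{\circ} &= \frac{\Delta G^{\circ}}{nF} = \frac{237{,}000\ \mathrm{J\,mol^{-1}}}{2 \times 96{,}485\ \mathrm{C\,mol^{-1}}} \approx 1.23\ \mathrm{V} \\
\lambda_{\max} &= \frac{hc}{E} \approx \frac{1240\ \mathrm{eV\,nm}}{1.23\ \mathrm{eV}} \approx 1000\ \mathrm{nm}
\end{align}

Any photon with a wavelength shorter than roughly 1000 nm therefore carries the minimum energy for the net reaction, which is why the practical window quoted above sits at 1.6–2.4 eV once overpotentials and losses are included.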
The process of water-splitting is a highly endothermic process (ΔH > 0). Water splitting occurs naturally in photosynthesis when the energy of four photons is absorbed and converted into chemical energy through a complex biochemical pathway (Dolai's or Kok's S-state diagrams).
O–H bond homolysis in water requires energy of 6.5 - 6.9 eV (UV photon). Infrared light has sufficient energy to mediate water splitting because it technically has enough energy for the net reaction. However, it does not have enough energy to mediate the elementary reactions leading to the various intermediates involved in water splitting (this is why there is still water on Earth). Nature overcomes this challenge by absorbing four visible photons. In the laboratory, this challenge is typically overcome by coupling the hydrogen production reaction with a sacrificial reductant other than water.
Materials used in photocatalytic water splitting fulfill the band requirements and typically have dopants and/or co-catalysts added to optimize their performance. A sample semiconductor with the proper band structure is titanium dioxide (TiO2), which is typically used with a co-catalyst such as platinum (Pt) to increase the rate of H2 production. A major problem in photocatalytic water splitting is photocatalyst decomposition and corrosion.
Method of evaluation
Photocatalysts must conform to several key principles in order to be considered effective at water splitting. A key principle is that H2 and O2 evolution should occur in a stoichiometric 2:1 ratio; significant deviation could be due to a flaw in the experimental setup and/or a side reaction, neither of which indicates a reliable photocatalyst for water splitting. The prime measure of photocatalyst effectiveness is quantum yield (QY), which is:
QY (%) = (Photochemical reaction rate) / (Photon absorption rate) × 100%
To assist in comparison, the rate of gas evolution can also be used. A photocatalyst that has a high quantum yield and gives a high rate of gas evolution is a better catalyst.
The other important factor for a photocatalyst is the range of light over which it is effective. For example, a photocatalyst that can use visible photons is more desirable than one that responds only to UV photons.
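As a rough numerical illustration of the QY formula above, the following Python sketch (hypothetical inputs; it assumes the common convention of counting two reacted electrons per H2 molecule) converts a measured H2 evolution rate and an absorbed photon flux into an apparent quantum yield.

# Sketch: apparent quantum yield for photocatalytic H2 evolution.
# Assumes two absorbed photons (two electrons) are needed per H2 molecule;
# both input numbers below are hypothetical illustration values.
AVOGADRO = 6.022e23  # molecules per mole

def quantum_yield(h2_rate_mol_per_h, absorbed_photons_per_s):
    """Return the apparent quantum yield in percent."""
    h2_molecules_per_s = h2_rate_mol_per_h * AVOGADRO / 3600.0
    reacted_electrons_per_s = 2.0 * h2_molecules_per_s  # 2 e- per H2
    return 100.0 * reacted_electrons_per_s / absorbed_photons_per_s

# Example: 9.7 mmol/h of H2 with an assumed absorbed flux of 5.8e18 photons/s
print(round(quantum_yield(9.7e-3, 5.8e18), 1))  # -> about 56.0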
Photocatalysts
The solar-to-hydrogen (STH) efficiency of photocatalytic water splitting, however, has remained very low.
Gallium-indium nitride
An STH efficiency of 9.2% has been reported for gallium-indium nitride (see the Indium gallium nitride section below).
NaTaO3:La
NaTaO3:La yielded the highest water splitting rate of photocatalysts without using sacrificial reagents. This ultraviolet-based photocatalyst was reported to show water splitting rates of 9.7 mmol/h and a quantum yield of 56%. The nanostep structure of the material promotes water splitting, as the edges functioned as H2 production sites and the grooves functioned as O2 production sites. Addition of NiO particles as co-catalysts assisted in H2 production; this step used an impregnation method with an aqueous solution of Ni(NO3)2·6H2O and evaporated the solution in the presence of the photocatalyst. NaTaO3 has a conduction band higher than that of NiO, so photo-generated electrons are more easily transferred to the conduction band of NiO for H2 evolution.
K3Ta3B2O12 is another catalyst solely activated by UV and above light. It does not have the performance or quantum yield of NaTaO3:La. However, it can split water without the assistance of co-catalysts and gives a quantum yield of 6.5%, along with a water splitting rate of 1.21 mmol/h. This ability is due to the pillared structure of the photocatalyst, which involves TaO6 pillars connected by triangle units. Loading with NiO did not assist the photocatalyst due to its highly active H2 evolution sites.
(Ga1-xZnx)(N1-xOx)
(Ga1-xZnx)(N1-xOx) had the highest quantum yield in visible light among visible light-based photocatalysts that do not utilize sacrificial reagents as of October 2008. The photocatalyst featured a quantum yield of 5.9% and a water splitting rate of 0.4 mmol/h. Tuning the catalyst was done by increasing the calcination temperature for the final step in synthesizing the catalyst. Temperatures up to 600 °C helped to reduce the number of defects, while temperatures above 700 °C destroyed the local structure around zinc atoms and were thus undesirable. The treatment ultimately reduced the amount of surface Zn and O defects, which normally function as recombination sites that limit photocatalytic activity. The catalyst was then loaded with a rhodium–chromium mixed-oxide co-catalyst at a rate of 2.5 wt% Rh and 2 wt% Cr for better performance.
Molecular catalysts
Proton reduction catalysts based on earth-abundant elements carry out one half-reaction of water splitting, the reduction of protons to hydrogen.
A mole of octahedral nickel(II) complex, [Ni(bztpen)]2+ (bztpen = N-benzyl-N,N’,N’-tris(pyridine-2-ylmethyl)ethylenediamine) produced 308,000 moles of hydrogen over 60 hours of electrolysis with an applied potential of -1.25 V vs. standard hydrogen electrode.
Ru(II) with three 2,2'-bipyridine ligands is a common photosensitizer used for photocatalytic oxidative transformations like water splitting. However, the bipyridine degrades under the strongly oxidative conditions, which causes the concentration of Ru(bpy)32+ to diminish. Measurement of the degradation is difficult with UV-Vis spectroscopy, but MALDI MS can be used instead.
Cobalt-based photocatalysts have been reported, including tris(bipyridine) cobalt(II), compounds of cobalt ligated to certain cyclic polyamines, and some cobaloximes.
In 2014 researchers announced an approach that connected a chromophore to part of a larger organic ring that surrounded a cobalt atom. The process is less efficient than with a platinum catalyst, although cobalt is less expensive, potentially reducing costs. The process uses one of two supramolecular assemblies based on Co(II)-templated coordination of Ru(bpy)32+ (bpy = 2,2′-bipyridyl) analogues as photosensitizers and electron donors to a cobaloxime macrocycle. The Co(II) centers of both assemblies are high spin, in contrast to most previously described cobaloximes. Transient absorption optical spectroscopies indicate that charge recombination occurs through multiple ligand states within the photosensitizer modules.
Bismuth vanadate
Bismuth vanadate is a visible-light-driven photocatalyst with a bandgap of 2.4 eV. BiVO4 has demonstrated efficiencies of 5.2% for flat thin films and 8.2% for core-shell WO3@BiVO4 nanorods with thin absorbers.
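For orientation (simple arithmetic, not a figure from the cited work), the 2.4 eV bandgap corresponds to an absorption edge in the visible range:

\lambda_{\mathrm{edge}} \approx \frac{1240\ \mathrm{eV\,nm}}{2.4\ \mathrm{eV}} \approx 517\ \mathrm{nm}

so bismuth vanadate can absorb green, blue and violet light, consistent with its description as a visible-light-driven photocatalyst.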
Bismuth oxides
Bismuth oxides are characterized by visible light absorption properties, just like vanadates.
Tungsten diselenide (WSe2)
Tungsten diselenide has photocatalytic properties that might be a key to more efficient electrolysis.
III-V semiconductor systems
Systems based on III-V semiconductors, such as InGaP, enable solar-to-hydrogen efficiencies of up to 14%. Challenges include long-term stability and cost.
2D semiconductor systems
Two-dimensional semiconductors such as MoS2 are actively researched as potential photocatalysts.
Aluminum‐based metal-organic frameworks
An aluminum-based metal-organic framework made from 2-aminoterephthalate can be modified by incorporating Ni2+ cations into the pores through coordination with the amino groups.
Molybdenum disulfide
Porous organic polymers
Organic semiconductor photocatalysts, in particular porous organic polymers (POPs), attracted attention due to their low cost, low toxicity, and tunable light absorption vs inorganic counterparts. They display high porosity, low density, diverse composition, facile functionalization, high chemical/thermal stability, as well as high surface areas. Efficient conversion of hydrophobic polymers into hydrophilic polymer nano-dots (Pdots) increased polymer-water interfacial contact, which significantly improved performance.
Ansa-Titanocene(III/IV) Triflate Complexes
Beweries, et al., developed a light-driven "closed cycle of water splitting using ansa-titanocene(III/IV) triflate complexes".
Indium gallium nitride
An indium gallium nitride (InxGa1-xN) photocatalyst achieved a solar-to-hydrogen efficiency of 9.2% from pure water and concentrated sunlight. The efficiency is due to the synergistic effects of promoting hydrogen–oxygen evolution and inhibiting recombination by operating at an optimal reaction temperature (about 70 °C), powered by harvesting previously wasted infrared light. An STH efficiency of about 7% was realized from tap water and seawater, and an efficiency of 6.2% was achieved in a larger-scale system with a solar light capacity of 257 watts.
Sacrificial reagents
Cd1-xZnxS solid solutions with different Zn concentrations (0.2 < x < 0.35) have been investigated for the production of hydrogen from aqueous solutions containing sulfide and sulfite ions as sacrificial reagents under visible light. Textural, structural and surface catalyst properties were determined by adsorption isotherms, UV–vis spectroscopy, SEM and XRD and related to the activity results in hydrogen production from water splitting under visible light. It was reported that the crystallinity and energy band structure of the solid solutions depend on their Zn atomic concentration. The hydrogen production rate increased gradually as the Zn concentration of the photocatalysts increased from 0.2 to 0.3. A subsequent increase in the Zn fraction up to 0.35 reduced production. The variation in photoactivity was analyzed in terms of changes in crystallinity, the level of the conduction band, and the light absorption ability of the solid solutions, all of which derive from their Zn atomic concentration.
Further reading
See also
Artificial photosynthesis
High-temperature electrolysis
Photochemical reduction of carbon dioxide
Electrolysis of water
Photoelectrolysis of water
References
Environmental chemistry
Hydrogen production
Photochemistry | Photocatalytic water splitting | [
"Chemistry",
"Environmental_science"
] | 2,440 | [
"Environmental chemistry",
"nan"
] |
22,245,276 | https://en.wikipedia.org/wiki/Chinese%20aircraft%20carrier%20programme | The Chinese People's Liberation Army Navy (PLAN) has two active carriers, the Liaoning and the Shandong, with the third, the Fujian, currently undergoing sea trials. A fourth carrier, currently called "Type 004" and featuring nuclear propulsion, might be under construction. Wang Yunfei, a retired PLA Navy officer, and other naval experts projected in 2018/2019 that China might possess five or six aircraft carriers by the 2030s.
In the years after 1985, China acquired four retired aircraft carriers for study, namely the British-built Australian HMAS Melbourne and the ex-Soviet carriers Minsk, Kiev, and Varyag. The Varyag later underwent an extensive refit to be converted into the Liaoning, China's first operational aircraft carrier, which also served as a basis for China's subsequent design iterations. China's PLAN had had ambitions to develop and operate aircraft carriers since the 1970s.
History
Early ambitions
Since the 1970s, the PLAN has expressed interest in operating an aircraft carrier as part of its blue water aspirations.
To prepare the commanders needed for the future aircraft carriers, the Central Military Commission approved the program of training jet fighter pilots to be future captains in May 1987, and the Guangzhou Naval Academy was selected as the site.
Acquisition of HMAS Melbourne
China acquired the Royal Australian Navy's decommissioned light aircraft carrier HMAS Melbourne in February 1985, when it was sold to the China United Shipbuilding Company to be towed to China and broken up for scrap. Prior to the ship's departure for China, the RAN stripped Melbourne of all electronic equipment and weapons, and welded her rudders into a fixed position so that she could not be reactivated. However, her steam catapult, arresting equipment and mirror landing system were not removed. At this time, few western experts expected that the Chinese government would attempt to develop aircraft carriers in the future. Melbourne finally arrived in China on 13 June.
The ship was not scrapped immediately; instead she was studied by Chinese naval architects and engineers. It is unclear whether the People's Liberation Army Navy (PLAN) orchestrated the acquisition of Melbourne or simply took advantage of the situation; Rear Admiral Zhang Zhaozhong, a staff member at the National Defence College, has stated that the Navy was unaware of the purchase until Melbourne first arrived at Guangzhou. Melbourne was the largest warship any of the Chinese experts had seen, and they were surprised by the amount of equipment which was still in place. The PLAN subsequently arranged for the ship's flight deck and all the equipment associated with flying operations to be removed so that they could be studied in depth. Reports have circulated that either a replica of the flight deck, or the deck itself, was used for training of People's Liberation Army Navy pilots in carrier flight operations.
Chinese engineers reverse-engineered a land-based replica of the steam catapult and landing system from that of Melbourne, and a J-8IIG was used to conduct take-off and landing trials on the land-based flight deck in April 1987, which was not finally confirmed officially until 27 years later in April 2014 by CCTV-13. Both the take-off and landing were conducted on the same day, and the test pilot was PLANAF pilot Li Guoqiang. The experience gained would later be applied to the development of the Shenyang J-15.
It has also been claimed that the Royal Australian Navy received and "politely rejected" a request from the PLAN for blueprints of the ship's steam catapult. The carrier was not dismantled for many years; according to some rumours she was not completely broken up until 2002.
Other acquisition attempts
China also negotiated with Spain in an effort to purchase the blueprints for proposed conventional take-off/landing carriers from Empresa Nacional Bazán, specifically the 23,000-ton SAC-200 and 25,000-ton SAC-220 designs. Negotiations started between 1995 and 1996 but did not result in any purchase. However, the Spanish firm was paid several million US dollars in consulting fees, indicating the probable transfer of some design concepts.
China acquired the former Soviet carriers Minsk in 1995 and Kiev in 2000. Minsk, along with a sister ship, was initially sold to South Korea in 1995 to be scrapped, but due to objections from environmentalists, Minsk was resold to China in 1998 to be broken up there instead. Kiev, likewise, was sold to China in 2000 by Russia with a contractual requirement for it to be scrapped. However, neither ship was dismantled and both were instead converted into tourist attractions, with Minsk turned into a theme park and Kiev a luxury hotel.
In 1997, China attempted to purchase a retired French aircraft carrier, but negotiations between China and France failed.
Liaoning (Type 001)
The 67,500-ton ex-Soviet aircraft carrier Varyag, which was only 68% completed and floating in Ukraine, was purchased through a private Macau tourist venture in 1998. Following her troublesome tow to the Dalian shipyard, the carrier underwent a long refit. Varyag had been stripped of any military equipment as well as her propulsion systems prior to being put up for sale. In 2007 there were news reports that she was being fitted out to enter service.
In 2011, People's Liberation Army Chief of the General Staff Chen Bingde confirmed that China was constructing at least one aircraft carrier. On 10 August 2011, it was announced that the refurbishment of Varyag was complete, and that it was undergoing sea trials.
On 14 December 2011, DigitalGlobe, an American satellite imaging company, announced that while scouring through pictures taken 8 December, it had discovered the retrofitted Varyag undergoing trials. DigitalGlobe further stated that its images captured the ship in the Yellow Sea, where it operated for 5 days.
In September 2012, it was announced that this carrier would be named Liaoning, after Liaoning Province of China, and later that month China's first aircraft carrier, Liaoning, was commissioned. On 23 September 2012, Liaoning was handed over to the People's Liberation Army Navy, but was not yet in active service.
In November 2012, the first landing on Liaoning was successfully conducted with a Shenyang J-15.
Four years later, in November 2016, it was reported that Liaoning was combat ready. China has confirmed that it is constructing a second carrier that will be built entirely with indigenous Chinese designs. Similar to Liaoning, China's second carrier will also use a ski jump for takeoff.
Current status
In mid-2007, Chinese domestic sources revealed that China had purchased a total of four sets of aircraft carrier landing systems from Russia, and this was confirmed by Russian manufacturers. However, experts disagreed on the usage of these systems: while some claimed it was clear evidence of the construction of an aircraft carrier, others claimed these systems would be used to train pilots for a future ship. Reports initially claimed that up to two carriers based on the Varyag would be started by 2015.
According to the Nippon News Network (NNN), research and development on the planned carriers is being carried out at a military research facility in Wuhan. NNN states that the actual carriers will be constructed at Jiangnan Shipyard in Shanghai. Kanwa Intelligence Review reports that the second carrier to be constructed will likely be assigned to Qingdao.
According to a February 2011 report in The Daily Telegraph, the Chinese military has constructed a concrete aircraft carrier flight deck to use for training carrier pilots and carrier operations personnel. The deck was constructed on top of a government building near Wuhan (Wuhan Technical College of Communication campus next to Huangjiahu). On 7 June 2011, People's Liberation Army Chief of the General Staff Chen Bingde confirmed that China was constructing its own aircraft carrier. He stated he would provide no further details until it was complete.
On 30 July 2011, a senior researcher at the Academy of Military Sciences said China needed at least three aircraft carriers. "If we consider our neighbours, India will have three aircraft carriers by 2014 and Japan will have three carriers by 2014, so I think the number (for China) should not be less than three so we can defend our rights and our maritime interests effectively," said General Luo Yuan. In July 2011, a Chinese official announced that two aircraft carriers were being built at the Jiangnan Shipyard in Shanghai. On 21 May 2012, Taiwan's intelligence chief Tsai Teh-sheng told the Legislative Yuan that the PLA Navy planned to build two carriers, scheduled to start construction in 2013 and 2015 and launch in 2020 and 2022 respectively. The price of the two vessels was estimated at US$9 billion. On 24 April 2013, Chinese Rear Admiral Song Xue confirmed that China would build more carriers and that these would be larger and would carry more fighter planes than Liaoning.
Shandong (Type 002)
The Type 002, or Shandong, is China's first domestically produced aircraft carrier. Construction began in November 2013 at the Dalian Shipyard and the ship was launched on 26 April 2017. After being fitted out, China's first domestically produced aircraft carrier underwent nine sea trials over the course of 18 months, starting from May 2018. The ship was formally commissioned into service on 19 December 2019 as the Shandong, with pennant number "17".
The 002 is a conventionally powered ski-jump carrier with a displacement of around 70,000 tonnes. The ship is derived from the Liaoning. It uses conventional steam turbines with diesel generators as propulsion. The Shandong is a significant improvement over the Soviet-built Liaoning. For example, the Shandong carrier's ski-jump has an angle of 12.0°, an angle ideal for launching the Shenyang J-15 fighter, instead of the 14.0° on the Liaoning. Together with the enlarged hangar, the island (which has been made 10% smaller), and extended sponsons in the aft-starboard quarter, space has been freed up, allowing up to eight more aircraft and helicopters to be carried. The island includes a second glazed deck, which permits the bridge and flight control areas to be separate, creating greater operational efficiency. It also features a faceted upper area with four active electronically scanned arrays (AESAs) for the S-band Type 346 radar.
Fujian (Type 003)
The third aircraft carrier, known as Fujian, is an entirely different design from Liaoning and Shandong. It is the largest of China's current fleet. The Type 003 has a displacement of over 80,000 tonnes and is smaller in size than the US Navy's Ford-class ships.
In 2015, media reports stated that both an electromagnetic catapult and a steam-powered catapult were constructed at the Huangdicun naval base for testing; this is thought to indicate that the Type 003 class as well as future PLAN carriers could possibly be CATOBAR carriers.
The construction of the first Type 003 class aircraft carrier started in February 2017. Satellite observation at the Jiangnan Shipyard in Shanghai only showed one carrier of this type under construction.
It was reported in early 2021 that the first ship would be launched that year, with construction already started on a second ship in this class. On 10 November 2021, Bloomberg reported that "China is three to six months away from launching its third aircraft carrier", citing a report by the Center for Strategic and International Studies. On 17 June 2022, the Type 003, now named Fujian, was officially completed and launched. In January 2024, the Fujian was carrying out mooring tests in preparation for its maiden voyage.
On 1 May 2024, the Fujian officially commenced its first sea trials.
Type 004
The Type 004 is planned to be larger than the Type 003, and also to feature nuclear propulsion. It is claimed that construction started in December 2017 at Jiangnan Shipyard.
Development of carrier-based aircraft
China initially, in the 2000s, intended to acquire Russian Sukhoi Su-33 carrier-based aircraft to be operated from its aircraft carriers. However, China later, starting in 2006, developed the Shenyang J-15 as a derivative of the Su-33, featuring Chinese technology and avionics from the J-11B program. On 25 November 2012, it was announced that at least two Shenyang J-15s had successfully landed on Liaoning. The pilot credited with having achieved the first landing was Dai Mingmeng.
According to Chinese media reports, the J-15 cannot take off from Liaoning with a full fuel and munition load, being unable to get off the carrier's ski-jump ramp beyond a certain payload. In a follow-up review in The Diplomat in 2021, Rick Joe argued that this Chinese media source was unreliable and that the J-15 was able to take off at maximum take-off weight when the speed of the carrier was taken into account.
The Shenyang FC-31 (also possibly called J-35 in military use) is an in-development medium-sized fifth-generation stealth fighter that may in future be adopted for carrier use. The South China Morning Post reported on 6 July 2018 that China is developing an upgraded variant of the FC-31 as an alternate carrier operational jet. The FC-31 may enter its production phase, and military service, in 2026.
The Xi'an KJ-600 is an in-development, high straight-wing AEW&C aircraft suspected to be fitted with an AESA-type radome system, and the current non-flying mock-up bears a striking external resemblance to the Northrop Grumman E-2 Hawkeye, a carrier-based AEW&C aircraft with aftward-folding wings serving the United States Navy. The design is likely to be a case of form following function, as the cancelled Soviet Yak-44 shared the same layout. Rick Joe, who writes extensively on Chinese aviation and naval developments for The Diplomat, commented that "fixed-wing carrier-borne AEW&C are a vital and essential part to any navy that seeks to field a robust and capable carrier airwing, and their ability to enhance a carrier group's offensive and defensive capabilities and overall situational awareness and network-centric warfare is unmatched by any other platform type that will exist in the near future."
Analyst H. I. Sutton believed the KJ-600 will be a massive boost to the Chinese Navy, and "once it enters service on the carriers, it will greatly enhance the aerial and maritime situational awareness, and the offensive and defensive capabilities of the carrier group", and that "Chinese aerospace and military industry has certainly shown its ability to develop quite modern and capable AEW&C systems for other air, naval and ground applications".
List of carriers
See also
List of aircraft carriers in service
DF-21D – Chinese anti-ship ballistic missile
References
External links
China / Aircraft Carrier Project, GlobalSecurity.org
Varyag: The Mysterious Journey from Ukraine to China, Varyag.com
Rick Joe, A Mid-2019 Guide to Chinese Aircraft Carriers: What is the future trajectory of the Chinese People's Liberation Army Navy carrier program?, The Diplomat, 18 June 2019. Notes varying use of "001A" and "002" designations.
Proposed aircraft carriers
Projects established in 2015
2015 in China | Chinese aircraft carrier programme | [
"Engineering"
] | 3,074 | [
"Military projects",
"Proposed aircraft carriers"
] |
22,247,480 | https://en.wikipedia.org/wiki/Heptaquark | In particle physics, heptaquarks are a family of hypothetical composite particles, each consisting of seven quarks or antiquarks of any flavours.
Properties
One model predicts that the lowest-energy heptaquark state would be a spin-1/2 or spin-3/2 state of energy roughly 2.5 GeV. Another study found that the most stable heptaquark would include three strange quarks and two strange antiquarks.
See also
Exotic baryon
References
Baryons
Hypothetical composite particles | Heptaquark | [
"Physics"
] | 110 | [
"Particle physics stubs",
"Particle physics"
] |
22,249,204 | https://en.wikipedia.org/wiki/Franz%20Hillenkamp | Franz Hillenkamp (March 18, 1936 – August 22, 2014) was a German scientist known for his development of the laser microprobe mass analyzer and, with Michael Karas, matrix-assisted laser desorption/ionization (MALDI).
Early life and education
Franz Hillenkamp was born in 1936 in Essen, Germany. He attended high school in Lünen, graduating in 1955. He received an M.S. degree in electrical engineering from Purdue University in 1961. He received a Ph.D. (Dr.-Ing.) from the Technische Universität München in 1966 with a thesis entitled "An Absolutely Calibrated Calorimeter for the Measurement of Pulsed Laser Radiation."
Academic career
Hillenkamp was a professor at Goethe University Frankfurt in Frankfurt from 1982 to 1986. In 1986, he became a professor on the Medical Faculty of the University of Münster where he remained until his retirement in 2001.
Laser microprobe
In 1973, Hillenkamp developed a high performance laser microprobe mass spectrometer with a spatial resolution of 0.5 μm and sub-attogram limit of detection for lithium atoms. This instrument was commercialized as the LAMMA 500 and was one of the first laser desorption mass spectrometers to be used for mass spectrometry imaging of tissue. The later LAMMA 1000 was also based on a Hillenkamp design.
MALDI
In 1985, Hillenkamp and his colleague Michael Karas used a LAMMA 1000 mass spectrometer to demonstrate the technique of matrix-assisted laser desorption/ionization (MALDI). MALDI is an ionization method used in mass spectrometry, allowing the analysis of large biopolymers. Although Karas and Hillenkamp were the first to discover MALDI, Japanese engineer Koichi Tanaka was the first to use a similar method in 1988 to ionize proteins and shared the Nobel Prize in Chemistry in 2002 for that work. Karas and Hillenkamp reported MALDI of proteins a few months later. The MALDI method of Karas and Hillenkamp subsequently became the much more widely used method.
Awards
In 1997, Hillenkamp and Karas were awarded the American Society for Mass Spectrometry Distinguished Contribution in Mass Spectrometry award for their discovery of MALDI. Hillenkamp and Karas received the Karl Heinz Beckurts Award, Germany's most important award for outstanding promotion of the partnership between science and industry, in 2003. Hillenkamp received the Thomson Medal from the International Mass Spectrometry Foundation in 2003.
SPIE, the international society for optics and photonics, created a postdoctoral fellowship in honor of Franz Hillenkamp. The SPIE-Franz Hillenkamp Postdoctoral Fellowship in Problem-Driven Biomedical Optics and Analytics offers an annual grant of US $75,000. This fellowship aims to facilitate the translation of cutting-edge biomedical optics and biophotonics technologies into practical applications within clinical settings, ultimately contributing to advancements in human healthcare.
See also
History of mass spectrometry
References
External links
Franz Hillenkamp on masspec.scripps.edu
Franz Hillenkamp biography on Munster University website
Mass spectrometrists
Scientists from North Rhine-Westphalia
20th-century German chemists
1936 births
2014 deaths
Thomson Medal recipients
People from Essen | Franz Hillenkamp | [
"Physics",
"Chemistry"
] | 701 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
22,250,489 | https://en.wikipedia.org/wiki/Berkeley%20Center%20for%20New%20Media | The Berkeley Center for New Media (BCNM) is a research, teaching, and public events program at UC Berkeley. Its mission is to critically analyze and help shape developments in new media from cross-disciplinary and global perspectives that emphasize humanities and the public interest. Founded in 2004 by Linda Williams, Ken Goldberg, Greg Niemeyer, Whitney Davis, and Cathy Koshland, the organization seeks to study new media from three disciplinary perspectives, the humanities, the arts, and technology. BCNM awards Designated Emphasis Degrees in New Media and Masters Certificates to graduate students and Undergraduate Certificates to undergraduate students at UC Berkeley.
BCNM's spaces are shared between Sutardja Dai Hall and the Moffitt Undergraduate Library.
The BCNM seeks to highlight and critically examine the opportunities and risks associated with new media, and to consider how they can constructively benefit education, political engagement, privacy, and aesthetic experience.
The BCNM serves as a focal point for unconventional historical and contemporary thinking from a diverse community of over 120 affiliated faculty, advisors, and scholars from over 35 UC Berkeley departments, including architecture, philosophy, film studies, art history, performance studies, music, the schools of engineering, information, journalism, law, and the Berkeley Art Museum.
The BCNM catalyzes research and educates future leaders. The BCNM presents courses, symposia and special events for students, researchers, industry, and the public to seek out, consider, and develop innovative theories of contemporary new media. It offers a special program for UC Berkeley PhD students and has established new cross-disciplinary faculty positions. The BCNM facilitates traditional modes of scholarship, hosts critical dialogues, and encourages unorthodox artworks, designs, and experiments.
Notable faculty
Nicholas de Monchaux: BCNM Director and Associate Professor of Architecture and Urban Design
Ken Goldberg
Abigail De Kosnik
Notable alumni
Trevor Paglen: PhD in Geography - notable space artist
Programs
The Art, Technology, and Culture Colloquium is a forum for presenting new ideas that challenge conventional wisdom about technology and culture. BCNM frequently hosts the ATC Lecture Series. This series, free of charge and open to the public, presents artists, writers, curators, and scholars who consider contemporary issues at the intersection of aesthetic expression, emerging technologies, and cultural history, from a critical perspective.
The History and Theory of New Media lecture series brings to campus leading humanities scholars working on issues of media transition and technological emergence. The series promotes new, interdisciplinary approaches to questions about the uses, meanings, causes, and effects of rapid or dramatic shifts in techno-infrastructure, information management, and forms of mediated expression. Presented by the Berkeley Center for New Media, these events are free and open to the public.
The Commons Conversations: Technology and Public Life in Changing Times series is a discussion series on the impact of new media on our current political and public climate.
The Design Futures lecture series has been discontinued.
See also
Howison Lectures
Tarski Lectures
External links
Official website
Design Futures
UC Berkeley
University of California, Berkeley
2004 establishments in California
Organizations established in 2004
New media
Media studies
University and college lecture series | Berkeley Center for New Media | [
"Technology"
] | 639 | [
"Multimedia",
"New media"
] |
22,250,817 | https://en.wikipedia.org/wiki/ESD%20simulator | An ESD simulator, also known as an ESD gun, is a handheld unit used to test the immunity of devices to electrostatic discharge (ESD). These simulators are used in special electromagnetic compatibility (EMC) laboratories. ESD pulses are fast, high-voltage pulses created when two objects with different electrical charges come into close proximity or contact. Recreating them in a test environment helps to verify that the device under test is immune to static electricity discharges.
ESD testing is necessary to receive a CE mark, and for most suppliers of components for motor vehicles as part of required electromagnetic compatibility testing. It is often useful to automate these tests to eliminate the human factor.
There are three distinct test models for electrostatic discharge: the human-body, machine, and charged-device models. The human-body model emulates the action of a human body discharging static electricity, the machine model simulates static discharge from a machine, and the charged-device model simulates the charging and discharging events that occur in production processes and equipment.
Many ESD guns have interchangeable modules containing different discharge networks or RC modules (specific resistance and capacitance values) to simulate different discharges. These modules typically slide into the handle of the pistol portion of the ESD simulator, much like loading some handguns. They change the characteristics of the waveshape discharged from the pistol and are called out in general standards like IEC 61000-4-2 and SAE J113 and in industry-specific standards like ISO 10605. Resistance is expressed in ohms (Ω) and capacitance in picofarads (pF, or "puff"). The most commonly used discharge network, specified by IEC 61000-4-2 and ISO 10605, is 150 pF/330 Ω. There are over 50 combinations of resistance and capacitance depending on the standards and the applicable electronics.
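As a back-of-the-envelope sketch (an idealized RC picture only; the actual IEC 61000-4-2 waveform is specified by rise time and current targets rather than by this simple model), the 150 pF/330 Ω network implies the following time constant and stored energy:

# Sketch: idealized figures for an ESD simulator discharge network.
# Real IEC 61000-4-2 waveforms also depend on parasitics and gun geometry;
# this is only an order-of-magnitude illustration.
def rc_network_figures(capacitance_f, resistance_ohm, charge_voltage_v):
    tau_s = resistance_ohm * capacitance_f                   # discharge time constant
    energy_j = 0.5 * capacitance_f * charge_voltage_v ** 2   # stored energy
    return tau_s, energy_j

# 150 pF / 330 ohm network charged to 8 kV (a common contact-discharge test level)
tau, energy = rc_network_figures(150e-12, 330.0, 8000.0)
print(f"time constant ~{tau*1e9:.1f} ns, stored energy ~{energy*1e3:.1f} mJ")
# -> time constant ~49.5 ns, stored energy ~4.8 mJ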
Test standards
Standards that require ESD testing include:
ISO 10605
Ford EMC
ISO/EN 61000-4-2 needed for the CE mark
IEC 61000-4-2
ISO TR10605
MIL-STD-883
MIL-STD-1512
GR-78-CORE
RTCA/DO 160
References
Hardware testing
Electronic engineering
Electromagnetic compatibility
de:ESD-Simulationsmodelle | ESD simulator | [
"Technology",
"Engineering"
] | 482 | [
"Electromagnetic compatibility",
"Radio electronics",
"Computer engineering",
"Electronic engineering",
"Electrical engineering"
] |
12,799,505 | https://en.wikipedia.org/wiki/GYRO | GYRO is a computational plasma physics code developed and maintained at General Atomics. It solves the 5-D coupled gyrokinetic-Maxwell equations using a combination of finite difference, finite element and spectral methods. Given plasma equilibrium data, GYRO can determine the rate of turbulent transport of particles, momentum and energy.
External links
GYRO Homepage at General Atomics
Computational physics
Physics software
Plasma theory and modeling
Tokamaks | GYRO | [
"Physics"
] | 92 | [
"Plasma physics",
"Computational physics",
"Plasma physics stubs",
"Plasma theory and modeling",
"Computational physics stubs",
"Physics software"
] |
12,800,585 | https://en.wikipedia.org/wiki/Tinyatoxin | Tinyatoxin (TTX or TTN) is an analog of the neurotoxin resiniferatoxin. It occurs naturally in Euphorbia poissonii.
It is a neurotoxin that acts via full agonism of the vanilloid receptors of sensory nerves. Tinyatoxin has potential pharmaceutical uses similar to those of capsaicin. Tinyatoxin is about one third as potent as resiniferatoxin but is still an ultrapotent analogue of capsaicin, with an estimated heat intensity 300 to 350 times that of capsaicin.
References
Plant toxins
Terpenes and terpenoids
Carboxylate esters
Orthoesters
Ion channel toxins
Benzyl compounds | Tinyatoxin | [
"Chemistry"
] | 149 | [
"Biomolecules by chemical classification",
"Chemical ecology",
"Natural products",
"Plant toxins",
"Organic compounds",
"Terpenes and terpenoids",
"Organic compound stubs",
"Organic chemistry stubs"
] |
12,801,033 | https://en.wikipedia.org/wiki/Film-forming%20agent | Film-forming agents are a group of chemicals that leave a pliable, cohesive, and continuous covering over the hair or skin when applied to their surface. This film has strong hydrophilic properties and leaves a smooth feel on skin.
Film-forming agents include polyvinylpyrrolidone (PVP), acrylates, acrylamides, and copolymers.
They are commonly found as ingredients of cosmetics, particularly hair-care products, but also moisturizers and other skin-care products.
Side effects
Film-forming agents can be skin sensitizers for some individuals.
References
Hairdressing
Cosmetics chemicals
Fluid dynamics | Film-forming agent | [
"Chemistry",
"Engineering"
] | 134 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
12,804,558 | https://en.wikipedia.org/wiki/Biosynthesis%20of%20doxorubicin | Doxorubicin (DXR) is a 14-hydroxylated version of daunorubicin, the immediate precursor of DXR in its biosynthetic pathway. Daunorubicin is more abundantly found as a natural product because it is produced by a number of different wild type strains of Streptomyces. In contrast, only one known non-wild type species, Streptomyces peucetius subspecies caesius ATCC 27952, was initially found to be capable of producing the more widely used doxorubicin. This strain was created by Arcamone et al. in 1969 by mutating a strain producing daunorubicin, but not DXR, at least in detectable quantities. Subsequently, Hutchinson's group showed that under special environmental conditions, or by the introduction of genetic modifications, other strains of streptomyces can produce doxorubicin. His group has also cloned many of the genes required for DXR production, although not all of them have been fully characterized. In 1996, Strohl's group discovered, isolated and characterized dox A, the gene encoding the enzyme that converts daunorubicin into DXR. By 1999, they produced recombinant Dox A, a Cytochrome P450 oxidase, and found that it catalyzes multiple steps in DXR biosynthesis, including steps leading to daunorubicin. This was significant because it became clear that all daunorubicin producing strains have the necessary genes to produce DXR, the much more therapeutically important of the two. Hutchinson's group went on to develop methods to improve the yield of DXR, from the fermentation process used in its commercial production, not only by introducing Dox A encoding plasmids, but also by introducing mutations to deactivate enzymes that shunt DXR precursors to less useful products, for example baumycin-like glycosides. Some triple mutants, that also over-expressed Dox A, were able to double the yield of DXR. This is of more than academic interest because at that time DXR cost about $1.37 million per kg and current production in 1999 was 225 kg per annum. More efficient production techniques have brought the price down to $1.1 million per kg for the non-liposomal formulation. Although DXR can be produced semi-synthetically from daunorubicin, the process involves electrophilic bromination and multiple steps and the yield is poor. Since daunorubicin is produced by fermentation, it would be ideal if the bacteria could complete DXR synthesis more effectively.
Overview
The anthracycline skeleton of doxorubicin (DXR) is produced by a Type II polyketide synthase (PKS) in Streptomyces peucetius. First, a 21-carbon decaketide chain (Fig 1. (1)) is synthesized from a single 3-carbon propionyl group from propionyl-CoA, and 9 2-carbon units derived from 9 sequential (iterative) decarboxylative condensations of malonyl-CoA. Each malonyl-CoA unit contributes a 2-carbon ketide unit to the growing polyketide chain. Each addition is catalyzed by the "minimal PKS", consisting of an acyl carrier protein (ACP), a ketosynthase (KS)/chain length factor (CLF) heterodimer and a malonyl-CoA:ACP acyltransferase (MAT) (refer to the top of Figure 1).
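The 21-carbon count follows directly from the starter and extender units (simple bookkeeping, added here only as a check):

C_{\mathrm{total}} = \underbrace{3}_{\text{propionyl starter}} + 9 \times \underbrace{2}_{\text{ketide carbons from malonyl-CoA}} = 21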
This process is very similar to fatty acid synthesis, by fatty acid synthases and to Type I polyketide synthesis. But, in contrast to fatty acid synthesis, the keto groups of the growing polyketide chain are not modified during chain elongation and they are not usually fully reduced. In contrast to Type I PKS systems, the synthetic enzymes (KS, CLF, ACP and AT) are not attached covalently to each other, and may not even remain associated during each step of the polyketide chain synthesis.
After the 21-carbon decaketide chain of DXR is completed, successive modifications are made to eventually produce a tetracyclic anthracycline aglycone (without glycoside attached). The daunosamine amino sugar, activated by addition of thymidine diphosphate (TDP), is created in another series of reactions. It is joined to the anthracycline aglycone and further modifications are made to produce first daunorubicin and then DXR.
There are at least 3 gene clusters important to DXR biosynthesis: dps genes which specify the enzymes required for the linear polyketide chain synthesis and its first cyclizations, the dnr cluster is responsible for the remaining modifications of the anthracycline structure and the dnm genes involved in the amino sugar, daunosamine, synthesis. Additionally, there is a set of "self resistance" genes to reduce the toxic impact of the anthracycline on the producing organism. One mechanism is a membrane pump that causes efflux of the DXR out of the cell (drr loci). Since these complex molecules are only advantageous under specific conditions, and require a lot of energy to produce, their synthesis is tightly regulated.
Polyketide Chain Synthesis
Doxorubicin is synthesized by a specialized polyketide synthase.
The initial event in DXR synthesis is the selection of the propionyl-CoA starter unit and its decarboxylative addition to a two-carbon ketide unit derived from malonyl-CoA, producing the five-carbon β-ketovaleryl-ACP. The five-carbon diketide is delivered by the ACP to the cysteine sulfhydryl group at the KS active site, by thioester exchange, and the ACP is released from the chain. The free ACP picks up another malonate group from malonyl-CoA, also by thioester exchange, with release of the CoA. The ACP brings the new malonate to the active site of the KS, where it is decarboxylated, possibly with the help of the CLF subunit, and joined to produce a 7-carbon triketide, now anchored to the ACP (see top of Figure 1). Again the ACP hands the chain off to the KS subunit, and the process is repeated iteratively until the decaketide is completed.
In most Type II systems the initiating event is delivery by ACP of an acetate unit, derived from acetyl-CoA, to the active site of the ketosynthase (KS) subunit of the KS/CLF heterodimer. The default mode for Type II PKS systems is the incorporation of acetate as the primer unit, and that holds true for the DXR "minimal PKS". In other words, the action of KS/CLF/ACP (Dps A, B and G) from this system will not produce 21-carbon decaketides, but 20-carbon decaketides instead, because acetate is the “preferred” starter. The process of specifying propionate is not completely understood, but it is clear that it depends on an additional protein, Dps C, which may be acting as a ketosynthase or acyltransferase selective for propionyl-CoA, and possibly Dps D makes a contribution.
A dedicated MAT has been found to be dispensable for polyketide production under in vitro conditions. The PKS may "borrow" the MAT from its own fatty acid synthase and this may be the primary way ACP receives its malonate group in DXR biosynthesis. Additionally, there is excellent evidence that "self-malonylation" is an inherent characteristic of Type II ACPs. In summary, a given Type II PKS may provide its own MAT (s), it may borrow one from FAS, or its ACP may “self-malonylate”.
It is unknown whether the same KS/CLF/ACP ternary complex chaperones the growth of a full-length polyketide chain through the entire catalytic cycle, or whether the ACP dissociates after each condensation reaction. A 2.0-Å resolution structure of the actinorhodin KS/CLF, which is very similar to the dps KS/CLF, shows polyketides being elongated inside an amphipathic tunnel formed at the interface of the KS and CLF subunits. The tunnel is about 17-Å long and one side has many charged amino acid residues which appear to be stabilizing the carbonyl groups of the chain, while the other side is hydrophobic. This structure explains why both subunits are necessary for chain elongation and how the reactive growing chain is protected from random spontaneous reactions until it is positioned properly for orderly cyclization. The structure also suggests a mechanism for chain length regulation. Amino acid side groups extend into the tunnel and act as "gates". A couple of particularly bulky residues may be impassable by the chain, causing termination. Modifications to tunnel residues based on this structure were able to alter the chain length of the final product. The final condensation causes the polyketide chain to "buckle" allowing an intramolecular attack by the C-12 methylene carbanion, generated by enzyme catalyzed proton removal and stabilized by electrostatic interactions in the tunnel, on the C-7 carbonyl (see 3 in Figure 1). This tunnel aided intramolecular aldol condensation provides the first cyclization when the chain is still in the tunnel. The same C-7/C-12 attack occurs in the biosynthesis of DXR, in a similar fashion.
Conversion to 12-deoxyalkalonic acid
The 21-carbon decaketide is converted to 12-deoxyalkalonic acid (5), the first free easily isolated intermediate in DXR biosynthesis, in 3 steps. These steps are catalyzed by the final 3 enzymes in the dps gene cluster and are considered part of the polyketide synthase.
While the decaketide is still associated with the KS/CLF heterodimer the 9-carbonyl group is reduced by Dps E, the 9-ketoreductase, using NADPH as the reducing agent/hydride donor. Dps F, the “1st ring cyclase” /aromatase, is very specific and is in the family of C-7/C-12 cyclases that require prior C-9 keto-reduction. These two reactions are felt to occur while the polyketide chain is still partially in the KS/CLF tunnel and it is not known what finally cleaves the chain from its covalent link to the KS or ACP. If the Dps F cyclase is inactivated by mutations or gene deletions, the chain will cyclize spontaneously in random fashion. Thus, Dps F is thought to “chaperone” or help fold the polyketide to ensure non-random cyclization, a reaction that is energetically favorable and leads to subsequent dehydration and resultant aromatization.
Next, Dps Y regioselectively promotes formation of the next two carbon-carbon bonds and then catalyzes dehydration leading to aromatization of one of the rings to give (5).
Conversion to ε-rhodomycinone
The next reactions are catalyzed by enzymes originating from the dnr gene cluster. Dnr G, a C-12 oxygenase (see (5) for numbering) introduces a keto group using molecular oxygen. It is an "anthrone type oxygenase", also called a quinone-forming monooxygenase, many of which are important 'tailoring enzymes' in the biosynthesis of several types of aromatic polyketide antibiotics. They have no cofactors: no flavins, metals or energy sources. Their mechanism is poorly understood but may involve a "protein radical".
Alkalonic acid (6), a quinone, is the product. Dnr C, alkalonic acid-O-methyltransferase methylates the carboxylic acid end of the molecule forming an ester, using S-adenosyl methionine (SAM) as the cofactor/methyl group donor. The product is alkalonic acid methyl ester (7). The methyl group is removed later, but it serves to activate the adjacent methylene bridge facilitating its attack on the terminal carbonyl group, a reaction catalyzed by DnrD.
Dnr D, the fourth ring cyclase (AAME cyclase), catalyzes an intramolecular aldol addition reaction. No cofactors are required and neither aromatization nor dehydration occurs. A simple base catalyzed mechanism is proposed. The product is aklaviketone (8).
Dnr H, aklaviketone reductase, stereospecifically reduces the 17-keto group of the new fourth ring to a 17-OH group to give aklavinone (9). This introduces a new chiral center and NADPH is a cofactor.
Dnr F, aklavinone-11-hydroxylase, is a FAD monooxygenase that uses NADPH to activate molecular oxygen for subsequent hydroxylation. ε-rhodomycinone (10) is the product.
Conversion to doxorubicin
Dnr S, daunosamine glycosyltransferase, catalyzes the addition of the TDP-activated glycoside L-daunosamine-TDP to ε-rhodomycinone to give rhodomycin D (Figure 2). The release of TDP drives the reaction forward. The enzyme has sequence similarity to glycosyltransferases of the other "unusual sugars" added to Type II PKS aromatic products. Dnr P, rhodomycin D methylesterase, removes the methyl group added previously by DnrC. It initially served to activate the adjacent methylene bridge, and after that it prevented its carboxyl group from leaving the C-10 carbon (see Fig 2). Had the carboxyl group not been esterified prior to the fourth ring cyclization, its departure as CO2 would have been favored by the formation of a bicyclic aromatic system. After C-7 reduction and glycosylation, the C-8 methylene bridge is no longer activated for deprotonation, thereby making aromatization less likely. Note that the non-isolable intermediate, with numbering, is the 3rd molecule in Figure 2. The numbering system is very odd and a vestige of early nomenclature. The decarboxylation of the intermediate occurs spontaneously, or by the influence of Dnr P, giving 13-deoxycarminomycin.
A crystal structure, with bound products, of aclacinomycin methylesterase, an enzyme with 53% sequence homology to Dnr P, from Streptomyces purpurascens, has been solved. It is able to catalyze the same reaction and uses a classic Ser-His-Asp catalytic triad, with serine acting as the nucleophile and Gly-Met providing stabilization of the transition state by forming an "oxyanion hole". The active site amino acids are almost entirely the same as in Dnr P, and the mechanism is almost certainly identical.
Although Dox A is shown next in the biosynthetic scheme (Figure 2), Dnr K, carminomycin 4-O-methyltransferase is able to O-methylate the 4-hydroxyl group of any of the glycosides in Figure 2. A 2.35 Å resolution crystal structure of the enzyme with bound products has recently been solved. The orientation of the products is consistent with a SN2 mechanism of methyl transfer. Site-directed mutagenesis of the potential acid/base residues in the active site did not affect catalysis leading to the conclusion that Dnr K most likely acts as an entropic enzyme in that rate enhancement is mainly due to orientational and proximity effects. This is in contrast to most other O-methyltransferases where acid/base catalysis has been demonstrated to be an essential contribution to rate enhancement.
Dox A catalyzes three successive oxidations in Streptomyces peucetius. Deficient DXR production is not primarily due to low levels of or malfunctioning Dox A, but because many products are diverted away from the pathway shown in Figure 2. Each of the glycosides is a potential target of shunt enzymes, not shown, some of which are products of the dnr gene cluster. Mutations of these enzymes do significantly boost DXR production. In addition, Dox A has a very low kcat/Km value for C-14 oxidation (130/M) compared to C-13 oxidation (up to 22,000/M for some substrates). Genetic manipulation to overexpress Dox A has also increased yields, particularly if the genes for the shunt enzymes are inactivated simultaneously.
Dox A is a cytochrome P-450 monooxygenase that has broad substrate specificity, catalyzing anthracycline hydroxylation at C-13 and C-14 ( Figure 2). The enzyme has an absolute requirement for molecular oxygen and NADPH. Initially, two successive oxidations are done at C-13, followed by a single oxidation of C-14 that converts daunorubicin to doxorubicin.
References
Topoisomerase inhibitors
Biosynthesis | Biosynthesis of doxorubicin | [
"Chemistry"
] | 3,835 | [
"Biosynthesis",
"Metabolism",
"Chemical synthesis"
] |
12,805,720 | https://en.wikipedia.org/wiki/CMN-GOMS | CMN-GOMS stands for Card, Moran and Newell GOMS. CMN-GOMS is the original version of the GOMS technique in human–computer interaction. It takes its name from its creators, Stuart Card, Thomas P. Moran and Allen Newell, who first described GOMS in their 1983 book The Psychology of Human Computer Interaction.
Overview
This technique requires a strict goal-method-operation-selection rules structure. The structure is rigid enough that the evaluator represents the tasks in a pseudo-code format (no formal syntax is dictated). It also provides a guide for how to formulate selection rules. This method can also be used to estimate the load the task places on the user. For instance, examining how many levels down the task-tree a goal branch sits can be used to estimate the memory demand the task places on the user, who must remember information about all of the levels above the current branch.
This technique is more flexible than the Keystroke-Level Model (KLM) because the pseudo-code is in a general form. That is, it can be executed for different scenarios by going down different branches, while KLM's procedure is a simple list that has to be recreated for each different task.
Example of a simple goal
Deleting a file in Windows Explorer (NOTE: not all goals are fully expanded in this example).
GOAL: DELETE-FILE
. GOAL: SELECT-FILE
. . [select: GOAL: KEYBOARD-TAB-METHOD
. . GOAL: MOUSE-METHOD]
. . VERIFY-SELECTION
. GOAL: ISSUE-DELETE-COMMAND
. . [select*: GOAL: KEYBOARD-DELETE-METHOD
. . . PRESS-DELETE
. . . GOAL: CONFIRM-DELETE
. . GOAL: DROP-DOWN-MENU-METHOD
. . . MOVE-MOUSE-OVER-FILE-ICON
. . . CLICK-RIGHT-MOUSE-BUTTON
. . . LOCATE-DELETE-COMMAND
. . . MOVE-MOUSE-TO-DELETE-COMMAND
. . . CLICK-LEFT-MOUSE-BUTTON
. . . GOAL: CONFIRM-DELETE
. . GOAL: DRAG-AND-DROP-METHOD
. . . MOVE-MOUSE-OVER-FILE-ICON
. . . PRESS-LEFT-MOUSE-BUTTON
. . . LOCATE-RECYCLING-BIN
. . . MOVE-MOUSE-TO-RECYCLING-BIN
. . . RELEASE-LEFT-MOUSE-BUTTON]
*Selection rule for GOAL: ISSUE-DELETE-COMMAND
If hands are on keyboard, use KEYBOARD-DELETE-METHOD,
else if Recycle bin is visible, use DRAG-AND-DROP-METHOD,
else use DROP-DOWN-MENU-METHOD
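For illustration, the selection rule above can also be expressed directly as executable logic. The sketch below is not part of the GOMS literature; the function name and boolean flags are assumptions made only for this example.

def select_delete_method(hands_on_keyboard: bool, recycle_bin_visible: bool) -> str:
    # Encodes the selection rule for GOAL: ISSUE-DELETE-COMMAND.
    if hands_on_keyboard:
        return "KEYBOARD-DELETE-METHOD"
    if recycle_bin_visible:
        return "DRAG-AND-DROP-METHOD"
    return "DROP-DOWN-MENU-METHOD"

# Example: hands off the keyboard, Recycle Bin visible.
print(select_delete_method(False, True))  # -> DRAG-AND-DROP-METHOD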
See also
Human information processor model
CPM-GOMS
KLM-GOMS
NGOMSL
References
Notations
This article incorporates text from Dr. G. Abowd: GOMS Analysis Techniques - Final Essay, which has been released under the GFDL by its author.
Footnotes
Human–computer interaction | CMN-GOMS | [
"Engineering"
] | 610 | [
"Human–computer interaction",
"Human–machine interaction"
] |
12,809,051 | https://en.wikipedia.org/wiki/UNIFAC | In statistical thermodynamics, the UNIFAC method (UNIQUAC Functional-group Activity Coefficients) is a semi-empirical system for the prediction of non-electrolyte activity in non-ideal mixtures. UNIFAC uses the functional groups present on the molecules that make up the liquid mixture to calculate activity coefficients. By using interactions for each of the functional groups present on the molecules, as well as some binary interaction coefficients, the activity coefficient of each component of the solution can be calculated. This can be used to describe liquid-phase equilibria, which is useful in many thermodynamic calculations, such as chemical reactor design and distillation calculations.
The UNIFAC model was first published in 1975 by Fredenslund, Jones and John Prausnitz, a group of chemical engineering researchers from the University of California. Subsequently they and other authors have published a wide range of UNIFAC papers extending the capabilities of the model; this has been done through the development of new UNIFAC model parameters and the revision of existing ones. UNIFAC is an attempt by these researchers to provide a flexible liquid equilibria model for wider use in chemistry and the chemical and process engineering disciplines.
Introduction
A particular problem in the area of liquid-state thermodynamics is the sourcing of reliable thermodynamic constants. These constants are necessary for the successful prediction of the free energy state of the system; without this information it is impossible to model the equilibrium phases of the system.
Obtaining this free energy data is not a trivial problem, and requires careful experiments, such as calorimetry, to successfully measure the energy of the system. Even when this work is performed it is infeasible to attempt to conduct this work for every single possible class of chemicals, and the binary, or higher, mixtures thereof. To alleviate this problem, free energy prediction models, such as UNIFAC, are employed to predict the system's energy based on a few previously measured constants.
It is possible to calculate some of these parameters using ab initio methods like COSMO-RS, but results should be treated with caution, because ab initio predictions can be off. Similarly, UNIFAC can be off, and for both methods it is advisable to validate the energies obtained from these calculations experimentally.
UNIFAC correlation
The UNIFAC correlation attempts to break down the problem of predicting interactions between molecules by describing molecular interactions based upon the functional groups attached to the molecules. This is done in order to reduce the sheer number of binary interactions that would need to be measured to predict the state of the system.
Chemical activity
The activity coefficient of the components in a system is a correction factor that accounts for deviations of real systems from that of an ideal solution, and it can either be measured via experiment or estimated from chemical models (such as UNIFAC). By adding a correction factor known as the activity (a_i, the activity of the i-th component) to the liquid-phase fraction of a liquid mixture, some of the effects of the real solution can be accounted for. The activity of a real chemical is a function of the thermodynamic state of the system, i.e. temperature and pressure.
Equipped with the activity coefficients and a knowledge of the constituents and their relative amounts, phenomena such as phase separation and vapour-liquid equilibria can be calculated. UNIFAC attempts to be a general model for the successful prediction of activity coefficients.
Model parameters
The UNIFAC model splits the activity coefficient for each species in the system into two components: a combinatorial component γ_i^C and a residual component γ_i^R. For the i-th molecule, the activity coefficient is broken down as per the following equation:
ln γ_i = ln γ_i^C + ln γ_i^R
In the UNIFAC model, there are three main types of parameter required to determine the activity coefficient for each molecule in the system. First there are the group surface-area and volume contributions, Q and R, obtained from the van der Waals group surface areas and volumes. These parameters depend purely upon the individual functional groups on the host molecules. Finally there is the binary interaction parameter a_mn, which is related to the interaction energy of molecular pairs (equation in the "Residual" section). These parameters must be obtained through experiments, via data fitting, or from molecular simulation.
Combinatorial
The combinatorial component of the activity coefficient is made up of several terms in its equation (below), and is the same as for the UNIQUAC model.
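The combinatorial expression itself is usually written as follows in the UNIFAC/UNIQUAC literature; the exact typography varies between sources, but the content below is the commonly quoted standard form.

ln γ_i^C = ln(Φ_i / x_i) + (z/2) q_i ln(θ_i / Φ_i) + L_i − (Φ_i / x_i) Σ_j x_j L_j,   with   L_i = (z/2)(r_i − q_i) − (r_i − 1)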
where Φ_i and θ_i are the molar weighted segment and area fractional components for the i-th molecule in the total system, defined by Φ_i = r_i x_i / Σ_j r_j x_j and θ_i = q_i x_i / Σ_j q_j x_j; L_i is a compound parameter of z, q_i and r_i. z is the coordination number of the system, but the model is found to be relatively insensitive to its value, and it is frequently quoted as a constant having the value of 10.
r_i and q_i are calculated from the group volume and surface-area contributions R_k and Q_k (usually obtained via tabulated values), as well as the number of occurrences ν_k^(i) of each functional group on the molecule, such that:
r_i = Σ_k ν_k^(i) R_k   and   q_i = Σ_k ν_k^(i) Q_k
Residual
The residual component of the activity is due to interactions between the groups present in the system, with the original paper referring to the concept of a "solution-of-groups". The residual component of the activity for the i-th molecule, containing its unique functional groups, can be written as follows:
ln γ_i^R = Σ_k ν_k^(i) [ ln Γ_k − ln Γ_k^(i) ]
where Γ_k^(i) is the activity of an isolated group k in a solution consisting only of molecules of type i. The formulation of the residual activity ensures that, in the limiting case of a single molecule in a pure-component solution, the activity coefficient is equal to 1; by the definition of Γ_k^(i), one finds that ln γ_i^R will then be zero. The following formula is used for both Γ_k and Γ_k^(i):
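The standard form of this group activity coefficient in the UNIFAC literature (notation varies slightly between sources) is:

ln Γ_k = Q_k [ 1 − ln( Σ_m Θ_m Ψ_mk ) − Σ_m ( Θ_m Ψ_km / Σ_n Θ_n Ψ_nm ) ]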
In this formula Θ_m is the area fraction of group m, summed over all the different groups, and is somewhat similar in form to, but not the same as, θ_i. Ψ_mn is the group interaction parameter and is a measure of the interaction energy between groups; it is calculated using an Arrhenius-type equation (albeit with a pseudo pre-exponential constant of value 1), Ψ_mn = exp(−(U_mn − U_nn)/(R T)). X_m is the group mole fraction, which is the number of groups m in the solution divided by the total number of groups.
U_mn is the energy of interaction between groups m and n, with SI units of joules per mole, and R is the ideal gas constant. Note that it is not the case that U_mn = U_nm, giving rise to a non-reflexive parameter. The equation for the group interaction parameter can be simplified to the following:
Ψ_mn = exp(−a_mn / T)
Thus a_mn still represents the net energy of interaction between groups m and n, but has the somewhat unusual units of absolute temperature (SI kelvins). These interaction energy values are obtained from experimental data, and are usually tabulated.
See also
Chemical equilibrium
Chemical thermodynamics
Fugacity
UNIQUAC – UNIversal QUasi-chemical Activity Coefficients
UNIFAC Consortium
PSRK – Predictive Soave–Redlich–Kwong
MOSCED – Modified Separation of Cohesive Energy Density Model (Estimation of activity coefficients at infinite dilution)
References
Further reading
Aage Fredenslund, Jürgen Gmehling and Peter Rasmussen, Vapor-liquid equilibria using UNIFAC : a group contribution method, Elsevier Scientific, New York, 1979
External links
UNIFAC structural groups and parameters
AIOMFAC online-model UNIFAC-based group-contribution model for calculation of activity coefficients in organic–inorganic mixtures.
Thermodynamic models | UNIFAC | [
"Physics",
"Chemistry"
] | 1,490 | [
"Thermodynamic models",
"Thermodynamics"
] |
10,531,718 | https://en.wikipedia.org/wiki/Graph%20cuts%20in%20computer%20vision | As applied in the field of computer vision, graph cut optimization can be employed to efficiently solve a wide variety of low-level computer vision problems (early vision), such as image smoothing, the stereo correspondence problem, image segmentation, object co-segmentation, and many other computer vision problems that can be formulated in terms of energy minimization.
Many of these energy minimization problems can be approximated by solving a maximum flow problem in a graph (and thus, by the max-flow min-cut theorem, define a minimal cut of the graph).
Under most formulations of such problems in computer vision, the minimum energy solution corresponds to the maximum a posteriori estimate of a solution.
Although many computer vision algorithms involve cutting a graph (e.g., normalized cuts), the term "graph cuts" is applied specifically to those models which employ a max-flow/min-cut optimization (other graph cutting algorithms may be considered as graph partitioning algorithms).
"Binary" problems (such as denoising a binary image) can be solved exactly using this approach; problems where pixels can be labeled with more than two different labels (such as stereo correspondence, or denoising of a grayscale image) cannot be solved exactly, but solutions produced are usually near the global optimum.
History
The foundational theory of graph cuts was first applied in computer vision in the seminal paper by Greig, Porteous and Seheult of Durham University. Allan Seheult and Bruce Porteous were members of Durham's lauded statistics group of the time, led by Julian Besag and Peter Green, with the optimisation expert Margaret Greig notable as the first ever female member of staff of the Durham Mathematical Sciences Department.
In the Bayesian statistical context of smoothing noisy (or corrupted) images, they showed how the maximum a posteriori estimate of a binary image can be obtained exactly by maximizing the flow through an associated image network, involving the introduction of a source and sink. The problem was therefore shown to be efficiently solvable. Prior to this result, approximate techniques such as simulated annealing (as proposed by the Geman brothers), or iterated conditional modes (a type of greedy algorithm suggested by Julian Besag) were used to solve such image smoothing problems.
Although the general k-colour problem is NP-hard for k > 2, the approach of Greig, Porteous and Seheult has turned out to have wide applicability in general computer vision problems. For general problems, Greig, Porteous and Seheult's approach is often applied iteratively to sequences of related binary problems, usually yielding near-optimal solutions.
In 2011, C. Couprie et al. proposed a general image segmentation framework, called the "Power Watershed", that minimized a real-valued indicator function from [0,1] over a graph, constrained by user seeds (or unary terms) set to 0 or 1, in which the minimization of the indicator function over the graph is optimized with respect to an exponent. Particular choices of this exponent recover graph cuts, shortest paths, the random walker algorithm, and the watershed algorithm as special cases. In this way, the Power Watershed may be viewed as a generalization of graph cuts that provides a straightforward connection with other energy optimization segmentation/clustering algorithms.
Binary segmentation of images
Notation
Image:
Output: Segmentation (also called opacity) (soft segmentation). For hard segmentation
Energy function: E(x, S, C, λ) = E_color + E_coherence, where C is the color parameter and λ is the coherence parameter.
Optimization: The segmentation can be estimated as a global minimum over S: S* = arg min_S E(x, S, C, λ).
Existing methods
Standard Graph cuts: optimize energy function over the segmentation (unknown S value).
Iterated Graph cuts:
First step optimizes over the color parameters using K-means.
Second step performs the usual graph cuts algorithm.
These 2 steps are repeated recursively until convergence.
Dynamic graph cuts: Allow the algorithm to be re-run much faster after modifying the problem (e.g. after new seeds have been added by a user).
Energy function
where the energy is composed of two different terms, a color (regional) term E_color and a coherence (boundary) term E_coherence; a common generic form is shown below:
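The following display is a generic formulation, given here for orientation rather than taken verbatim from the specific references cited in this article:

E(S) = Σ_p E_color(S_p) + λ Σ_(p,q)∈N E_coherence(S_p, S_q)

where the first sum runs over pixels p, the second over neighboring pixel pairs (p, q) in the neighborhood system N, and λ trades off boundary coherence against the color likelihood.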
Likelihood / Color model / Regional term
E_color — a unary term describing the likelihood of each color.
This term can be modeled using different local (e.g. texons) or global (e.g. histograms, GMMs, AdaBoost likelihood) approaches that are described below.
Histogram
We use intensities of pixels marked as seeds to get histograms for object (foreground) and background intensity distributions: P(I|O) and P(I|B).
Then, we use these histograms to set the regional penalties as negative log-likelihoods.
GMM (Gaussian mixture model)
We usually use two distributions: one for background modelling and another for foreground pixels.
Use a Gaussian mixture model (with 5–8 components) to model those 2 distributions.
Goal: Try to pull apart those two distributions.
Texon
A texon (or texton) is a set of pixels that has certain characteristics and is repeated in an image.
Steps:
Determine a good natural scale for the texture elements.
Compute non-parametric statistics of the model-interior texons, either on intensity or on Gabor filter responses.
Examples:
Deformable-model based Textured Object Segmentation
Contour and Texture Analysis for Image Segmentation
Prior / Coherence model / Boundary term
E_coherence — a binary (pairwise) term describing the coherence between neighboring pixels.
In practice, pixels are defined as neighbors if they are adjacent either horizontally, vertically or diagonally (4 way connectivity or 8 way connectivity for 2D images).
Costs can be based on local intensity gradient, Laplacian zero-crossing, gradient direction, color mixture model,...
Different energy functions have been defined:
Standard Markov random field: Associate a penalty to disagreeing pixels by evaluating the difference between their segmentation label (crude measure of the length of the boundaries). See Boykov and Kolmogorov ICCV 2003
Conditional random field: If the color is very different, it might be a good place to put a boundary. See Lafferty et al. 2001; Kumar and Hebert 2003
Criticism
Graph cuts methods have become popular alternatives to the level set-based approaches for optimizing the location of a contour (see for an extensive comparison). However, graph cut approaches have been criticized in the literature for several issues:
Metrication artifacts: When an image is represented by a 4-connected lattice, graph cuts methods can exhibit unwanted "blockiness" artifacts. Various methods have been proposed for addressing this issue, such as using additional edges or by formulating the max-flow problem in continuous space.
Shrinking bias: Since graph cuts finds a minimum cut, the algorithm can be biased toward producing a small contour. For example, the algorithm is not well-suited for segmentation of thin objects like blood vessels (see for a proposed fix).
Multiple labels: Graph cuts is only able to find a global optimum for binary labeling (i.e., two labels) problems, such as foreground/background image segmentation. Extensions have been proposed that can find approximate solutions for multilabel graph cuts problems.
Memory: the memory usage of graph cuts increases quickly as the image size increases. As an illustration, the Boykov–Kolmogorov max-flow algorithm v2.2 allocates an amount of memory proportional to the numbers of nodes and edges in the graph. Nevertheless, some amount of work has been done recently in this direction for reducing the graphs before the maximum-flow computation.
Algorithm
Minimization is done using a standard minimum cut algorithm.
Due to the max-flow min-cut theorem we can solve energy minimization by maximizing the flow over the network. The max-flow problem consists of a directed graph with edges labeled with capacities, and there are two distinct nodes: the source and the sink. Intuitively, it is easy to see that the maximum flow is determined by the bottleneck.
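As a concrete sketch of this idea, the snippet below uses the third-party PyMaxflow library (assumed to be installed; the grid-helper calls follow the library's own tutorial) to denoise a tiny binary image by solving a max-flow problem. The image values, edge weight and terminal capacities are arbitrary choices for the example, not values from the references in this article.

import numpy as np
import maxflow  # PyMaxflow, assumed installed (pip install PyMaxflow)

img = np.array([[0, 0, 1, 1],
                [0, 1, 1, 1],
                [0, 0, 1, 0],
                [0, 0, 0, 0]])  # toy noisy binary image

g = maxflow.Graph[int]()
nodes = g.add_grid_nodes(img.shape)     # one graph node per pixel
g.add_grid_edges(nodes, 2)              # pairwise (coherence) edges, 4-connectivity, weight 2
g.add_grid_tedges(nodes, img, 1 - img)  # unary (data) terms to source/sink

g.maxflow()                             # solve the min-cut / max-flow problem
segments = g.get_grid_segments(nodes)   # True where a pixel ends up on the sink side
denoised = np.int_(np.logical_not(segments))
print(denoised)                         # smoothed binary labeling of the input image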
Implementation (exact)
The Boykov-Kolmogorov algorithm is an efficient way to compute the max-flow for computer vision-related graphs.
Implementation (approximation)
The Sim Cut algorithm approximates the minimum graph cut. The algorithm implements a solution by simulation of an electrical network. This is the approach suggested by Cederbaum's maximum flow theorem. Acceleration of the algorithm is possible through parallel computing.
Software
http://pub.ist.ac.at/~vnk/software.html — An implementation of the maxflow algorithm described in "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Computer Vision" by Vladimir Kolmogorov
http://vision.csd.uwo.ca/code/ — some graph cut libraries and MATLAB wrappers
http://gridcut.com/ — fast multi-core max-flow/min-cut solver optimized for grid-like graphs
http://virtualscalpel.com/ — An implementation of the Sim Cut; an algorithm for computing an approximate solution of the minimum s-t cut in a massively parallel manner.
References
Bayesian statistics
Computer vision
Computational problems in graph theory
Image segmentation | Graph cuts in computer vision | [
"Mathematics",
"Engineering"
] | 1,925 | [
"Computational problems in graph theory",
"Packaging machinery",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Artificial intelligence engineering",
"Mathematical problems",
"Computer vision"
] |
10,533,506 | https://en.wikipedia.org/wiki/Water%20heat%20recycling | Water heat recycling (also known as drain water heat recovery, waste water heat recovery, greywater heat recovery, or sometimes shower water heat recovery) is the use of a heat exchanger to recover energy and reuse heat from drain water from various activities such as dishwashing, clothes washing and especially showers. The technology is used to reduce primary energy consumption for water heating.
How it works
The cold water that is put into a water heating device can be preheated using the reclaimed thermal energy from a shower so that the input water does not need as much energy to be heated before being used in a shower, dishwasher, or sink. The water entering a storage tank is usually close to 11 °C but by recovering the energy in the hot water from a bath or dishwasher, the temperature of the water entering the holding tank can be elevated to 25 °C, saving energy required to increase the temperature of a given amount of water by 14 °C. This water is then heated up a little further to 37 °C before leaving the tank and going to the average shower.
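As a rough back-of-the-envelope illustration of these figures (this worked example is not from the article, and the 60-litre shower volume is an assumption), the energy recovered by preheating incoming water from 11 °C to 25 °C can be estimated with Q = m·c·ΔT:

volume_litres = 60            # assumed shower volume
mass_kg = volume_litres       # about 1 kg per litre of water
specific_heat = 4.186         # kJ/(kg·K), specific heat of water
delta_t = 25 - 11             # °C gained by the incoming water

energy_saved_kj = mass_kg * specific_heat * delta_t
energy_saved_kwh = energy_saved_kj / 3600
print(round(energy_saved_kj), "kJ ≈", round(energy_saved_kwh, 2), "kWh per shower")
# -> 3516 kJ ≈ 0.98 kWh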
When recycling water from a bath (100–150 litres) or shower (50–80 litres) the waste water temperature is circa 20–25 °C. An in-house greywater recycling tank holds 150–175 litres allowing for the majority of waste water to be stored. Utilizing a built in copper heat exchange with circulation pump the residual heat is recovered and transferred to the cold feed of a combi-boiler or hot-water cylinder, reducing the energy used by the existing central heating system to heat water.
Impact and cost
Heating water accounts for 18% of the average household utility bill. Standard units save up to 60% of the heat energy that is otherwise lost down the drain when using the shower.
Installing a water heat recycler reduces energy consumption and thus greenhouse gas emissions and the overall energy dependency of the household.
Typical retail price for a domestic drain water heat recovery unit ranges from around $400 to $1,000 Canadian. For a regular household, water heating is usually about 20% of overall energy demand. The energy savings can result in an average payback time for the initial investment of 2–10 years.
A 2-year independent study of waste water heat recovery systems installed into residential houses in the UK found savings of 380 kWh and 500 kWh per person per year.
Industrial scale and HVAC
A heat pump can be combined with municipal sewage lines to allow a large building's HVAC system recycle the winter heat or summer cool (compared to the outside air) of water flowing out of many homes and businesses.
The reverse is also possible: heat from air conditioning and industrial chillers can be used to pre-heat water.
Heat rejected by a chiller system for providing air-conditioning to larger buildings can be recovered by installing a heat-exchanger between the incoming domestic cold water, and condenser water return.
A conventional chilled water system rejects heat gathered by the condenser water loop from the refrigerant to a cooling tower.
By diverting a fraction of mass flow rate of condenser water away from the cooling tower, and circulating it through a heat-exchanger (usually a plate-and-frame configuration), incoming domestic cold water can be pre-heated before reaching the boiler. This reduces the required increase in temperature of the water before it can be supplied to the end user, and therefore lowering boiler fuel burn.
See also
References
Building
Energy conservation
Energy harvesting
Energy recovery
Heating, ventilation, and air conditioning
Heating
Low-energy building
Recycling
Residential heating | Water heat recycling | [
"Engineering"
] | 725 | [
"Construction",
"Building"
] |
10,538,780 | https://en.wikipedia.org/wiki/High-power%20impulse%20magnetron%20sputtering | High-power impulse magnetron sputtering (HIPIMS or HiPIMS, also known as high-power pulsed magnetron sputtering, HPPMS) is a method for physical vapor deposition of thin films which is based on magnetron sputter deposition. HIPIMS utilises extremely high power densities of the order of kW⋅cm−2 in short pulses (impulses) of tens of microseconds at low duty cycle (on/off time ratio) of < 10%. Distinguishing features of HIPIMS are a high degree of ionisation of the sputtered metal and a high rate of molecular gas dissociation which result in high density of deposited films. The ionization and dissociation degree increase according to the peak cathode power. The limit is determined by the transition of the discharge from glow to arc phase. The peak power and the duty cycle are selected so as to maintain an average cathode power similar to conventional sputtering (1–10 W⋅cm−2).
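The relationship between peak power, duty cycle and average power is simple multiplication; the numbers in the sketch below are assumptions, chosen only to be consistent with the orders of magnitude quoted in this paragraph.

peak_power_density = 2000.0   # W/cm^2 during the pulse (order of kW/cm^2)
pulse_on = 50e-6              # s, pulse length (tens of microseconds)
period = 10e-3                # s, pulse repetition period

duty_cycle = pulse_on / period                       # 0.005, i.e. 0.5 %
average_power_density = peak_power_density * duty_cycle
print(f"duty cycle = {duty_cycle:.1%}, average = {average_power_density:.0f} W/cm^2")
# -> duty cycle = 0.5%, average = 10 W/cm^2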
HIPIMS is used for:
adhesion enhancing pretreatment of the substrate prior to coating deposition (substrate etching)
deposition of thin films with high microstructure density
HIPIMS plasma discharge
HIPIMS plasma is generated by a glow discharge in which the discharge current density can reach several A⋅cm−2, whilst the discharge voltage is maintained at several hundred volts. The discharge is homogeneously distributed across the surface of the cathode (target); however, above a certain threshold of current density it becomes concentrated in narrow ionization zones that move along a path known as the target erosion "racetrack".
HIPIMS generates a high density plasma of the order of 1013 ions⋅cm−3 containing high fractions of target metal ions. The main ionisation mechanism is electron impact, which is balanced by charge exchange, diffusion, and plasma ejection in flares. The ionisation rates depend on the plasma density.
The ionisation degree of the metal vapour is a strong function of the peak current density of the discharge. At high current densities, sputtered ions with charge 2+ and higher – up to 5+ for V – can be generated. The appearance of target ions with charge states higher than 1+ is responsible for a potential secondary electron emission process that has a higher emission coefficient than the kinetic secondary emission found in conventional glow discharges. The establishment of a potential secondary electron emission may enhance the current of the discharge. HIPIMS is typically operated in short pulse (impulse) mode with a low duty cycle in order to avoid overheating of the target and other system components. In every pulse the discharge goes through several stages:
electrical breakdown
gas plasma
metal plasma
steady state, which may be reached if the metal plasma is dense enough to effectively dominate over the gas plasma.
The negative voltage (bias voltage) applied to the substrate influences the energy and direction of motion of the positively charged particles that hit the substrate. The on-off cycle has a period on the order of milliseconds. Because the duty cycle is small (< 10%), only low average cathode power is the result (1–10 kW). The target can cool down during the "off time", thereby maintaining process stability.
The discharge that maintains HIPIMS is a high-current glow discharge, which is transient or quasistationary. Each pulse remains a glow up to a critical duration after which it transits to an arc discharge. If pulse length is kept below the critical duration, the discharge operates in a stable fashion indefinitely.
Fast-camera imaging, first reported in 2008 and since repeated independently with better precision, demonstrated that most ionization processes occur in spatially very limited ionization zones. The drift velocity of these zones was measured to be of the order of 10,000 m/s, which is only about 10% of the electron drift velocity.
Substrate pretreatement by HIPIMS
Substrate pretreatment in a plasma environment is required prior to deposition of thin films on mechanical components such as automotive parts, metal cutting tools and decorative fittings. The substrates are immersed in a plasma and biased to a high voltage of a few hundred volts. This causes high energy ion bombardment that sputters away any contamination. In cases when the plasma contains metal ions, they can be implanted into the substrate to a depth of a few nm. HIPIMS is used to generate a plasma with a high density and high proportion of metal ions. When looking at the film-substrate interface in cross-section, one can see a clean interface. Epitaxy or atomic registry is typical between the crystal of a nitride film and the crystal of a metal substrate when HIPIMS is used for pretreatment. HIPIMS has been used for the pretreatment of steel substrates for the first time in February 2001 by A.P. Ehiasarian.
Substrate biasing during pretreatment uses high voltages, which require purpose-designed arc detection and suppression technology. Dedicated DC substrate biasing units provide the most versatile option as they maximize substrate etch rates, minimise substrate damage, and can operate in systems with multiple cathodes. An alternative is the use of two HIPIMS power supplies synchronised in a master–slave configuration: one to establish the discharge and one to produce a pulsed substrate bias.
Thin-film deposition by HIPIMS
Thin films deposited by HIPIMS at discharge current density > 0.5 A⋅cm−2 have a dense columnar structure with no voids.
The deposition of copper films by HIPIMS was reported for the first time by V. Kouznetsov, for the application of filling 1 μm vias with an aspect ratio of 1:1.2.
Transition metal nitride (CrN) thin films were deposited by HIPIMS for the first time in February 2001 by A.P. Ehiasarian. The first thorough investigation of films deposited by HIPIMS by TEM demonstrated a dense microstructure, free of large scale defects. The films had a high hardness, good corrosion resistance and low sliding wear coefficient. The commercialisation of HIPIMS hardware that followed made the technology accessible to the wider scientific community and triggered developments in a number of areas.
Reactive HiPIMS
Similarly to what is seen in conventional reactive sputter deposition, HiPIMS has also been used to obtain oxide- or nitride-based films on several substrates, as listed below. However, as is characteristic of these methods, such depositions show significant hysteresis and need to be carefully examined to identify the optimal operating points. Significant overviews of reactive HiPIMS were published by André Anders and by Kubart et al.
Deposition Examples
The following materials have, among others, been deposited successfully by HIPIMS:
Corrosion Resistant: CrN/NbN nanoscale multilayer
Oxidation Resistant: CrAlYN/CrN nanoscale multilayer, Ti-Al-Si-N, Cr-Al-Si-N nanocomposite
Optical: Ag, TiO2, ZnO, InSnO, ZrO2, CuInGaSe
MAX phases: TiSiC
Microelectronics: Cu, Ti, TiN, Ta, TaN
Hard Coatings: carbon nitride CNx, Ti–C nanocomposite
Hydrophobic: HfO2
Industrial application
HIPIMS has been successfully applied for the deposition of thin films in industry, particularly on cutting tools. The first HIPIMS coating units appeared on the market in 2006.
The gold version of the Apple iPhone 12 Pro uses this process on the structural stainless steel band that also serves as the device's antenna system.
Advantages
The main advantages of HIPIMS coatings include a denser coating morphology and an increased ratio of hardness to Young's modulus compared to conventional PVD coatings. Whereas comparable conventional nano-structured coatings have a hardness of 25 GPa and a Young's modulus of 460 GPa, the hardness of the new HIPIMS coating is higher than 30 GPa with a Young's modulus of 368 GPa. The ratio between hardness and Young's modulus is a measure of the toughness of the coating; the desirable condition is high hardness with a relatively small Young's modulus, such as can be found in HIPIMS coatings. Recently, innovative uses of HIPIMS-coated surfaces for biomedical applications were reported by Rtimi et al.
References
Further reading
External links
https://www.cemecon.de/us-en/coating-plants/cc-800-hipims
http://www.advanced-energy.com/en/SOLVIX.html
http://materials.shu.ac.uk/ncpvd
http://www.ifm.liu.se/plasma/reshppms.html
http://www.melec.de
http://www.ionautics.com/
http://www.starfireindustries.com/impulsetrade-pulsed-power-module.html
https://www.apellaser.ro/en/product/high-power-pulsed-generators-for-reliable-magnetron-sputtering/
Coatings
Materials science
Physical vapor deposition techniques
Plasma technology and applications | High-power impulse magnetron sputtering | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,926 | [
"Applied and interdisciplinary physics",
"Plasma physics",
"Plasma technology and applications",
"Coatings",
"Materials science",
"nan"
] |
4,732,308 | https://en.wikipedia.org/wiki/Ligand%20isomerism | In coordination chemistry, ligand isomerism is a type of structural isomerism in coordination complexes which arises from the presence of ligands that can adopt different isomeric forms. For example, two otherwise identical complexes, one containing 1,2-diaminopropane and the other 1,3-diaminopropane, would be ligand isomers, since each contains a different isomer of the same ligand.
References
Chemical bonding | Ligand isomerism | [
"Physics",
"Chemistry",
"Materials_science"
] | 75 | [
"Chemical bonding",
"Physical chemistry stubs",
"Condensed matter physics",
"nan"
] |
4,733,324 | https://en.wikipedia.org/wiki/Xenon%20difluoride | Xenon difluoride is a powerful fluorinating agent with the chemical formula , and one of the most stable xenon compounds. Like most covalent inorganic fluorides it is moisture-sensitive. It decomposes on contact with water vapor, but is otherwise stable in storage. Xenon difluoride is a dense, colourless crystalline solid.
It has a nauseating odour and low vapor pressure.
Structure
Xenon difluoride is a linear molecule with an Xe–F bond length of 197.7 pm in the vapor phase, and 200 pm in the solid phase. The packing arrangement in solid XeF2 shows that the fluorine atoms of neighbouring molecules avoid the equatorial region of each molecule. This agrees with the prediction of VSEPR theory, which predicts that there are 3 pairs of non-bonding electrons around the equatorial region of the xenon atom.
At high pressures, novel, non-molecular forms of xenon difluoride can be obtained. Under a pressure of ~50 GPa, transforms into a semiconductor consisting of units linked in a two-dimensional structure, like graphite. At even higher pressures, above 70 GPa, it becomes metallic, forming a three-dimensional structure containing units. However, a recent theoretical study has cast doubt on these experimental results.
The Xe–F bonds are weak. Nevertheless, XeF2 is much more robust than KrF2, whose total bond energy is far smaller.
Chemistry
Synthesis
Synthesis proceeds by the simple reaction:
Xe + F2 → XeF2
The reaction needs heat, irradiation, or an electrical discharge. The product is a solid. It is purified by fractional distillation or selective condensation using a vacuum line.
The first published report of XeF2 was in October 1962 by Chernick, et al. However, though published later, XeF2 was probably first created by Rudolf Hoppe at the University of Münster, Germany, in early 1962, by reacting fluorine and xenon gas mixtures in an electrical discharge. Shortly after these reports, Weeks, Chernick, and Matheson of Argonne National Laboratory reported the synthesis of XeF2 using an all-nickel system with transparent alumina windows, in which equal parts xenon and fluorine gases react at low pressure upon irradiation by an ultraviolet source to give XeF2. Williamson reported that the reaction works equally well at atmospheric pressure in a dry Pyrex glass bulb using sunlight as a source. It was noted that the synthesis worked even on cloudy days.
In the previous syntheses the fluorine gas reactant had been purified to remove hydrogen fluoride. Šmalc and Lutar found that if this step is skipped the reaction rate proceeds at four times the original rate.
In 1965, it was also synthesized by reacting xenon gas with dioxygen difluoride.
Solubility
XeF2 is soluble in solvents such as anhydrous hydrogen fluoride and acetonitrile, without reduction or oxidation. Solubility in hydrogen fluoride is high, at 167 g per 100 g HF at 29.95 °C.
Derived xenon compounds
Other xenon compounds may be derived from xenon difluoride. The unstable organoxenon compound Xe(CF3)2 can be made by irradiating hexafluoroethane to generate CF3 radicals and passing the gas over XeF2. The resulting waxy white solid decomposes completely within 4 hours at room temperature.
The XeF+ cation is formed by combining xenon difluoride with a strong fluoride acceptor, such as an excess of liquid antimony pentafluoride (SbF5):
XeF2 + SbF5 → XeF+ + SbF6−
Adding xenon gas to this pale yellow solution at a pressure of 2–3 atmospheres produces a green solution containing the paramagnetic Xe2+ ion, which contains a Xe−Xe bond ("apf" denotes solution in liquid SbF5):
3 Xe(g) + XeF+(apf) + SbF5(l) ⇌ 2 Xe2+(apf) + SbF6−(apf)
This reaction is reversible; removing xenon gas from the solution causes the Xe2+ ion to revert to xenon gas and XeF+, and the color of the solution returns to a pale yellow.
In the presence of liquid HF, dark green crystals of Xe2Sb4F21 can be precipitated from the green solution at −30 °C:
Xe2+(apf) + 4 SbF6−(apf) → Xe2Sb4F21(s) + 3 F−(apf)
X-ray crystallography indicates that the Xe–Xe bond length in this compound is 309 pm, indicating a very weak bond. The Xe2+ ion is isoelectronic with the I2− ion, which is also dark green.
Coordination chemistry
Bonding in the XeF2 molecule is adequately described by the three-center four-electron bond model.
XeF2 can act as a ligand in coordination complexes of metals. For example, in HF solution:
Mg(AsF6)2 + 4 XeF2 → [Mg(XeF2)4](AsF6)2
Crystallographic analysis shows that the magnesium atom is coordinated to 6 fluorine atoms. Four of the fluorine atoms are attributed to the four xenon difluoride ligands while the other two belong to a pair of cis-AsF6− ligands.
A similar reaction is:
Mg(AsF6)2 + 2 XeF2 → [Mg(XeF2)2](AsF6)2
In the crystal structure of this product the magnesium atom is octahedrally coordinated, with the XeF2 ligands axial and the AsF6− ligands equatorial.
Many such reactions with products of the form [Mx(XeF2)n](AF6)x have been observed, where M can be calcium, strontium, barium, lead, silver, lanthanum, or neodymium and A can be arsenic, antimony or phosphorus. Some of these compounds feature extraordinarily high coordination numbers at the metal center.
In 2004, results of synthesis of a solvate where part of cationic centers were coordinated solely by XeF2 fluorine atoms were published. Reaction can be written as:
2 Ca(AsF6)2 + 9 XeF2 → Ca2(XeF2)9(AsF6)4.
This reaction requires a large excess of xenon difluoride. The structure of the salt is such that half of the Ca2+ ions are coordinated by fluorine atoms from xenon difluoride, while the other Ca2+ ions are coordinated by fluorine atoms from both XeF2 and AsF6−.
Applications
As a fluorinating agent
Xenon difluoride is a strong fluorinating and oxidizing agent. With fluoride ion acceptors, it forms XeF+ and Xe2F3+ species, which are even more powerful fluorinators.
Among the fluorination reactions that xenon difluoride undergoes are:
Oxidative fluorination:
Ph3TeF + XeF2 → Ph3TeF3 + Xe
Reductive fluorination:
2 CrO2F2 + XeF2 → 2 CrOF3 + Xe + O2
Aromatic fluorination:
Alkene fluorination:
Radical fluorination in radical decarboxylative fluorination reactions, in Hunsdiecker-type reactions where xenon difluoride is used to generate the radical intermediate as well as the fluorine transfer source, and in generating aryl radicals from aryl silanes:
XeF2 is selective about which atom it fluorinates, making it a useful reagent for fluorinating heteroatoms without touching other substituents in organic compounds. For example, it fluorinates the arsenic atom in trimethylarsine, but leaves the methyl groups untouched:
(CH3)3As + XeF2 → (CH3)3AsF2 + Xe
XeF2 can similarly be used to prepare N-fluoroammonium salts, useful as fluorine transfer reagents in organic synthesis (e.g., Selectfluor), from the corresponding tertiary amine:
[R–(CH2CH2)3N:][BF4] + XeF2 + NaBF4 → [R–(CH2CH2)3N–F][BF4]2 + NaF + Xe
XeF2 will also oxidatively decarboxylate carboxylic acids to the corresponding fluoroalkanes:
RCOOH + XeF2 → RF + CO2 + Xe + HF
Silicon tetrafluoride has been found to act as a catalyst in fluorination by XeF2.
As an etchant
Xenon difluoride is also used as an isotropic gaseous etchant for silicon, particularly in the production of microelectromechanical systems (MEMS), as first demonstrated in 1995. Commercial systems use pulse etching with an expansion chamber.
Brazzle, Dokmeci, et al. describe this process:
The mechanism of the etch is as follows. First, the XeF2 adsorbs and dissociates to xenon and fluorine atoms on the surface of silicon. Fluorine is the main etchant in the silicon etching process. The reaction describing the silicon with XeF2 is
2 XeF2 + Si → 2 Xe + SiF4
XeF2 has a relatively high etch rate and does not require ion bombardment or external energy sources in order to etch silicon.
References
Further reading
External links
WebBook page for XeF2
Xenon(II) compounds
Fluorides
Nonmetal halides
Fluorinating agents | Xenon difluoride | [
"Chemistry"
] | 2,029 | [
"Fluorinating agents",
"Reagents for organic chemistry",
"Fluorides",
"Salts"
] |
4,733,414 | https://en.wikipedia.org/wiki/Xenon%20compounds | Xenon compounds are compounds containing the element xenon (Xe). After Neil Bartlett's discovery in 1962 that xenon can form chemical compounds, a large number of xenon compounds have been discovered and described. Almost all known xenon compounds contain the electronegative atoms fluorine or oxygen. The chemistry of xenon in each oxidation state is analogous to that of the neighboring element iodine in the immediately lower oxidation state.
Halides
Three fluorides are known: XeF2, XeF4, and XeF6. XeF is theorized to be unstable. These are the starting points for the synthesis of almost all xenon compounds.
The solid, crystalline difluoride XeF2 is formed when a mixture of fluorine and xenon gases is exposed to ultraviolet light. The ultraviolet component of ordinary daylight is sufficient. Long-term heating of XeF2 at high temperatures under an NiF2 catalyst yields XeF6. Pyrolysis of XeF6 in the presence of NaF yields high-purity XeF4.
The xenon fluorides behave as both fluoride acceptors and fluoride donors, forming salts that contain such cations as XeF+ and Xe2F3+, and anions such as XeF5−, XeF7−, and XeF82−. The green, paramagnetic Xe2+ is formed by the reduction of XeF2 by xenon gas.
XeF2 also forms coordination complexes with transition metal ions. More than 30 such complexes have been synthesized and characterized.
Whereas the xenon fluorides are well characterized, the other halides are not. Xenon dichloride, formed by the high-frequency irradiation of a mixture of xenon, fluorine, and silicon or carbon tetrachloride, is reported to be an endothermic, colorless, crystalline compound that decomposes into the elements at 80 °C. However, XeCl2 may be merely a van der Waals molecule of weakly bound Xe atoms and Cl2 molecules and not a real compound. Theoretical calculations indicate that the linear XeCl2 molecule is less stable than the van der Waals complex. Xenon tetrachloride and xenon dibromide are so unstable that they cannot be synthesized by chemical reactions; they were instead created by the radioactive decay of 129ICl4− and 129IBr2−, respectively.
Oxides and oxohalides
Three oxides of xenon are known: xenon trioxide () and xenon tetroxide (), both of which are dangerously explosive and powerful oxidizing agents, and xenon dioxide (XeO2), which was reported in 2011 with a coordination number of four. XeO2 forms when xenon tetrafluoride is poured over ice. Its crystal structure may allow it to replace silicon in silicate minerals. The XeOO+ cation has been identified by infrared spectroscopy in solid argon.
Xenon does not react with oxygen directly; the trioxide is formed by the hydrolysis of XeF6:
XeF6 + 3 H2O → XeO3 + 6 HF
XeO3 is weakly acidic, dissolving in alkali to form unstable xenate salts containing the HXeO4− anion. These unstable salts easily disproportionate into xenon gas and perxenate salts, containing the XeO64− anion.
Barium perxenate, when treated with concentrated sulfuric acid, yields gaseous xenon tetroxide:
Ba2XeO6 + 2 H2SO4 → 2 BaSO4 + 2 H2O + XeO4
To prevent decomposition, the xenon tetroxide thus formed is quickly cooled into a pale-yellow solid. It explodes above −35.9 °C into xenon and oxygen gas, but is otherwise stable.
A number of xenon oxyfluorides are known, including XeOF2, XeOF4, XeO2F2, and XeO3F2. XeOF2 is formed by reacting OF2 with xenon gas at low temperatures. It may also be obtained by partial hydrolysis of XeF4. It disproportionates at −20 °C into XeF2 and XeO2F2. XeOF4 is formed by the partial hydrolysis of XeF6...
XeF6 + H2O → XeOF4 + 2 HF
...or the reaction of XeF6 with sodium perxenate, Na4XeO6. The latter reaction also produces a small amount of XeO3F2.
XeO2F2 is also formed by partial hydrolysis of XeF6.
XeF6 + 2 H2O → XeO2F2 + 4 HF
XeOF4 reacts with CsF to form the XeOF5− anion, while XeOF3 reacts with the alkali metal fluorides KF, RbF and CsF to form the XeOF4− anion.
Other compounds
Xenon can be directly bonded to a less electronegative element than fluorine or oxygen, particularly carbon. Electron-withdrawing groups, such as groups with fluorine substitution, are necessary to stabilize these compounds. Numerous such compounds have been characterized, including:
Xe(C6F5)2, where C6F5 is the pentafluorophenyl group.
Other compounds containing xenon bonded to a less electronegative element include F–Xe–N(SO2F)2 and F–Xe–BF2. The latter is synthesized from dioxygenyl tetrafluoroborate, O2BF4, at −100 °C.
An unusual ion containing xenon is the tetraxenonogold(II) cation, AuXe42+, which contains Xe–Au bonds. This ion occurs in the compound AuXe4(Sb2F11)2, and is remarkable in having direct chemical bonds between two notoriously unreactive atoms, xenon and gold, with xenon acting as a transition metal ligand. A similar mercury complex (HgXe)(Sb3F17) (formulated as [HgXe2+][Sb2F11–][SbF6–]) is also known. Xenon reversibly complexes gaseous M(CO)5, where M=Cr, Mo, or W. p-block metals also bind noble gases: XeBeO has been observed spectroscopically and both XeBeS and FXeBO are predicted stable.
The compound Xe2Sb4F21 contains a Xe–Xe bond, the longest element–element bond known (308.71 pm = 3.0871 Å).
In 1995, M. Räsänen and co-workers, scientists at the University of Helsinki in Finland, announced the preparation of xenon dihydride (HXeH), and later xenon hydride-hydroxide (HXeOH), hydroxenoacetylene (HXeCCH), and other Xe-containing molecules. In 2008, Khriachtchev et al. reported the preparation of HXeOXeH by the photolysis of water within a cryogenic xenon matrix. Deuterated molecules, HXeOD and DXeOH, have also been produced.
Clathrates and excimers
In addition to compounds where xenon forms a chemical bond, xenon can form clathrates—substances where xenon atoms or pairs are trapped by the crystalline lattice of another compound. One example is xenon hydrate (Xe·H2O), where xenon atoms occupy vacancies in a lattice of water molecules. This clathrate has a melting point of 24 °C. The deuterated version of this hydrate has also been produced. Another example is xenon hydride (Xe(H2)8), in which xenon pairs (dimers) are trapped inside solid hydrogen. Such clathrate hydrates can occur naturally under conditions of high pressure, such as in Lake Vostok underneath the Antarctic ice sheet. Clathrate formation can be used to fractionally distill xenon, argon and krypton.
Xenon can also form endohedral fullerene compounds, where a xenon atom is trapped inside a fullerene molecule. The xenon atom trapped in the fullerene can be observed by 129Xe nuclear magnetic resonance (NMR) spectroscopy. Through the sensitive chemical shift of the xenon atom to its environment, chemical reactions on the fullerene molecule can be analyzed. These observations are not without caveat, however, because the xenon atom has an electronic influence on the reactivity of the fullerene.
When xenon atoms are in the ground energy state, they repel each other and will not form a bond. When xenon atoms becomes energized, however, they can form an excimer (excited dimer) until the electrons return to the ground state. This entity is formed because the xenon atom tends to complete the outermost electronic shell by adding an electron from a neighboring xenon atom. The typical lifetime of a xenon excimer is 1–5 nanoseconds, and the decay releases photons with wavelengths of about 150 and 173 nm. Xenon can also form excimers with other elements, such as the halogens bromine, chlorine, and fluorine.
References
Xenon compounds
Chemical compounds by element
Explosive chemicals | Xenon compounds | [
"Chemistry"
] | 1,787 | [
"Explosive chemicals"
] |
4,735,673 | https://en.wikipedia.org/wiki/Ramberg%E2%80%93B%C3%A4cklund%20reaction | The Ramberg–Bäcklund reaction is an organic reaction converting an α-halo sulfone into an alkene in presence of a base with extrusion of sulfur dioxide. The reaction is named after the two Swedish chemists Ludwig Ramberg and Birger Bäcklund. The carbanion formed by deprotonation gives an unstable episulfone that decomposes with elimination of sulfur dioxide. This elimination step is considered to be a concerted cheletropic extrusion.
The overall transformation is the conversion of the carbon–sulfur bonds to a carbon–carbon double bond. The original procedure involved halogenation of a sulfide, followed by oxidation to the sulfone. Recently, the preferred method has reversed the order of the steps. After the oxidation, which is normally done with a peroxy acid, halogenation is done under basic conditions by use of dibromodifluoromethane for the halogen transfer step. This method was used to synthesize 1,8-diphenyl-1,3,5,7-octatetraene.
Applications
The Ramberg–Bäcklund reaction has several applications. Due to the nature of the elimination, it can be applied both to small rings and to large rings containing a double bond.
This reaction type gives access to 1,2-dimethylenecyclohexane, and the epoxide variation gives access to allyl alcohols.
A recently developed application of the Ramberg–Bäcklund reaction is the synthesis of C-glycosides. The required thioethers can be prepared easily by exchange with a thiol. The application of the Ramberg–Bäcklund conditions then leads to an exocyclic vinyl ether that can be reduced to the C-nucleoside. In a variation, oxidation of a sulfamide generates an azo compound.
Substrates
The necessary α-halo sulfones are accessible through oxidation of the corresponding α-halo sulfides with peracids such as meta-chloroperbenzoic acid; oxidation of sulfides takes place selectively in the presence of alkenes and alcohols. α-Halo sulfides may in turn be synthesized through the treatment of sulfides with halogen electrophiles such as N-chlorosuccinimide or N-bromosuccinimide.
Mechanism
The sulfone group contains an acidic proton in one of the α-positions which is abstracted by a strong base (scheme 1). The negative charge placed on this position (formally a carbanion) is transferred to the halogen residing on the other α-position in a nucleophilic displacement temporarily forming a three-membered cyclic sulfone. This intermediate is unstable and releases sulfur dioxide to form the alkene. Mixtures of cis isomer and trans isomer are usually obtained.
The Favorskii rearrangement and the Eschenmoser sulfide contraction are conceptually related reactions.
References
Elimination reactions
Olefination reactions
Carbon-carbon bond forming reactions
Name reactions | Ramberg–Bäcklund reaction | [
"Chemistry"
] | 628 | [
"Olefination reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions",
"Name reactions",
"Rearrangement reactions"
] |
23,613,099 | https://en.wikipedia.org/wiki/Energy%20homeostasis | In biology, energy homeostasis, or the homeostatic control of energy balance, is a biological process that involves the coordinated homeostatic regulation of food intake (energy inflow) and energy expenditure (energy outflow). The human brain, particularly the hypothalamus, plays a central role in regulating energy homeostasis and generating the sense of hunger by integrating a number of biochemical signals that transmit information about energy balance. Fifty percent of the energy from glucose metabolism is immediately converted to heat.
Energy homeostasis is an important aspect of bioenergetics.
Definition
In the US, biological energy is expressed using the energy unit Calorie with a capital C (i.e. a kilocalorie), which equals the energy needed to increase the temperature of 1 kilogram of water by 1 °C (about 4.18 kJ).
Energy balance, through biosynthetic reactions, can be measured with the following equation:
Energy intake (from food and fluids) = Energy expended (through work and heat generated) + Change in stored energy (body fat and glycogen storage)
The first law of thermodynamics states that energy can be neither created nor destroyed; it can only be converted from one form to another. So, when a calorie of food energy is consumed, one of three particular effects occurs within the body: a portion of that calorie may be stored as body fat, triglycerides, or glycogen; transferred to cells and converted to chemical energy in the form of adenosine triphosphate (ATP – a coenzyme) or related compounds; or dissipated as heat.
Energy
Intake
Energy intake is measured by the amount of calories consumed from food and fluids. Energy intake is modulated by hunger, which is primarily regulated by the hypothalamus, and choice, which is determined by the sets of brain structures that are responsible for stimulus control (i.e., operant conditioning and classical conditioning) and cognitive control of eating behavior. Hunger is regulated in part by the action of certain peptide hormones and neuropeptides (e.g., insulin, leptin, ghrelin, and neuropeptide Y, among others) in the hypothalamus.
Expenditure
Energy expenditure is mainly a sum of internal heat produced and external work. The internal heat produced is, in turn, mainly a sum of basal metabolic rate (BMR) and the thermic effect of food. External work may be estimated by measuring the physical activity level (PAL).
Imbalance
The Set-Point Theory, first introduced in 1953, postulated that each body has a preprogrammed fixed weight, with regulatory mechanisms to compensate. This theory was quickly adopted and used to explain failures in developing effective and sustained weight loss procedures. A 2019 systematic review of multiple weight change interventions on humans, including dieting, exercise and overeating, found systematic "energetic errors", the non-compensated loss or gain of calories, for all these procedures. This shows that the body cannot precisely compensate for errors in energy/calorie intake, contrary to what the Set-Point Theory hypothesizes, and potentially explaining both weight loss and weight gain such as obesity. This review was conducted on short-term studies, therefore such a mechanism cannot be excluded in the long term, as evidence is currently lacking on this timeframe.
Positive balance
A positive balance is a result of energy intake being higher than what is consumed in external work and other bodily means of energy expenditure.
The main preventable causes are:
Overeating, resulting in increased energy intake
Sedentary lifestyle, resulting in decreased energy expenditure through external work
A positive balance results in energy being stored as fat and/or muscle, causing weight gain. In time, overweight and obesity may develop, with resultant complications.
Negative balance
A negative balance or caloric deficit is a result of energy intake being less than what is consumed in external work and other bodily means of energy expenditure.
The main cause is undereating due to a medical condition such as decreased appetite, anorexia nervosa, digestive disease, or due to some circumstance such as fasting or lack of access to food. Hyperthyroidism can also be a cause.
Requirement
Normal energy requirement, and therefore normal energy intake, depends mainly on age, sex and physical activity level (PAL). The Food and Agriculture Organization (FAO) of the United Nations has compiled a detailed report on human energy requirements. An older but commonly used and fairly accurate method is the Harris-Benedict equation.
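For illustration, the Harris–Benedict equation mentioned above can be written out as a small function. The coefficients below are the commonly cited original (1919) values, rounded; this is a sketch and should be checked against a primary source before serious use.

def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_years: float) -> float:
    # Basal metabolic rate in kcal/day, original Harris–Benedict coefficients.
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_years
    if sex == "female":
        return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_years
    raise ValueError("sex must be 'male' or 'female'")

# Example: a 70 kg, 175 cm, 30-year-old male.
print(round(harris_benedict_bmr("male", 70, 175, 30)))  # ≈ 1702 kcal/day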
There are currently ongoing studies examining whether calorie restriction below normal values has beneficial effects. Even though such studies show positive indications in nonhuman primates, it is still not certain whether calorie restriction has a positive effect on longevity in humans and other primates. Calorie restriction may be viewed as attaining energy balance at a lower intake and expenditure, and is, in this sense, not generally an energy imbalance, except for an initial imbalance where decreased expenditure has not yet matched the decreased intake.
Society and culture
There has been controversy over energy-balance messages that downplay energy intake being promoted by food industry groups.
See also
Dynamic energy budget
Earth's energy balance
References
External links
Diagram of regulation of fat stores and hunger
Daily energy requirement calculator
Nutrition
Metabolism
Biochemistry | Energy homeostasis | [
"Chemistry",
"Biology"
] | 1,115 | [
"Biochemistry",
"Metabolism",
"nan",
"Cellular processes"
] |
23,614,364 | https://en.wikipedia.org/wiki/Hypercompact%20stellar%20system | A hypercompact stellar system (HCSS) is a dense cluster of stars around a supermassive black hole that has been ejected from the center of its host galaxy. Stars that are close to the black hole at the time of the ejection will remain bound to the black hole after it leaves the galaxy, forming the HCSS.
The term "hypercompact" refers to the fact that HCSSs are small in size compared with ordinary star clusters of similar luminosity. This is because the gravitational force from the supermassive black hole keeps the stars moving in very tight orbits about the center of the cluster.
The luminous X-ray source SDSS 1113 near the galaxy Markarian 177 would be the first candidate for an HCSS. Finding an HCSS would confirm the theory of gravitational wave recoil, and would prove that supermassive black holes can exist outside galaxies.
Properties
Astronomers believe that supermassive black holes (SMBHs) can be ejected from the centers of galaxies by gravitational wave recoil. This happens when two SMBHs in a binary system coalesce, after losing energy in the form of gravitational waves. Because the gravitational waves are not emitted isotropically, some momentum is imparted to the coalescing black holes, and they feel a recoil, or "kick," at the moment of coalescence. Computer simulations suggest that the kick can be as large as 10,000 km/s,
which exceeds escape velocity from the centres of even the most massive galaxies.
Stars that are orbiting around the SMBH at the moment of the kick will be dragged along with the SMBH, provided their orbital velocity exceeds the kick velocity V_k. This is what determines the size of the HCSS: its radius is roughly the radius of the orbit that has the same orbital velocity around the SMBH as the kick velocity, or
r ≈ G M_BH / V_k²
where M_BH is the mass of the SMBH and G is the gravitational constant. The size works out to be roughly one-half parsec (pc) (two light-years) for a kick of 1000 km/s and a SMBH mass of 100 million solar masses. The largest HCSSs would have sizes of about 20 pc, roughly the same as a large globular cluster, and the smallest would be about a thousandth of a parsec across, smaller than any known star cluster.
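A quick numerical check of the size estimate above, using the relation r ≈ G M_BH / V_k² as reconstructed here (symbols as defined in the text); the input values are the ones quoted in this section.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # metres per parsec

def hcss_radius_pc(m_bh_solar: float, v_kick_km_s: float) -> float:
    """Approximate HCSS radius (pc) for a SMBH of given mass and kick velocity."""
    m = m_bh_solar * M_SUN
    v = v_kick_km_s * 1e3
    return G * m / v**2 / PARSEC

# 100 million solar masses and a 1000 km/s kick give roughly half a parsec
print(hcss_radius_pc(1e8, 1000))
```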
The number of stars that remain bound to the SMBH after the kick depends both on V_k and on how densely the stars were clustered about the SMBH before the kick. A number of arguments suggest that the total stellar mass would be roughly 0.1% of the mass of the SMBH or less. The biggest HCSSs would carry perhaps a few million stars, making them comparable in luminosity to a globular cluster or ultra-compact dwarf galaxy.
Aside from being very compact, the main difference between an HCSS and an ordinary star cluster is the much greater mass of the HCSS, due to the SMBH at its centre. The SMBH itself is dark and undetectable, but its gravity causes the stars to move at much higher velocities than in an ordinary star cluster. Normal star clusters have internal velocities of a few kilometers per second, while in an HCSS, essentially all the stars are moving faster than V_k, i.e. at hundreds or thousands of kilometers per second.
If the kick velocity is less than the escape velocity from the galaxy, the SMBH will fall back toward the galaxy nucleus, oscillating many times through the galaxy before finally coming to rest. In this case, the HCSS would only exist as a distinct object for a relatively short time, of order hundreds of millions of years, before disappearing back into the galaxy nucleus. During this time the HCSS would be difficult to detect since it would be superposed on or behind the galaxy.
Even if an HCSS escapes from its host galaxy, it will remain bound to the group or cluster that contains the galaxy, since the escape velocity from a cluster of galaxies is much larger than that from a single galaxy. When observed, the HCSS will be moving more slowly than V_k, since it will have climbed out through the gravitational potential well of the galaxy and/or cluster.
The stars in an HCSS would be similar to the types of stars that are observed in galactic nuclei. This would make the stars in an HCSS more metal-rich and younger than the stars in a typical globular cluster.
Search
Since the black hole at the center of the HCSS is essentially invisible, an HCSS would look very similar to a faint cluster of stars. Determining that an observed star cluster is a HCSS requires measuring the orbital velocities of the stars in the cluster via their Doppler shifts and verifying that they are moving much faster than expected for stars in an ordinary star cluster. This is a challenging observation to make because an HCSS would be relatively faint, requiring many hours of exposure time even on a 10m class telescope.
The most promising places to look for HCSSs are clusters of galaxies, for two reasons: first, most of the galaxies in a galaxy cluster are elliptical galaxies which are believed to have formed through mergers. A galaxy merger is a prerequisite for forming a binary SMBH, which is a prerequisite for a kick. Second, the escape velocity from a galaxy cluster is large enough that a HCSS would be retained even if it escaped from its host galaxy.
It has been estimated that the nearby Fornax and Virgo galaxy clusters may contain hundreds or thousands of HCSSs. These galaxy clusters have been surveyed for compact galaxies and star clusters. It is possible that some of the objects picked up in these surveys were HCSSs that were misidentified as ordinary star clusters. A few of the compact objects in the surveys are known to have rather high internal velocities, but none appear to be massive enough to qualify as HCSSs.
Another likely place to find a HCSS would be near the site of a recent galaxy merger.
From time to time, the black hole at the center of an HCSS will disrupt a star that passes too close, producing a very luminous flare. A few such flares have been observed at the centers of galaxies, presumably caused by a star coming too close to the SMBH in the galaxy nucleus. It has been estimated that a recoiling SMBH will disrupt about a dozen stars during the time it takes to escape from its galaxy. Since the lifetime of a flare is a few months, the chances of seeing such an event are small unless a large volume of space is surveyed. A star in a HCSS could also explode as a Type I (white dwarf) supernova.
Importance
Discovery of an HCSS would be important for several reasons.
It would constitute proof that supermassive black holes can exist outside galaxies.
It would verify the computer simulations that predict gravitational wave recoils of thousands of kilometers per second.
Existence of HCSSs would imply that some galaxies do not have supermassive black holes at their centers. This would have important consequences for theories that link the growth of galaxies to the growth of supermassive black holes, and for empirical correlations between SMBH mass and galaxy properties.
If many HCSSs could be discovered, it would be possible to reconstruct the distribution of kick velocities, which contains information about the merger history of galaxies, the masses and spins of binary black holes, etc.
See also
Numerical relativity
Stellar dynamics
Rogue black hole
References
External links
Mangled Stars Could Reveal Ejected Black Holes New Scientist article on tidal disruption flares from recoiling black holes.
Galaxies
Star clusters
Concepts in stellar astronomy
Supermassive black holes | Hypercompact stellar system | [
"Physics",
"Astronomy"
] | 1,563 | [
"Black holes",
"Concepts in astrophysics",
"Star clusters",
"Galaxies",
"Unsolved problems in physics",
"Supermassive black holes",
"Concepts in stellar astronomy",
"Astronomical objects"
] |
23,616,040 | https://en.wikipedia.org/wiki/Enantiopure%20drug | An enantiopure drug is a pharmaceutical that is available in one specific enantiomeric form. Most biological molecules (proteins, sugars, etc.) are present in only one of many chiral forms, so different enantiomers of a chiral drug molecule bind differently (or not at all) to target receptors. Chirality is observed when the geometric properties of an object are not superimposable on those of its mirror image. A chiral carbon gives rise to two forms of a molecule that are mirror images of each other; these two forms are called enantiomers. One enantiomer of a drug may have a desired beneficial effect while the other may cause serious and undesired side effects, or sometimes even beneficial but entirely different effects. The desired enantiomer is known as the eutomer while the undesired enantiomer is known as the distomer. When equal amounts of both enantiomers are found in a mixture, the mixture is known as a racemic mixture. If a mixture for a drug does not have a 1:1 ratio of its enantiomers, it is a candidate for an enantiopure drug. Advances in industrial chemical processes have made it economical for pharmaceutical manufacturers to take drugs that were originally marketed as a racemic mixture and market the individual enantiomers, either by specifically manufacturing the desired enantiomer or by resolving a racemic mixture. On a case-by-case basis, the U.S. Food and Drug Administration (FDA) has allowed single enantiomers of certain drugs to be marketed under a different name than the racemic mixture. Also case by case, the United States Patent Office has granted patents for single enantiomers of certain drugs. The regulatory review for marketing approval (safety and efficacy) and for patenting (proprietary rights) is independent, and differs country by country.
History
In 1848, Louis Pasteur became the first scientist to discover chirality and enantiomers while he was working with tartaric acid. During the experiments, he noticed that two crystal structures were produced that looked to be non-superimposable mirror images of each other; this observation of isomers that are non-superimposable mirror images became known as enantiomers. In 1857, Pasteur then discovered enantioselectivity when he noticed that the two enantiomer structures he had previously discovered were metabolized at very different rates. This suggested that one configuration was preferred over the other in vivo. As organic chemistry knowledge became more advanced, the discovery of enantioselectivity was used in the creation of enantiopure drugs.
Enantiopure drugs from chiral drugs
The formation of an enantiopure drug results from the separation of the enantiomers of a chiral drug. This separation was prompted when it was found that each enantiomer of a molecule can have different effects when used in drugs. This is because the body is highly chirally selective, reacting to each enantiomer differently and therefore producing different pharmaceutical effects. The use of a drug with a single enantiomer can make the drug more effective. Before a drug of a pure enantiomer can be formed, the two enantiomers must first be separated and tested. Three main techniques are used for this separation: capillary gas chromatography, high performance liquid chromatography, and capillary electrophoresis. Other techniques such as chiral crystallization, enzyme-based kinetic separation, and enantioselective synthesis are also used.
Importance
The bodies of living organisms are composed of many enantiopure chiral substances. For example, the amino acids that make up the proteins in the body all have the same configuration, the L-absolute configuration. Because of this specificity, vital processes such as constructing proteins rely on stereoselectivity to ensure that, out of all the potential enantiomers available, the body is utilizing the correct enantiopure compound.
Selectivity is a very important part of organic synthesis. In scientific papers regarding synthesis, selectivity is often listed in data tables alongside percent yield and other reaction conditions. While selectivity is deemed important in scientific literature, it has been challenging to effectively implement selectivity in drug development and production. A major issue with selectivity in pharmaceuticals is that a large percentage of drug syntheses by nature are not selective reactions, racemic mixtures are formed as the products. Separating racemic mixtures into their respective enantiomers takes extra time, money, and energy. One way to separate enantiomers is to chemically convert them into species that can be separated: diastereomers. Diastereomers, unlike enantiomers, have entirely different physical properties—boiling points, melting points, NMR shifts, solubilities—and they can be separated by conventional means such as chromatography or recrystallization. This is a whole extra step in the synthesis process and not desirable from a manufacturing standpoint. As a result, a number of pharmaceuticals are synthesized and marketed as a racemic mixture of enantiomers in cases where the less-effective enantiomer is benign. However, by identifying and specifically purifying the enantiomer which effectively binds to its respective binding site in the body, less of the drug would be needed to achieve the desired effect. With the improvement of chiral technology, a rich repertoire of enantioselective chromatographic methods have become available for the separation of drug enantiomers on the analytical, preparative, and industrial scales.
Criteria
According to the FDA, the stereoisomeric composition of a chiral drug should be known, and its effects should be well-characterized from pharmacologic, toxicologic, and clinical standpoints. In order to profile the different stereoisomers of enantiopure drugs, manufacturers are urged to develop quantitative assays for individual enantiomers in in vivo samples early in the development stage.
Ideally, the main pharmacologic activities of the isomers should be compared in in vitro systems in animals. During instances when toxic findings are present beyond the natural extensions of the pharmacologic effects of the drug, toxicologic evaluation of the individual isomers in question must be completed.
Patenting
When drugs are covered under patent protection, only the pharmaceutical company that holds the patent is allowed to manufacture, market, and eventually profit from them. The lifetime of the patent varies between countries and also between drugs; in the United States, most drug patents last about twenty years. Once the patent has expired, the drug can be manufactured and sold by other companies - at which point, it is referred to as a generic drug. Its availability on the market as a generic drug removes the monopoly of the patent holder, thereby encouraging competition and causing a significant drop in drug prices, which ensures that life-saving and important drugs reach the general population at fair prices. However, the company holding the initial patent may get a new patent by forming a new version of the drug that is significantly changed compared to the original compound. Patentability of different isomers has been controversial over the past ten years and there have been a number of related legal issues. In making their determinations, courts have looked at factors including: (i) Whether the racemate was known in the prior art. (ii) The difficulty in resolving the enantiomers. (iii) The stereoselectivity of the relevant receptor. (iv) Other secondary considerations of non-obviousness such as commercial success, unexpected results, and satisfaction of long-felt needs in the art. The decisions made regarding these issues have varied and there is no clear answer to the legality of patenting stereoisomers. These issues have been resolved on a case-by-case basis. With the number of current pharmaceuticals currently being marketed as racemic mixtures, it is likely that patentability will continue to be debated in the near future.
There are examples of common drugs, like ibuprofen, where the use of chiral switching has caused controversy. Ibuprofen is a racemic mixture in which the S-enantiomer is known to play the major role in reducing inflammation, as it inhibits COX-2 (cyclooxygenase-2) more strongly than the R-enantiomer; the fact that the S-enantiomer is stronger is what led to the chiral switching. But when racemic ibuprofen enters the body, a little over half of the R-enantiomers undergo chiral inversion and transform into the favored S-enantiomer. This observation has led to the conclusion that the racemate and the S-enantiomer are potentially biologically equivalent. Because of this, and because of more recent evidence suggesting that the R-enantiomer may also contribute to COX-2 inhibition, but at a slower rate, there is still debate on whether the chiral switching seen in ibuprofen is really advantageous or whether it simply gives patent protection to the manufacturers.
Examples
The following table lists pharmaceuticals that have been available in both racemic and single-enantiomer form. Such single-enantiomer drugs switched from the respective racemic drugs are referred to as chiral switches.
The following are cases where the individual enantiomers have markedly different effects:
Thalidomide: Thalidomide is racemic. One enantiomer is effective against morning sickness, whereas the other is teratogenic. However, the enantiomers are converted into each other in vivo. As a result, dosing with a single-enantiomer form of the drug will still lead to both the enantiomers eventually being present in the patient's serum and thus would not prevent adverse effects—at best, it might reduce them if the rate of in vivo conversion can be slowed.
Ethambutol: Whereas the (S,S)-(+)-enantiomer is used to treat tuberculosis, the (R,R)-(–)-ethambutol may cause blindness.
Steroid receptor sites also show stereoisomer specificity.
Penicillin's activity is stereodependent. The antibiotic must mimic the D-alanine chains that occur in the cell walls of bacteria in order to react with and subsequently inhibit bacterial transpeptidase enzyme.
Propranolol: L-propranolol is a powerful adrenoceptor antagonist, whereas D-propranolol is not. However, both have local anesthetic effect.
Methorphan: The L-isomer of methorphan, levomethorphan, is a potent opioid analgesic, while the D-isomer, dextromethorphan, is a dissociative cough suppressant.
Carvedilol: (S)-(–)-isomer interacts with adrenoceptors with 100 times greater potency as β adrenoreceptor blocker than (R)-(+)-isomer. However, both the isomers are approximately equipotent as α adrenoreceptor blockers.
Amphetamine and methamphetamine: The D-isomers of these drugs are strong central nervous system (CNS) stimulants, while the L-isomers lack appreciable CNS stimulant effects, but instead stimulate the peripheral nervous system. For this reason, the L-isomer of methamphetamine is available as an over-the-counter nasal inhaler in some countries, while the D-isomer is banned from medical use in all but a few countries in the world, and highly regulated in those countries which do allow it to be used medically.
Ketamine: This drug is available as a mixture of both (S)-(+)-ketamine, also known as esketamine, and (R)-(–)-ketamine, also known as arketamine. Pure esketamine is also available. The two have different dissociative and hallucinogenic properties, with esketamine being more potent in isolation as a dissociative. The two enantiomers have inverse effects on the rate of glucose metabolism in the frontal cortex.
3,4-Dihydroxyphenylalanine (DOPA): DOPA is a racemic mixture in which one enantiomer, L-DOPA, is used as a treatment for Parkinson's disease, while the other enantiomer, D-DOPA, is considered to be toxic. D-DOPA can cause headaches, abdominal pains, nausea, vomiting, and dizziness.
See also
Eudysmic ratio
Chiral switch
Chiral drugs
References
External links
Chirality
Stereochemistry | Enantiopure drug | [
"Physics",
"Chemistry",
"Biology"
] | 2,694 | [
"Pharmacology",
"Origin of life",
"Biochemistry",
"Stereochemistry",
"Enantiopure drugs",
"Chirality",
"Space",
"nan",
"Asymmetry",
"Biological hypotheses",
"Spacetime",
"Symmetry"
] |
19,583,778 | https://en.wikipedia.org/wiki/Alternative%20stress%20measures | In continuum mechanics, the most commonly used measure of stress is the Cauchy stress tensor, often called simply the stress tensor or "true stress". However, several alternative measures of stress can be defined:
The Kirchhoff stress (τ).
The nominal stress (N).
The Piola–Kirchhoff stress tensors
The first Piola–Kirchhoff stress (P). This stress tensor is the transpose of the nominal stress (P = N^T).
The second Piola–Kirchhoff stress or PK2 stress (S).
The Biot stress (T)
Definitions
Consider the situation shown in the following figure. The following definitions use the notations shown in the figure.
In the reference configuration Ω_0, the outward normal to a surface element dΓ_0 is n_0, and the traction acting on that surface (assuming it deforms like a generic vector belonging to the deformation) is t_0, leading to a force vector df_0 = t_0 dΓ_0. In the deformed configuration Ω, the surface element changes to dΓ with outward normal n and traction vector t, leading to a force df = t dΓ. Note that this surface can either be a hypothetical cut inside the body or an actual surface. The quantity F is the deformation gradient tensor, and J = det F is its determinant.
Cauchy stress
The Cauchy stress (or true stress) is a measure of the force acting on an element of area in the deformed configuration. This tensor is symmetric and is defined via
df = t dΓ = σ^T · n dΓ
or
t = σ^T · n
where t is the traction and n is the normal to the surface on which the traction acts.
Kirchhoff stress
The quantity
τ = J σ
is called the Kirchhoff stress tensor, with J the determinant of F. It is used widely in numerical algorithms in metal plasticity (where there is no change in volume during plastic deformation). It can be called the weighted Cauchy stress tensor as well.
Piola–Kirchhoff stress
Nominal stress/First Piola–Kirchhoff stress
The nominal stress N = P^T is the transpose of the first Piola–Kirchhoff stress (PK1 stress, also called engineering stress) P and is defined via
df = t dΓ = N^T · n_0 dΓ_0 = P · n_0 dΓ_0
or
t (dΓ/dΓ_0) = N^T · n_0 = P · n_0
This stress is unsymmetric and is a two-point tensor like the deformation gradient.
The asymmetry derives from the fact that, as a tensor, it has one index attached to the reference configuration and one to the deformed configuration.
Second Piola–Kirchhoff stress
If we pull the force df back to the reference configuration, we obtain the traction acting on that surface before the deformation, assuming it behaves like a generic vector belonging to the deformation. In particular we have
t_0 = F^{-1} · t
or,
df_0 = t_0 dΓ_0 = F^{-1} · df
The PK2 stress (S) is symmetric and is defined via the relation
df_0 = t_0 dΓ_0 = S^T · n_0 dΓ_0
Therefore,
t_0 = S^T · n_0
Biot stress
The Biot stress is useful because it is energy conjugate to the right stretch tensor U. The Biot stress is defined as the symmetric part of the tensor P^T · R, where R is the rotation tensor obtained from a polar decomposition of the deformation gradient, F = R · U. Therefore, the Biot stress tensor is defined as
T = ½ (R^T · P + P^T · R)
The Biot stress is also called the Jaumann stress.
The quantity does not have any physical interpretation. However, the unsymmetrized Biot stress has the interpretation
Relations
Relations between Cauchy stress and nominal stress
From Nanson's formula relating areas in the reference and deformed configurations:
n dΓ = J F^{-T} · n_0 dΓ_0
Now,
df = σ^T · n dΓ = N^T · n_0 dΓ_0
Hence,
σ^T · (J F^{-T} · n_0) dΓ_0 = N^T · n_0 dΓ_0
or,
N^T = J σ^T · F^{-T}, i.e. N = J F^{-1} · σ
or,
P = N^T = J σ^T · F^{-T} = J σ · F^{-T} (using the symmetry of σ)
In index notation,
N_{Ij} = J F^{-1}_{Ik} σ_{kj} and P_{iJ} = J σ_{ik} F^{-1}_{Jk}
Therefore,
J σ = F · N = P · F^T
Note that N and P are (generally) not symmetric because F is (generally) not symmetric.
Relations between nominal stress and second P–K stress
Recall that
df = N^T · n_0 dΓ_0
and
df_0 = F^{-1} · df = S^T · n_0 dΓ_0
Therefore,
F^{-1} · N^T · n_0 = S^T · n_0, i.e. N^T = F · S^T
or (using the symmetry of S),
N = S · F^T and P = F · S
In index notation,
N_{Ij} = S_{IK} F_{jK} and P_{iJ} = F_{iK} S_{KJ}
Alternatively, we can write
S = F^{-1} · P and S = N · F^{-T}
Relations between Cauchy stress and second P–K stress
Recall that
J σ = P · F^T
In terms of the 2nd PK stress, we have
P = F · S
Therefore,
J σ = F · S · F^T, i.e. σ = J^{-1} F · S · F^T
In index notation,
σ_{ij} = J^{-1} F_{iK} S_{KL} F_{jL}
Since the Cauchy stress (and hence the Kirchhoff stress) is symmetric, the 2nd PK stress is also symmetric.
Alternatively, we can write
S = J F^{-1} · σ · F^{-T}
or,
τ = J σ = F · S · F^T
Clearly, from the definition of the push-forward and pull-back operations, we have
S = F^{-1} · τ · F^{-T}
and
τ = F · S · F^T
Therefore, S is the pull-back of τ by F, and τ is the push-forward of S.
Summary of conversion formula
Key:
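The conversion table itself is not reproduced here. As a minimal illustration, the following sketch collects the relations derived above, computing the Kirchhoff, nominal, first and second Piola–Kirchhoff stresses from a given Cauchy stress and deformation gradient. The Biot stress is omitted because it requires a polar decomposition; the numerical inputs are illustrative only.

```python
import numpy as np

def stress_measures(sigma: np.ndarray, F: np.ndarray) -> dict:
    """Alternative stress measures from the Cauchy stress sigma and deformation gradient F."""
    J = np.linalg.det(F)
    Finv = np.linalg.inv(F)
    tau = J * sigma                 # Kirchhoff stress, tau = J sigma
    P = J * sigma @ Finv.T          # first Piola-Kirchhoff stress, P = J sigma F^{-T}
    N = P.T                         # nominal stress, transpose of PK1
    S = Finv @ P                    # second Piola-Kirchhoff stress, S = J F^{-1} sigma F^{-T}
    return {"tau": tau, "P": P, "N": N, "S": S}

# Illustrative example: simple shear with a uniaxial Cauchy stress state
F = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
sigma = np.diag([100.0, 0.0, 0.0])
m = stress_measures(sigma, F)
print(np.allclose(m["S"], m["S"].T))   # the PK2 stress comes out symmetric
```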
See also
Stress (physics)
Finite strain theory
Continuum mechanics
Hyperelastic material
Cauchy elastic material
Critical plane analysis
References
Solid mechanics
Continuum mechanics
Gustav Kirchhoff
Tensor physical quantities | Alternative stress measures | [
"Physics",
"Mathematics",
"Engineering"
] | 845 | [
"Solid mechanics",
"Tensors",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Tensor physical quantities",
"Classical mechanics",
"Mechanics"
] |
19,584,690 | https://en.wikipedia.org/wiki/Wood%20ash | Wood ash is the powdery residue remaining after the combustion of wood, such as burning wood in a fireplace, bonfire, or an industrial power plant. It is largely composed of calcium compounds, along with other non-combustible trace elements present in the wood, and has been used for many purposes throughout history.
Composition
Variability in assessment
A comprehensive set of analyses of wood ash composition from many tree species has been carried out by Emil Wolff, among others. Several factors have a major impact on the composition:
Fine ash: Some studies include the solids escaping via the flue during combustion, while others do not.
Temperature of combustion. Ash yield decreases with increasing combustion temperature, which produces two direct effects:
Dissociation: Conversion of carbonates, sulfides, etc., to oxides results in no carbon, sulfur, carbonates, or sulfides. Some metallic oxides (e.g. mercuric oxide) even dissociate to their elemental state and/or vaporize completely at wood fire temperatures.
Volatilization: In studies in which the escaped ash is not measured, some combustion products may not be present at all. Arsenic, for example, is not volatile, but arsenic trioxide is.
Experimental process: If the ashes are exposed to the environment between combustion and the analysis, oxides may convert back to carbonates by reacting with carbon dioxide in the air. Hygroscopic substances meanwhile may absorb atmospheric moisture.
Type, age, and growing environment of the wood stock affect the composition of the wood (e.g. hardwood versus softwood), and thus the ash. Hardwoods usually produce more ash than softwoods, with bark and leaves producing more ash than the internal parts of the trunk.
Measurements
The burning of wood results in about 6–10% ash on average. For certain woods, residue ash of between 0.43 and 1.82 percent of the original mass of burned wood (on a dry basis, meaning that H2O is driven off) is produced if the wood is pyrolyzed until all volatiles disappear and then burned for 8 hours. The conditions of combustion also affect the composition and amount of the residue ash; a higher temperature will reduce the ash yield.
Elemental analysis
Typically, wood ash contains the following major elements:
Carbon (C) — 5–30%.
Calcium (Ca) — 7–33%
Potassium (K) — 3–10%
Magnesium (Mg) — 1–2%
Manganese (Mn) — 0.3–1.3%
Phosphorus (P) — 0.3–1.4%
Sodium (Na) — 0.2–0.5%.
Chemical compounds
As the wood burns, it produces different compounds depending on the temperature reached. Some studies cite calcium carbonate (CaCO3) as the major constituent; others find no carbonate at all but calcium oxide (CaO) instead. The latter is produced at higher temperatures (see calcination). The equilibrium reaction CaCO3 ⇌ CO2 + CaO is shifted leftward at lower temperatures and high CO2 partial pressure (such as in a wood fire) but shifted rightward at higher temperatures or when the CO2 partial pressure is reduced.
Calcium carbonate (CaCO3) is often the major component of wood ash, representing 25% or even 45% of total ash weight. CaCO3 and K2CO3 have been identified together in at least one case. Less than 10% is potash, and less than 1% is phosphate.
Trace elements
There are trace elements of iron (Fe), manganese (Mn), zinc (Zn), copper (Cu) and some heavy metals. Their concentrations in ash vary with combustion temperature. Decomposition of carbonates and the volatilization of potassium (K), sulfur (S), and trace amounts of copper (Cu) and boron (B) may result from increased temperature. One study found that at raised temperatures K, S, B, sodium (Na) and copper (Cu) decreased, whereas Mg, P, Mn, Al, Fe, and Si did not change relative to calcium (Ca). All of these trace elements are, however, present in the form of oxides at higher combustion temperatures. Some elements in wood ash (all fractions given as mass of element per mass of ash; the unit conversion is illustrated in the sketch after this list) include:
Fe 1.6-55 ‰
Si 6-170 ‰
Al 1.2-45 ‰
Mn 1-20 ‰
As 0.6-50 ppm
Cd 0.18-60 ppm
Pb 2-500 ppm
Cr 12-280 ppm
Ni 10-140 ppm
V 1.8-120 ppm
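The fractions above mix per-mille (‰) and parts-per-million (ppm) by mass. A small sketch converting both to grams of element per kilogram of ash; the ranges used are the ones listed above, treated purely as illustrative inputs.

```python
def per_mille_to_g_per_kg(x: float) -> float:
    # 1 permille of 1 kg of ash is 1 g of the element
    return x

def ppm_to_g_per_kg(x: float) -> float:
    # 1 ppm of 1 kg of ash is 1 mg, i.e. 0.001 g
    return x / 1000.0

# Iron: 1.6-55 permille; lead: 2-500 ppm (values from the list above)
print(per_mille_to_g_per_kg(1.6), per_mille_to_g_per_kg(55))   # g Fe per kg ash
print(ppm_to_g_per_kg(2), ppm_to_g_per_kg(500))                # g Pb per kg ash
```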
Fuels
One study determined that emissions from slowly burning wood typically include 16 alkenes, 5 alkadienes, 5 alkynes and several alkanes and arenes in varying proportions. Ethene, acetylene and benzene were a major part of the emissions during efficient combustion. The proportion of C3–C7 alkenes was found to be higher for smouldering combustion. Benzene and 1,3-butadiene constituted ~10–20% and ~1–2% by mass of total non-methane hydrocarbons.
Uses
Fertilizers
Wood ash can be used as a fertilizer to enrich agricultural soil. In this role, wood ash serves as a source of potassium and calcium carbonate, the latter acting as a liming agent to neutralize acidic soils.
Wood ash can also be used as an amendment for organic hydroponic solutions, generally replacing inorganic compounds containing calcium, potassium, magnesium and phosphorus.
Composts
Wood ash is commonly disposed of in landfills, but with rising disposal costs, ecologically friendly alternatives, such as serving as compost for agricultural and forestry applications, are becoming more popular. Because wood ash has a high char content, it can be used as an odor control agent, especially in composting operations.
Pottery
Wood ash has a very long history of being used in ceramic glazes, particularly in the Chinese, Japanese and Korean traditions, though now used by many craft potters. It acts as a flux, reducing the melting point of the glaze.
Soaps
For thousands of years, plant or wood ash was leached with water, to yield an impure solution of potassium carbonate. This product could be mixed with oils or fats to produce a soft "soap" or soap like-product, as was done in ancient Sumeria, Europe, and Egypt. However only certain types of plants could produce a soap that actually lathered. Later, medieval European soapmakers treated the wood ash solution with slaked lime, which contains calcium hydroxide, to get a hydroxide-rich solution for soapmaking. However it was not until the invention of the Leblanc process that high quality sodium hydroxide could be mass produced, rendering obsolete the earlier forms of soap using crude wood or plant ash. This was a revolutionary discovery that facilitated the modern soapmaking industry.
Bio-leaching
The ectomycorrhizal fungi Suillus granulatus and Paxillus involutus can release elements from wood ash.
Food preparation
Wood ash is sometimes used in the process of nixtamalization, where certain types of corn (typically maize or sorghum) are soaked and cooked in an alkali solution to improve nutritional content and decrease risk of mycotoxins. The alkali solution has historically been made from wood ash lye.
Nixtamalization was originally practiced in Mesoamerica, from which it spread northwards through various indigenous tribes of North America. In eastern North America, nixtamalized corn was traditionally eaten in porridges and stews, a dish that Europeans would call hominy. Wood ash is also used as a preservative for some kinds of cheese, such as Morbier and Humboldt Fog.
An early leavened bread was baked as early as 6000 BC by the Sumerians by placing the bread on heated stones and covering it with hot ash. The minerals in the wood ash could have supplemented the nutritional content of the dough as it was baked. In present day, the amount of wood ash content in bread flour, as measured by the Chopin alveograph, is strictly regulated by France.
See also
Ash burner (traditional occupation)
Bottom ash
Charcoal
Fly ash
Joss paper
Open burning of waste
Wood glue
Wood preservative
Wood veneer
Notes
References
Waste
Incineration
Organic fertilizers
Types of ash | Wood ash | [
"Physics",
"Chemistry",
"Engineering"
] | 1,751 | [
"Types of ash",
"Combustion engineering",
"Incineration",
"Materials",
"Combustion",
"Waste",
"Matter"
] |
8,150,458 | https://en.wikipedia.org/wiki/Agostic%20interaction | In organometallic chemistry, agostic interaction refers to the intramolecular interaction of a coordinatively-unsaturated transition metal with an appropriately situated C−H bond on one of its ligands. The interaction is the result of the two electrons of the C−H bond interacting with an empty d-orbital of the transition metal, resulting in a three-center two-electron bond. It is a special case of a C–H sigma complex. Historically, agostic complexes were the first examples of C–H sigma complexes to be observed spectroscopically and crystallographically, because intramolecular interactions are particularly favorable and more often lead to robust complexes. Many catalytic transformations involving oxidative addition and reductive elimination are proposed to proceed via intermediates featuring agostic interactions. Agostic interactions are observed throughout organometallic chemistry in alkyl, alkylidene, and polyenyl ligands.
History
The term agostic, derived from the Ancient Greek word for "to hold close to oneself", was coined by Maurice Brookhart and Malcolm Green, on the suggestion of the classicist Jasper Griffin, to describe this and many other interactions between a transition metal and a C−H bond. Often such agostic interactions involve alkyl or aryl groups that are held close to the metal center through an additional σ-bond.
Short interactions between hydrocarbon substituents and coordinatively unsaturated metal complexes have been noted since the 1960s. For example, in tris(triphenylphosphine) ruthenium dichloride, a short interaction is observed between the ruthenium(II) center and a hydrogen atom on the ortho position of one of the nine phenyl rings. Complexes of borohydride are described as using the three-center two-electron bonding model.
The nature of the interaction was foreshadowed in main group chemistry in the structural chemistry of trimethylaluminium.
Characteristics of agostic bonds
Agostic interactions are best demonstrated by crystallography. Neutron diffraction data have shown that C−H and M┄H bond distances are 5–20% longer than expected for isolated metal hydrides and hydrocarbons. The distance between the metal and the hydrogen is typically 1.8–2.3 Å, and the M┄H−C angle is in the range of 90°–140°. Agostic interactions are also indicated by the presence of a 1H NMR signal that is shifted upfield from that of a normal aryl or alkane, often to the region normally assigned to hydride ligands. The coupling constant 1JCH is typically lowered to 70–100 Hz versus the 125 Hz expected for a normal sp3 carbon–hydrogen bond.
Strength of bond
On the basis of experimental and computational studies, the stabilization arising from an agostic interaction is estimated to be 10–15 kcal/mol. Recent calculations using compliance constants point to a weaker stabilisation (<10 kcal/mol). Thus, agostic interactions are stronger than most hydrogen bonds. Agostic bonds sometimes play a role in catalysis by increasing 'rigidity' in transition states. For instance, in Ziegler–Natta catalysis the highly electrophilic metal center has agostic interactions with the growing polymer chain. This increased rigidity influences the stereoselectivity of the polymerization process.
Related bonding interactions
The term agostic is reserved to describe two-electron, three-center bonding interactions between carbon, hydrogen, and a metal. Two-electron three-center bonding is clearly implicated in the complexation of H2, e.g., in W(CO)3(PCy3)2H2, which is closely related to the agostic complex shown in the figure. Silane binds to metal centers often via agostic-like, three-centered Si┄H−M interactions. Because these interactions do not include carbon, however, they are not classified as agostic.
Anagostic bonds
Certain M┄H−C interactions are not classified as agostic but are described by the term anagostic. Anagostic interactions are more electrostatic in character. In terms of structures of anagostic interactions, the M┄H distances and M┄H−C angles fall into the ranges 2.3–2.9 Å and 110°–170°, respectively.
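A toy classifier based on the geometric ranges quoted in this article (M┄H distance and M┄H−C angle); the cut-offs are taken directly from the text and should only be read as rough literature ranges, not as a rigorous assignment criterion.

```python
def classify_mhc_interaction(d_mh_angstrom: float, angle_deg: float) -> str:
    """Rough classification of an M...H-C contact using the ranges given in the text."""
    if 1.8 <= d_mh_angstrom <= 2.3 and 90 <= angle_deg <= 140:
        return "agostic (3-center 2-electron)"
    if 2.3 < d_mh_angstrom <= 2.9 and 110 <= angle_deg <= 170:
        return "anagostic (largely electrostatic)"
    return "outside the tabulated ranges"

print(classify_mhc_interaction(2.0, 120))   # agostic
print(classify_mhc_interaction(2.6, 150))   # anagostic
```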
Function
Agostic interactions serve a key function in alkene polymerization and stereochemistry, as well as migratory insertion.
References
External links
Agostic interactions
Organometallic chemistry
Chemical bonding | Agostic interaction | [
"Physics",
"Chemistry",
"Materials_science"
] | 928 | [
"Chemical bonding",
"Organometallic chemistry",
"Condensed matter physics",
"nan"
] |
8,151,109 | https://en.wikipedia.org/wiki/Polywell | The polywell is a proposed design for a fusion reactor using an electric and magnetic field to heat ions to fusion conditions.
The design is related to the fusor, the high beta fusion reactor, the magnetic mirror, and the biconic cusp. A set of electromagnets generates a magnetic field that traps electrons. This creates a negative voltage, which attracts positive ions. As the ions accelerate towards the negative center, their kinetic energy rises. Ions that collide at high enough energies can fuse.
Mechanism
Fusor Heating
A Farnsworth-Hirsch fusor consists of two wire cages, one inside the other, often referred to as grids, that are placed inside a vacuum chamber. The outer cage has a positive voltage versus the inner cage. A fuel, typically deuterium gas, is injected into this chamber. It is heated past its ionization temperature, making positive ions. The positive ions move towards the negative inner cage. Those that miss the wires of the inner cage fly through the center of the device at high speeds and can fly out the other side of the inner cage. As the ions move outward, a Coulomb force impels them back towards the center. Over time, a core of ionized gas can form inside the inner cage. Ions pass back and forth through the core until they strike either the grid or another nucleus. Most collisions between nuclei do not result in fusion. Grid strikes raise the temperature of the grid as well as erode it. These strikes conduct mass and energy away from the plasma, as well as spall metal ions into the gas, which cools it.
In fusors, the potential well is made with a wire cage. Because most of the ions and electrons fall onto the cage, fusors suffer from high conduction losses. Hence, no fusor has come close to energy break-even.
Diamagnetic Plasma Trapping
The polywell attempts to hold a diamagnetic plasma, a plasma that rejects the external magnetic fields created by the electromagnets.
The plasma in most fusion reactors (such as magnetic mirrors, tokamaks and stellarators) is considered magnetized. A magnetized plasma occurs when the external field is so strong that it completely penetrates and controls the plasma, such that the material's behavior is dominated by the external field.
Some fusion plasmas are self-magnetized (such as field-reversed configurations or dynomaks); these can create their own weak magnetic fields through the formation of loops of plasma current and other structures.
Both the polywell and the high beta fusion reactor presuppose that the plasma's self-generated field is so strong that it rejects the outside field. Bussard later called this type of confinement the Wiffle-Ball. The analogy describes electron trapping inside the field: marbles can be trapped inside a Wiffle ball, a hollow, perforated sphere, where they roll around and only sometimes escape through the holes. The magnetic topology of a high-beta polywell acts similarly with electrons. In June 2014 EMC2 published a preprint presenting (1) x-ray and (2) flux-loop measurements indicating that the diamagnetic effect impacts the external field.
According to Bussard, typical cusp leakage rate is such that an electron makes 5 to 8 passes before escaping through a cusp in a standard mirror confinement biconic cusp; 10 to 60 passes in a polywell under mirror confinement (low beta) that he called cusp confinement; and several thousand passes in Wiffle-Ball confinement (high beta).
In February 2013, Lockheed Martin Skunk Works announced a new compact fusion machine, the high beta fusion reactor, that may be related to the biconic cusp and the polywell, and working at β = 1.
Other Trapping Mechanisms
Magnetic mirror
Magnetic mirroring dominates in low-beta designs. Both ions and electrons are reflected from regions of high field strength back toward regions of lower field strength; this is known as the magnetic mirror effect. The polywell's rings are arranged so that the densest fields are on the outside, trapping electrons in the center. This can trap particles at low beta values.
Cusp confinement
In high beta conditions, the machine may operate with cusp confinement. This is an improvement over the simpler magnetic mirror. The MaGrid has six point cusps, each located in the middle of a ring; and two highly modified line cusps, linking the eight corner cusps located at cube vertices. The key is that these two line cusps are much narrower than the single line cusp in magnetic mirror machines, so the net losses are less. The two line cusps losses are similar to or lower than the six face-centered point cusps. In 1955, Harold Grad theorized that a high-beta plasma pressure combined with a cusped magnetic field would improve plasma confinement. A diamagnetic plasma rejects the external fields and plugs the cusps. This system would be a much better trap.
Cusped confinement was explored theoretically and experimentally. However, most cusped experiments failed and disappeared from national programs by 1980.
Beta in Magnetic Traps
Magnetic fields exert a pressure on the plasma. Beta is the ratio of plasma pressure to magnetic pressure. It can be defined separately for electrons and ions. The polywell concerns itself only with the electron beta, whereas the ion beta is of greater interest within tokamaks and other neutral-plasma machines. The two vary by a very large ratio because of the enormous difference in mass between an electron and any ion. Typically, in other devices the electron beta is neglected, as the ion beta determines the more important plasma parameters. This is a significant point of confusion for scientists more familiar with more 'conventional' fusion plasma physics.
Note that for the electron beta, only the electron number density and temperature are used, as both of these, but especially the latter, can vary significantly from the ion parameters at the same location.
Most experiments on polywells involve low-beta plasma regimes (where β < 1), where the plasma pressure is weak compared to the magnetic pressure. Several models describe magnetic trapping in polywells. Tests indicated that plasma confinement is enhanced in a magnetic cusp configuration when β (plasma pressure/magnetic field pressure) is of order unity. This enhancement is required for a fusion power reactor based on cusp confinement to be feasible.
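A sketch of the electron beta referred to in this section, computed as the ratio of electron pressure n_e k_B T_e to magnetic pressure B²/2μ₀; the numerical inputs are hypothetical placeholder values, not measured polywell parameters.

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
EV_TO_K = 11604.5           # kelvin per electronvolt

def electron_beta(n_e_m3: float, T_e_eV: float, B_tesla: float) -> float:
    """Electron beta = electron pressure / magnetic pressure."""
    p_electron = n_e_m3 * K_B * T_e_eV * EV_TO_K
    p_magnetic = B_tesla**2 / (2 * MU_0)
    return p_electron / p_magnetic

# Hypothetical numbers: n_e = 1e19 m^-3, T_e = 100 eV, B = 0.1 T
print(electron_beta(1e19, 100, 0.1))
```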
Design
The main problem with the fusor is that the inner cage conducts away too much energy and mass. The solution, suggested by Robert Bussard and Oleg Lavrentiev, was to replace the negative cage with a "virtual cathode" made of a cloud of electrons.
A polywell consists of several parts, which are placed inside a vacuum chamber:
A set of positively charged electromagnet coils arranged in a polyhedron. The most common arrangement is a six sided cube. The six magnetic poles are pointing in the same direction toward the center. The magnetic field vanishes at the center by symmetry, creating a null point.
Electron guns facing the ring axes. These shoot electrons into the center of the ring structure. Once inside, the electrons are confined by the magnetic fields. This has been measured in polywells using Langmuir probes. Electrons that have enough energy to escape through the magnetic cusps can be re-attracted to the positive rings; they slow down and return to the inside of the rings along the cusps. This reduces conduction losses and improves the overall performance of the machine. The electron cloud acts as a negative voltage drop that attracts positive ions; it is a virtual cathode.
Gas puffers at the corners. Gas is puffed inside the rings, where it is ionized by the electron cloud. As ions fall down the potential well, the electric field does work on them, heating them to fusion conditions. The ions build up speed and can slam together in the center and fuse. Ions are electrostatically confined, raising the density and increasing the fusion rate.
The magnetic energy density required to confine electrons is far smaller than that required to directly confine ions, as is done in other fusion projects such as ITER.
Other behavior
Single-electron motion
As an electron enters a magnetic field, it feels a Lorentz force and corkscrews. The radius of this motion is the gyroradius. As it moves, it loses some energy as x-rays whenever its velocity changes. The electron spins faster and tighter in denser fields as it enters the MaGrid. Inside the MaGrid, single electrons travel straight through the null point, due to their infinite gyroradius in regions of no magnetic field. Next, they head towards the edges of the MaGrid field and corkscrew tighter along the denser magnetic field lines. This is typical electron cyclotron motion. Their gyroradius shrinks, and when they hit a dense magnetic field they can be reflected by the magnetic mirror effect. Electron trapping has been measured in polywells with Langmuir probes.
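A quick estimate of the electron gyroradius r = m v / (e B) described above; the kinetic energy and field strength used here are hypothetical and purely illustrative.

```python
import math

M_E = 9.109e-31   # electron mass, kg
Q_E = 1.602e-19   # elementary charge, C

def electron_gyroradius_m(energy_eV: float, B_tesla: float) -> float:
    """Gyroradius of an electron with the given kinetic energy moving across field B."""
    v = math.sqrt(2 * energy_eV * Q_E / M_E)   # non-relativistic speed
    return M_E * v / (Q_E * B_tesla)

# 10 keV electron in a 0.1 T field (hypothetical values): a few millimetres
print(electron_gyroradius_m(10e3, 0.1))
```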
The polywell attempts to confine the ions and electrons through two different means, borrowed from fusors and magnetic mirrors. The electrons are easier to confine magnetically because they have so much less mass than the ions. The machine confines ions using an electric field in the same way a fusor confines the ions: in the polywell, the ions are attracted to the negative electron cloud in the center. In the fusor, they are attracted to a negative wire cage in the center.
Plasma recirculation
Plasma recirculation would significantly improve the function of these machines. It has been argued that efficient recirculation is the only way they can be viable. Electrons or ions move through the device without striking a surface, reducing conduction losses. Bussard stressed this; specifically emphasizing that electrons need to move through all cusps of the machine.
Models of energy distribution
The ion and electron energy distributions have not been determined conclusively. The energy distribution of the plasma can be measured using a Langmuir probe. This probe absorbs charge from the plasma as its voltage changes, producing an I–V curve. From this signal, the energy distribution can be calculated. The energy distribution both drives and is driven by several physical rates: the electron and ion loss rates, the rate of energy loss by radiation, the fusion rate and the rate of non-fusion collisions. The collision rate may vary greatly across the system:
At the edge: where ions are slow and the electrons are fast.
At the center: where ions are fast and electrons are slow.
Critics claimed that both the electron and ion populations have bell-curve distributions; that is, that the plasma is thermalized. The justification given is that the longer the electrons and ions move inside the polywell, the more interactions they undergo, leading to thermalization. This model for the ion distribution is shown in Figure 5.
Supporters modeled a nonthermal plasma. The justification is the high amount of scattering in the device center. Without a magnetic field, electrons scatter in this region. They claimed that this scattering leads to a monoenergetic distribution, like the one shown in Figure 6. This argument is supported by 2 dimensional particle-in-cell simulations. Bussard argued that constant electron injection would have the same effect. Such a distribution would help maintain a negative voltage in the center, improving performance.
Considerations for net power
Fuel type
Nuclear fusion refers to nuclear reactions that combine lighter nuclei to become heavier nuclei. All chemical elements can be fused; for elements with fewer protons than iron, this process changes mass into energy that can potentially be captured to provide fusion power.
The probability of a fusion reaction occurring is controlled by the cross section of the fuel, which is in turn a function of its temperature. The easiest nuclei to fuse are deuterium and tritium. Their fusion occurs when the ions reach 4 keV (kiloelectronvolts), or about 45 million kelvins. The polywell would achieve this by accelerating an ion with a charge of 1 through a 4,000 volt potential drop. The high cost, short half-life and radioactivity of tritium make it difficult to work with.
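A sanity check of the figures above: an ion of charge 1 falling through 4,000 volts gains 4 keV, which corresponds to a temperature of a few tens of millions of kelvins, in line with the "about 45 million kelvins" quoted. A minimal sketch of that conversion follows.

```python
K_B = 1.380649e-23       # Boltzmann constant, J/K
Q_E = 1.602176634e-19    # elementary charge, C

def ion_energy_keV(charge_number: int, well_voltage_V: float) -> float:
    """Kinetic energy (keV) gained by an ion falling through the given voltage."""
    return charge_number * well_voltage_V / 1000.0

def energy_keV_to_kelvin(E_keV: float) -> float:
    """Temperature equivalent of a particle energy, T = E / k_B."""
    return E_keV * 1000.0 * Q_E / K_B

E = ion_energy_keV(1, 4000)          # 4 keV
print(E, energy_keV_to_kelvin(E))    # roughly 4.6e7 K
```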
The second easiest reaction is to fuse deuterium with itself. Because of its low cost, deuterium is commonly used by Fusor amateurs. Bussard's polywell experiments were performed using this fuel. Fusion of deuterium or tritium produces a fast neutron, and therefore produces radioactive waste. Bussard's choice was to fuse boron-11 with protons; this reaction is aneutronic (does not produce neutrons). An advantage of p-11B as a fusion fuel is that the primary reactor output would be energetic alpha particles, which can be directly converted to electricity at high efficiency using direct energy conversion. Direct conversion has achieved a 48% power efficiency against 80–90% theoretical efficiency.
Lawson criterion
The energy generated by fusion inside a hot plasma cloud can be found with the following equation:
P_fus = n_A n_B ⟨σv⟩ E_fus
where:
P_fus is the fusion power density (energy per time per volume),
n is the number density of species A or B (particles per volume),
⟨σv⟩ is the product of the collision cross-section σ (which depends on the relative velocity) and the relative velocity v of the two species, averaged over all the particle velocities in the system,
E_fus is the energy released per fusion reaction.
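A minimal numerical sketch of the power-density expression above. The reactivity ⟨σv⟩ and the densities are hypothetical placeholder values, not measured polywell parameters; the D-T energy yield of 17.6 MeV per reaction is used for E_fus.

```python
Q_E = 1.602e-19  # joules per electronvolt

def fusion_power_density(n_a: float, n_b: float, sigma_v: float, e_fus_MeV: float) -> float:
    """Volumetric fusion power (W/m^3) = n_A * n_B * <sigma v> * E_fus."""
    return n_a * n_b * sigma_v * e_fus_MeV * 1e6 * Q_E

# Hypothetical 50/50 D-T mix: n = 1e20 m^-3 each, <sigma v> ~ 1e-22 m^3/s, 17.6 MeV/reaction
print(fusion_power_density(1e20, 1e20, 1e-22, 17.6))   # ~3 MW/m^3 for these inputs
```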
Energy varies with temperature, density, collision speed and fuel. To reach net power production, reactions must occur rapidly enough to make up for energy losses. Plasma clouds lose energy through conduction and radiation. Conduction is when ions, electrons or neutrals touch a surface and escape. Energy is lost with the particle. Radiation is when energy escapes as light. Radiation increases with temperature. To get net power from fusion, these losses must be overcome. This leads to an equation for power output.
Net Power = Efficiency × (Fusion − Radiation Loss − Conduction Loss)
Net Power — power output
Efficiency — fraction of energy needed to drive the device and convert it to electricity.
Fusion — energy generated by the fusion reactions.
Radiation — energy lost as light, leaving the plasma.
Conduction — energy lost, as mass leaves the plasma.
Lawson used this equation to estimate conditions for net power based on a Maxwellian cloud.
However, the Lawson criterion does not apply to polywells if Bussard's conjecture that the plasma is nonthermal is correct. Lawson stated in his founding report: "It is of course easy to postulate systems in which the velocity distribution of the particle is not Maxwellian. These systems are outside the scope of this report." He also ruled out the possibility of igniting a plasma in which the electrons are colder than the ions: "Nothing may be gained by using a system in which electrons are at a lower temperature [than ions]. The energy loss in such a system by transfer to the electrons will always be greater than the energy which would be radiated by the electrons if they were the [same] temperature."
Criticism
There are several general criticisms of the Polywell:
The heating mechanism breaks the quasi-neutral assumption. It is not easy or possible to concentrate negative charge robustly or for any long period.
The plasma does not behave diamagnetically as presupposed. This challenges the basic trapping effect.
Without a solid heating method, the plasma loses huge amounts of energy to radiation, becoming too cold to fuse (see Rider work below).
With ions flying in from all directions, there is a buildup in angular momentum, leading to lots of ions being scattered out of the trap (see Nevins' work below).
Rider Critique
Todd Rider (a biological engineer and former student of plasma physics) calculated that X-ray radiation losses with this fuel would exceed fusion power production by at least 20%. Rider's model used the following assumptions:
The plasma was quasineutral. Therefore, positive and negative charges were equally mixed together.
The fuel was evenly mixed throughout the volume.
The plasma was isotropic, meaning that its behavior was the same in any given direction.
The plasma had a uniform energy and temperature throughout the cloud.
The plasma was an unstructured Gaussian sphere, with a strongly converged core that represented a small (~1%) part of the total volume. Nevins challenged this assumption, stating that the particles would build up angular momentum, causing the dense core to degrade. The loss of density inside the core would reduce fusion rates.
The potential well was broad and flat.
Based on these assumptions, Rider used general equations to estimate the rates of different physical effects. These included the loss of ions to up-scattering, the ion thermalization rate, the energy loss due to X-ray radiation and the fusion rate. His conclusions were that the device suffered from "fundamental flaws".
By contrast, Bussard argued that the plasma had a different structure, temperature distribution and well profile. These characteristics have not been fully measured and are central to the device's feasibility. Bussard's calculations indicated that the bremsstrahlung losses would be much smaller. According to Bussard the high speed and therefore low cross section for Coulomb collisions of the ions in the core makes thermalizing collisions very unlikely, while the low speed at the rim means that thermalization there has almost no impact on ion velocity in the core. Bussard calculated that a polywell reactor with a radius of 1.5 meters would produce net power fusing deuterium.
Other studies disproved some of the assumptions made by Rider and Nevins, arguing the real fusion rate and the associated recirculating power (needed to overcome the thermalizing effect and sustain the non-Maxwellian ion profile) could be estimated only with a self-consistent collisional treatment of the ion distribution function, lacking in Rider's work.
Energy capture
It has been proposed that energy may be extracted from polywells using heat capture or, in the case of aneutronic fusion like D-3He or p-11B, direct energy conversion, though that scheme faces challenges. The energetic alpha particles (up to a few MeV) generated by the aneutronic fusion reaction would exit the MaGrid through the six axial cusps as cones (spread ion beams). Direct conversion collectors inside the vacuum chamber would convert the alpha particles' kinetic energy to a high-voltage direct current. The alpha particles must slow down before they contact the collector plates to realize high conversion efficiency. In experiments, direct conversion has demonstrated a conversion efficiency of 48%.
History
In the late 1960s several investigations studied polyhedral magnetic fields as a possible way to confine a fusion plasma. The first proposal to combine this configuration with an electrostatic potential well in order to improve electron confinement was made by Oleg Lavrentiev in 1975. The idea was picked up by Robert Bussard in 1983. His 1989 patent application cited Lavrentiev, although in 2006 he appeared to claim to have (re)discovered the idea independently.
HEPS
Research was funded first by the Defense Threat Reduction Agency beginning in 1987 and later by DARPA. This funding resulted in a machine known as the high energy power source (HEPS) experiment, built by Directed Technologies Inc. It was a large machine (1.9 m across), with the rings outside the vacuum chamber. This machine performed poorly because the magnetic fields sent electrons into the walls, driving up conduction losses. These losses were attributed to poor electron injection. The US Navy began providing low-level funding to the project in 1992. Krall published results in 1994.
Bussard, who had been an advocate for Tokamak research, turned to advocate for this concept, so that the idea became associated with his name. In 1995 he sent a letter to the US Congress stating that he had only supported Tokamaks in order to get fusion research sponsored by the government, but he now believed that there were better alternatives.
EMC2, Inc.
Bussard founded Energy/Matter Conversion Corporation, Inc. (aka EMC2) in 1985, and after the HEPS program ended, the company continued its research. Successive machines were made, evolving from WB-1 to WB-8. The company won an SBIR I grant in 1992–93 and an SBIR II grant in 1994–95, both from the US Navy. In 1993, it received a grant from the Electric Power Research Institute. In 1994, the company received small grants from NASA and LANL. Starting in 1999, the company was primarily funded by the US Navy.
WB-1 had six conventional magnets in a cube. This device was 10 cm across. WB-2 used coils of wires to generate the magnetic field. Each electromagnet had a square cross section that created problems. The magnetic fields drove electrons into the metal rings, raising conduction losses and electron trapping. This design also suffered from "funny cusp" losses at the joints between magnets. WB-6 attempted to address these problems, by using circular rings and spacing further apart. The next device, PXL-1, was built in 1996 and 1997. This machine was 26 cm across and used flatter rings to generate the field. From 1998 to 2005 the company built a succession of six machines: WB-3, MPG-1,2, WB-4, PZLx-1, MPG-4 and WB-5. All of these reactors were six magnet designs built as a cube or truncated cube. They ranged from 3 to 40 cm in radius.
Initial difficulties in spherical electron confinement led to the termination of the research project in 2005. However, Bussard reported a fusion rate of 10⁹ per second running D-D fusion reactions at only 12.5 kV (based on detecting nine neutrons in five tests, giving a wide confidence interval). He stated that the fusion rate achieved by WB-6 was roughly 100,000 times greater than what Farnsworth achieved at similar well depth and drive conditions. By comparison, researchers at University of Wisconsin–Madison reported a neutron rate of up to 5×10⁹ per second at voltages of 120 kV from an electrostatic fusor without magnetic fields.
Bussard asserted that, with superconducting coils, the only significant energy-loss channel would be electron losses proportional to the surface area. He also stated that the density would scale with the square of the field (constant beta conditions), and that the maximum attainable magnetic field would scale with the radius. Under those conditions, the fusion power produced would scale with the seventh power of the radius, and the energy gain would scale with the fifth power. While Bussard did not publicly document the reasoning underlying this estimate, if true, it would enable a model only ten times larger to be useful as a fusion power plant.
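A minimal sketch of how those stated scalings combine, assuming constant beta, a field that grows linearly with radius, and losses proportional to surface area (Bussard's own derivation was not published):

```latex
% Sketch only: combining the scalings stated above (constant beta, B grows with r,
% losses scale with surface area).
\[
n \propto B^{2}, \qquad B \propto r
\;\;\Rightarrow\;\;
P_{\mathrm{fus}} \propto n^{2} V \propto B^{4} r^{3} \propto r^{7},
\qquad
P_{\mathrm{loss}} \propto r^{2}
\;\;\Rightarrow\;\;
Q = \frac{P_{\mathrm{fus}}}{P_{\mathrm{loss}}} \propto r^{5}.
\]
```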
WB-6
Funding became tighter and tighter. According to Bussard, "The funds were clearly needed for the more important War in Iraq." An extra $900k of Office of Naval Research funding allowed the program to continue long enough to reach WB-6 testing in November 2005. WB-6 had rings with circular cross sections, spaced apart at the joints. This reduced the metal surface area left unprotected by magnetic fields. These changes dramatically improved system performance, leading to more electron recirculation and better electron confinement in a progressively tighter core. This machine produced a fusion rate of 10⁹ reactions per second, based on a total of nine neutrons detected in five tests, giving a wide confidence interval. Drive voltage on the WB-6 tests was about 12.5 kV, with a resulting potential well depth of about 10 kV, so deuterium ions could have a maximum of 10 keV of kinetic energy in the center. By comparison, a fusor running deuterium fusion at 10 kV would produce a fusion rate almost too small to detect; Hirsch reported a fusion rate this high only by driving his machine with a 150 kV drop between the inside and outside cages. Hirsch also used deuterium and tritium, a much easier fuel to fuse because of its higher nuclear cross section.
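The quoted rate rests on counting statistics for only nine detected neutrons. A short sketch of the corresponding exact (Garwood) 95% confidence interval, assuming pure Poisson counting and setting aside detector efficiency and geometry (which would be needed to turn counts into a fusion rate), illustrates why the uncertainty is wide:

```python
# Hypothetical sketch, assuming Poisson counting statistics and SciPy available.
from scipy.stats import chi2

counts = 9  # total neutrons detected across the five WB-6 tests

# Exact two-sided 95% CI on the expected number of counts (Garwood method)
lower = 0.5 * chi2.ppf(0.025, 2 * counts)
upper = 0.5 * chi2.ppf(0.975, 2 * (counts + 1))

print(f"95% CI on expected counts: {lower:.1f} to {upper:.1f}")
# -> roughly 4.1 to 17.1, i.e. close to a factor of two either way,
#    so the inferred 1e9 fusions/s carries a correspondingly wide uncertainty.
```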
While the WB-6 pulses were sub-millisecond, Bussard felt the physics should represent steady state. A last-minute test of WB-6 ended prematurely when the insulation on one of the hand-wound electromagnets burned through, destroying the device.
Efforts to restart funding
With no more funding during 2006, the project was stalled. This ended the US Navy's 11-year embargo on publication and publicizing that had been in place between 1994 and 2005. The company's military-owned equipment was transferred to SpaceDev, which hired three of the team's researchers. After the transfer, Bussard tried to attract new investors, giving talks to raise interest in his design. He gave a talk at Google entitled "Should Google Go Nuclear?", presented and published an overview at the 57th International Astronautical Congress in October 2006, presented at an internal Yahoo! Tech Talk on April 10, 2007, and spoke on the internet talk radio show The Space Show on May 8, 2007. Bussard had plans for a WB-8 that would be a higher-order polyhedron with 12 electromagnets; however, this design was not used in the actual WB-8 machine.
Bussard believed that the WB-6 machine had demonstrated progress and that no intermediate-scale models would be needed. He noted, "We are probably the only people on the planet who know how to make a real net power clean fusion system." He proposed to rebuild WB-6 more robustly to verify its performance. After publishing the results, he planned to convene a conference of experts in the field in an attempt to get them behind his design. The first step in that plan was to design and build two more small-scale devices (WB-7 and WB-8) to determine which full-scale machine would be best. He wrote: "The only small scale machine work remaining, which can yet give further improvements in performance, is test of one or two WB-6-scale devices but with "square" or polygonal coils aligned approximately (but slightly offset on the main faces) along the edges of the vertices of the polyhedron. If this is built around a truncated dodecahedron, near-optimum performance is expected; about 3–5 times better than WB-6." Bussard died on October 6, 2007, from multiple myeloma at age 79.
In 2007, Steven Chu, Nobel laureate and former United States Secretary of Energy, answered a question about polywell at a tech talk at Google. He said: "So far, there's not enough information so [that] I can give an evaluation of the probability that it might work or not...But I'm trying to get more information."
Bridge funding 2007–09
Reassembling team
In August 2007, EMC2 received a $1.8M U.S. Navy contract. Before Bussard's death in October 2007, Dolly Gray, who co-founded EMC2 with Bussard and served as its president and CEO, helped assemble scientists in Santa Fe to carry on the work. The group was led by Richard Nebel and included Princeton-trained physicist Jaeyoung Park; both physicists were on leave from LANL. The group also included Mike Wray, the physicist who ran the key 2005 tests, and Kevin Wray, the computer specialist for the operation.
WB-7
WB-7 was constructed in San Diego and shipped to the EMC2 testing facility. Like prior editions, the device was designed by engineer Mike Skillicorn, and its design is similar to WB-6's. WB-7 achieved "first plasma" in early January 2008. In August 2008, the team finished the first phase of their experiment and submitted the results to a peer review board. Based on this review, federal funders agreed the team should proceed to the next phase. Nebel said "we have had some success", referring to the team's effort to reproduce the promising results obtained by Bussard. "It's kind of a mix", Nebel reported. "We're generally happy with what we've been getting out of it, and we've learned a tremendous amount", he added.
2008
In September 2008 the Naval Air Warfare Center publicly pre-solicited a contract for research on an Electrostatic "Wiffle Ball" Fusion Device. In October 2008 the US Navy publicly pre-solicited two more contracts with EMC2 as the preferred supplier. These two tasks were to develop better instrumentation and to develop an ion injection gun. In December 2008, following many months of review by the expert panel of the final WB-7 results, Nebel commented that "There's nothing in [the research] that suggests this will not work", but "That's a very different statement from saying that it will work."
2009 to 2014
2009
In January 2009 the Naval Air Warfare Center pre-solicited another contract for "modification and testing of plasma wiffleball 7" that appeared to be funding to install the instrumentation developed in a prior contract, install a new design for the connector (joint) between coils, and operate the modified device. The modified unit was called WB-7.1. This pre-solicitation started as a $200k contract but the final award was for $300k. In April 2009, DoD published a plan to provide EMC2 a further $2 million as part of the American Recovery and Reinvestment Act of 2009. The citation in the legislation was labelled as Plasma Fusion (Polywell) – Demonstrate fusion plasma confinement system for shore and shipboard applications; Joint OSD/USN project. The Recovery Act funded the Navy for $7.86M to construct and test a WB-8. The Navy contract had an option for an additional $4.46M. The new device increased the magnetic field strength eightfold over WB-6.
2010
The team built WB-8 and the computational tools to analyze and understand the data from it. The team relocated to San Diego.
2011
Jaeyoung Park became president. In a May interview, Park commented that "This machine [WB8] should be able to generate 1,000 times more nuclear activity than WB-7, with about eight times more magnetic field." The first WB-8 plasma was generated on November 1, 2010. By the third quarter, over 500 high-power plasma shots had been conducted.
2012
As of August 15, the Navy agreed to fund EMC2 with an additional $5.3 million over two years to work on pumping electrons into the wiffleball. They planned to integrate a pulsed power supply to support the electron guns (100+ A, 10 kV). WB-8 operated at 0.8 tesla. Review of the work produced a recommendation to continue and expand the effort, stating: "The experimental results to date were consistent with the underlying theoretical framework of the polywell fusion concept and, in the opinion of the committee, merited continuation and expansion."
Going public
2014
In June EMC2 demonstrated for the first time that the electron cloud becomes diamagnetic in the center of a magnetic cusp configuration when beta is high, resolving an earlier conjecture. Whether the plasma is thermalized remains to be demonstrated experimentally. Park presented these findings at various universities, the Annual 2014 Fusion Power Associates meeting and the 2014 IEC conference.
2015
On January 22, EMC2 presented at Microsoft Research. EMC2 planned a three-year, $30 million commercial research program to prove that the Polywell can work. On March 11, the company filed a patent application that refined the ideas in Bussard's 1985 patent. The article "High-Energy Electron Confinement in a Magnetic Cusp Configuration" was published in Physical Review X.
2016
On April 13, Next Big Future published an article on information of the Wiffle Ball reactor dated to 2013 through the Freedom of Information Act.
On May 2, Jaeyoung Park delivered a lecture at Khon Kaen University in Thailand, claiming that the world has so underestimated the timetable and impact of practical, economic fusion power that its ultimate arrival will be highly disruptive. Park stated that he expected to present "final scientific proof of principle for the polywell technology around 2019-2020", and expects "a first generation commercial fusion reactor being developed by 2030 and then mass production and commercialisation of the technology in the 2030s. This is approximately 30 years faster than expected by the International Thermonuclear Experimental Reactor (ITER) project. It would also be tens of billions of dollars cheaper."
2018
In May 2018 Park and Nicholas Krall filed WIPO Patent WO/2018/208953. "Generating nuclear fusion reactions with the use of ion beam injection in high pressure magnetic cusp devices," which described the polywell device in detail.
University of Sydney experiments
In June 2019, the results of long-running experiments at the University of Sydney (USyd) were published in PhD thesis form by Richard Bowden-Reid. Using an experimental machine built at the university, the team probed the formation of the virtual electrodes.
Their work demonstrated that little or no trace of virtual electrode formation could be found. This left a mystery; both their machine and previous experiments showed clear and consistent evidence of the formation of a potential well that was trapping ions, which was previously ascribed to the formation of the electrodes. Exploring this problem, Bowden-Reid developed new field equations for the device that explained the potential well without electrode formation, and demonstrated that this matched both their results and those of previous experiments.
Further, exploring the overall mechanism of the virtual electrode concept demonstrated that its interactions with the ions and with itself would make it "leak" at a very high rate. Assuming the plasma densities and energies required for net energy production, it was calculated that new electrons would have to be supplied at an unfeasible rate, equivalent to a current of 200,000 amperes.
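For scale, the quoted current converts to an electron supply rate as follows; this is simple charge arithmetic added here for illustration, not a figure taken from the thesis:

```latex
% Charge arithmetic only; the 200,000 A figure is the one quoted above.
\[
\frac{dN_{e}}{dt} = \frac{I}{e}
= \frac{2\times10^{5}\ \mathrm{C\,s^{-1}}}{1.602\times10^{-19}\ \mathrm{C}}
\approx 1.2\times10^{24}\ \text{electrons per second}.
\]
```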
Related projects
Prometheus Fusion Perfection
Mark Suppes built a polywell in Brooklyn. He was the first amateur to detect electron trapping using a Langmuir probe inside a polywell. He presented at the 2012 LIFT conference and the 2012 WIRED conference. The project officially ended in July 2013 due to a lack of funding.
University of Sydney
The University of Sydney in Australia conducted polywell experiments, leading to five papers in Physics of Plasmas. They also published two PhD theses and presented their work at IEC Fusion conferences.
A May 2010 paper discussed a small device's ability to capture electrons, positing that the machine had an optimal magnetic field strength that maximized electron trapping. The paper analyzed polywell magnetic confinement using analytical solutions and simulations, linking polywell confinement to magnetic mirror theory. The 2011 work used particle-in-cell simulations to model particle motion in polywells with a small electron population; electrons behaved in a similar manner to particles in a biconic cusp.
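The kind of single-particle calculation underlying such studies can be sketched as follows. This is a generic illustration using an idealized linear spindle-cusp vacuum field and the standard Boris pusher, not the USyd group's actual code; the field gradient, time step and initial conditions are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch: one electron in an idealized spindle-cusp field B = g*(x, y, -2z),
# which is divergence- and curl-free near the field null, advanced with the Boris pusher.

Q_E, M_E = -1.602e-19, 9.109e-31        # electron charge (C) and mass (kg)

def cusp_B(r, g=0.5):
    """Linear approximation of a spindle-cusp field near the null; g in T/m (assumed)."""
    x, y, z = r
    return np.array([g * x, g * y, -2.0 * g * z])

def boris_step(r, v, dt):
    """Advance position and velocity one step with the Boris rotation (no E field here)."""
    B = cusp_B(r)
    t = (Q_E * dt / (2.0 * M_E)) * B        # half-step rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)
    v_new = v + np.cross(v_prime, s)        # pure rotation: speed is conserved
    return r + v_new * dt, v_new

r = np.array([0.01, 0.0, 0.02])             # start 1-2 cm from the field null (assumed)
v = np.array([1.0e6, 5.0e5, -1.0e6])        # electron velocity of a few eV (assumed, m/s)
dt = 1.0e-11                                # small enough to resolve the local gyro-period

for _ in range(20000):
    r, v = boris_step(r, v, dt)

print("final position (m):", r)
print("final speed (m/s), conserved by the Boris rotation:", np.linalg.norm(v))
```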
A 2013 paper measured a negative voltage inside a 4-inch aluminum polywell. Tests included measuring an internal beam of electrons, comparing the machine with and without a magnetic field, measuring the voltage at different locations and comparing voltage changes to the magnetic and electric field strength.
A 2015 paper entitled "Fusion in a magnetically-shielded-grid inertial electrostatic confinement device" presented a theory for a gridded inertial electrostatic confinement (IEC) fusion system that shows a net energy gain is possible if the grid is magnetically shielded from ion impact. The analysis indicated that better than break-even performance is possible even in a deuterium-deuterium system at bench-top scales. The proposed device had the unusual property that it can avoid both the cusp losses of traditional magnetic fusion systems and the grid losses of traditional IEC configurations.
Iranian Nuclear Science and Technology Research Institute
In November 2012, Trend News Agency reported that the Atomic Energy Organization of Iran had allocated "$8 million" to inertial electrostatic confinement research and that about half had been spent. The funded group published a paper in the Journal of Fusion Energy, stating that particle-in-cell simulations of a polywell had been conducted. The study suggested that well depths and ion focus control can be achieved by variations of field strength, and referenced older research with traditional fusors. The group had run a fusor in continuous mode at −140 kV and 70 mA of current with D-D fuel, producing 2×10⁷ neutrons per second.
University of Wisconsin
Researchers performed Vlasov–Poisson, particle-in-cell simulation work on the polywell. This was funded through the National Defense Science and Engineering Graduate Fellowship and was presented at the 2013 American Physical Society conference.
Convergent Scientific, Inc.
Convergent Scientific, Inc. (CSI) is an American company founded in December 2010 and based in Huntington Beach, California. They tested their first polywell design, the Model 1, in steady-state operation from January to late summer 2012. The MaGrid was made of a unique diamond-shaped hollow wire, through which an electric current and a liquid coolant flowed. The company has been working to build a small-scale polywell fusing deuterium, filed several patents, and in the fall of 2013 gave a series of web-based investor pitches. The presentations mention encountering plasma instabilities, including the diocotron, two-stream and Weibel instabilities. The company aims to make and sell nitrogen-13 for PET scans.
Radiant Matter Research
Radiant Matter is a Dutch organization that has built fusors and has plans to build a polywell.
ProtonBoron
ProtonBoron is an organization that plans to build a proton-boron polywell.
Progressive Fusion Solutions
Progressive Fusion Solutions is an IEC fusion research startup researching fusor- and polywell-type devices.
Fusion One Corporation
Fusion One Corporation was a US organization founded by Dr. Paul Sieck (former lead physicist of EMC2), Dr. Scott Cornish of the University of Sydney, and Randall Volberg. It ran from 2015 to 2017. The company developed a magneto-electrostatic reactor concept named "F1" that was based in part on the polywell. It introduced a system of externally mounted electromagnet coils with internally mounted cathode repeller surfaces, intended to recover energy and particles that would otherwise be lost through the magnetic cusps. In response to Todd Rider's 1995 power-balance conclusions, a new analytical model was developed based on this recovery function, as well as a more accurate quantum-relativistic treatment of bremsstrahlung losses that was not present in Rider's analysis. Version 1 of the analytical model was developed by senior theoretical physicist Dr. Vladimir Mirnov and indicated ample multiples of net gain with D-T fuel and sufficient multiples with D-D to be used for generating electricity. These preliminary results were presented at the ARPA-E ALPHA 2017 Annual Review Meeting. Phase 2 of the model removed key assumptions in the Rider analysis by incorporating a self-consistent treatment of the ion energy distribution (Rider assumed a purely Maxwellian distribution) and of the power required to maintain that distribution and the ion population. The results yielded an energy distribution that was non-thermal but closer to Maxwellian than to monoenergetic. The input power required to maintain the distribution was calculated to be excessive, with ion-ion thermalization a dominant loss channel. With these additions, a pathway to commercial electricity generation was no longer deemed feasible.
See also
China Fusion Engineering Test Reactor
Dense plasma focus
Fusion Industry Association
Fusion power § History of research
General Fusion
George H. Miley
Inertial electrostatic confinement
List of fusion experiments
Magnetized target fusion
Pinch (plasma physics)
Spherical Tokamak for Energy Production
Stellarator
Timeline of nuclear fusion
Tokamak
TAE Technologies
Z-pinch (zeta pinch)
References
External links
ProtonBoron
Polywell Talk At Microsoft Research
EMC2 website
Polywell Nuclear Fusion
Video of Bussard's presentation to Google
Should Google Go Nuclear?(transcript) Illustrated transcript of Bussard's Google presentation
Robert Bussard on IEC Fusion Power & The Polywell Reactor Transcript of Bussard Polywell Interview from May 10, 2007
Presentation at International Space Development Conference (ISDC). Dallas, May 2007
Links Compendium of informative links related to polywell fusion
List of technical papers and references
Graphical explanation of a polywell
Talk-Polywell.org BBS for discussing polywell
University of Wisconsin–Madison Introduction to IEC including the polywell
Latest Fusion developments (WB-7 – June 2008) based on the work of Dr. Robert Bussard
Prometheus Fusion – A blog describing amateur experiments aimed at creating a polywell
Progressive Fusion Solutions - developing fusion with a fresh outlook
The Polywell Blog – An amateur blog discussing the polywell
– Mark Suppes talk at Wired 2012 on the polywell
2015 Jaeyoung Park video
Fusion power
Soviet inventions | Polywell | [
"Physics",
"Chemistry"
] | 8,418 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
8,152,214 | https://en.wikipedia.org/wiki/Asymmetric%20catalytic%20oxidation | Asymmetric catalytic oxidation is a technique of oxidizing various substrates to give an enantio-enriched product using a catalyst. Typically, but not necessarily, asymmetry is induced by the chirality of the catalyst. Typically, but again not necessarily, the methodology applies to organic substrates. Functional groups that can be prochiral and readily susceptible to oxidation include certain alkenes and thioethers. Challenging but pervasive prochiral substrates are C-H bonds of alkanes. Instead of introducing oxygen, some catalysts, biological and otherwise, enantioselectively introduce halogens, another form of oxidation.
Reactions according to substrate
Hydrocarbons
Typically a prochiral C-H bond is converted to a chiral alcohol. Many examples of this important reaction result from the action of cytochrome P450, which allows these enzymes to process prodrugs and xenobiotics. Alpha-ketoglutarate-dependent hydroxylases also catalyze hydroxylations.
Alkenes
The oxidation of alkenes has attracted much attention, and asymmetric epoxidation is often feasible. One named reaction is the Jacobsen epoxidation, which uses a manganese-salen complex as the chiral catalyst and NaOCl as the oxidant. Another is the Sharpless epoxidation, which is especially applicable to allylic alcohols and uses a catalyst derived from titanium isopropoxide and diethyl tartrate, with tert-butyl hydroperoxide as the oxidant. Instead of asymmetric epoxidation, alkenes are also susceptible to asymmetric dihydroxylation, which uses osmium tetroxide together with chiral N-heterocyclic (cinchona alkaloid-derived) ligands; this conversion, the Sharpless asymmetric dihydroxylation, was recognized, together with the Sharpless epoxidation, by a Nobel Prize. Metal-free asymmetric olefin oxidations have also been developed; for example, the Shi epoxidation of alkenes using Oxone can be made asymmetric with a fructose-derived catalyst.
Sulfur compounds
The enantioselective oxidation of unsymmetrical thioethers to sulfoxides is well established. The common over-the-counter medication esomeprazole (brand name: Nexium) involves such an asymmetric oxidation as the final step of its synthesis. Even disulfides are susceptible to oxidation to chiral thiosulfinates.
References
Catalysis | Asymmetric catalytic oxidation | [
"Chemistry"
] | 514 | [
"Catalysis",
"Chemical kinetics"
] |
8,152,998 | https://en.wikipedia.org/wiki/Work%20output | In physics, work output is the work done by a simple machine, compound machine, or any type of engine model. In common terms, it is the energy output, which for simple machines is always less than the energy input, even though the forces may be drastically different.
In thermodynamics, work output can refer to the thermodynamic work done by a heat engine, in which case the work output must be less than the heat input because some energy is rejected as waste heat, as determined by the engine's efficiency.
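For a heat engine the relation can be written compactly as below; the reservoir temperatures and heat input in the example are illustrative numbers only.

```latex
% Heat-engine work output, bounded by the Carnot efficiency.
\[
W_{\mathrm{out}} = \eta\, Q_{\mathrm{in}}, \qquad
\eta \le \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}.
\]
\[
\text{Example: } Q_{\mathrm{in}} = 1000\ \mathrm{J},\; T_h = 600\ \mathrm{K},\; T_c = 300\ \mathrm{K}
\;\Rightarrow\; W_{\mathrm{out}} \le 500\ \mathrm{J}.
\]
```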
References
Thermodynamics | Work output | [
"Physics",
"Chemistry",
"Mathematics"
] | 120 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics",
"Dynamical systems"
] |